As we learned in the previous section, each Agent needs an AI Model at its core, and the most common choice for that model is a Large Language Model (LLM).
We therefore need to understand what LLMs are and how they power Agents.
This section offers a concise technical explanation, but if you're not yet familiar with LLMs, you should first check out our free Natural Language Processing Course.
A Large Language Model is a type of AI model that excels at understanding and generating human language. LLMs are trained on vast amounts of text data, allowing them to learn patterns, nuances, and structure in language. These models typically consist of billions of parameters.
Most LLMs are built on the Transformer architecture—a deep learning framework that has gained significant interest since the release of BERT from Google in 2018.
There are 3 types of Transformers:
Encoders
An encoder-based Transformer takes text (or other data) as input and outputs a dense representation (or embedding) of that text.
Decoders
A decoder-based Transformer focuses on generating new tokens to complete a sequence, token by token.
Seq2Seq (Encoder–Decoder)
A sequence-to-sequence Transformer combines an encoder and a decoder. The encoder first processes the input sequence into a context representation, then the decoder generates an output sequence.
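To make these three families concrete, here is a small sketch using the transformers pipeline API. The model choices (bert-base-uncased, gpt2, t5-small) are just illustrative defaults, not the course's picks:

```python
from transformers import pipeline

# Encoder: turns text into a dense representation (embedding)
encoder = pipeline("feature-extraction", model="bert-base-uncased")
embedding = encoder("The capital of France is Paris.")
print(len(embedding[0][0]))  # hidden size per token, 768 for BERT-base

# Decoder: generates new tokens to continue a sequence
decoder = pipeline("text-generation", model="gpt2")
print(decoder("The capital of France is", max_new_tokens=5)[0]["generated_text"])

# Encoder-decoder (seq2seq): maps an input sequence to an output sequence
seq2seq = pipeline("translation_en_to_fr", model="t5-small")
print(seq2seq("The capital of France is Paris.")[0]["translation_text"])
```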
Although Large Language Models come in various forms, most people think of LLMs as decoder-based models with billions of parameters. Here are some of the most well-known LLMs:
Model | Provider |
---|---|
GPT-4 | OpenAI |
Llama 3 | Meta (Facebook AI Research) |
DeepSeek-R1 | DeepSeek |
SmolLM2 | Hugging Face |
Gemma | Google |
Mistral | Mistral |
The underlying principle of an LLM is simple yet highly effective: its objective is to predict the next token in a sequence. We use the term “token” rather than “word” because not every token corresponds to a whole word.
For example, while English has an estimated 600,000 words, an LLM might have a vocabulary of around 32,000 tokens (as is the case with Llama 2). This tokenization often works on sub-word units.
For instance, consider how the tokens “interest” and “##ing” can be combined to form “interesting” or “##ed” can be appended to form “interested.”
You can experiment with different tokenizers yourself to see how they split text into sub-word units.
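If you'd rather poke at tokenization in code, here is a minimal sketch with the transformers library; the SmolLM2 checkpoint is one convenient choice, but any Hub model ID works:

```python
from transformers import AutoTokenizer

# Load the tokenizer that ships with SmolLM2
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

text = "Interesting, isn't it?"
print(tokenizer.tokenize(text))  # the sub-word pieces the model sees
ids = tokenizer.encode(text)
print(ids)                       # the integer IDs actually fed to the model
print(tokenizer.decode(ids))     # round-trip back to the original text
```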
Each LLM has some special tokens specific to the model. The most important of these is the End of Sequence token (EOS).
Model | Provider | EOS Token |
---|---|---|
GPT-4 | OpenAI | <\|endoftext\|> |
Llama 3 | Meta (Facebook AI Research) | <\|eot_id\|> |
DeepSeek-R1 | DeepSeek | <\|end▁of▁sentence\|> |
SmolLM2 | Hugging Face | <\|im_end\|> |
Gemma | Google | <end_of_turn> |
Mistral | Mistral | </s> |
LLMs are said to be autoregressive, meaning that the output from one pass becomes the input for the next one. This loop continues until the model predicts the next token to be the EOS token, at which point the model can stop.
In other words, an LLM will decode text until it reaches the EOS. But what happens during a single decoding loop?
While the full process can be quite technical for the purposes of learning agents, here's a brief overview: once the input text is tokenized, the model computes a representation of the sequence and assigns a score to every token in its vocabulary, ranking how likely each is to come next.
Based on these scores, we have multiple strategies to select the token that completes the sentence.
The most naive decoding strategy, known as greedy decoding, is to always take the token with the maximum score.
You can interact with the decoding process yourself with SmolLM2 in this space (remember, it decodes until reaching an EOS token, which is <\|im_end\|> for this model):
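For reference, here is what that autoregressive greedy loop looks like in code, a minimal sketch with transformers (the 30-token cap is an arbitrary safety limit, not part of the model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(30):  # safety cap on generation length
        logits = model(ids).logits        # one score per vocabulary token
        next_id = logits[0, -1].argmax()  # greedy: keep the max-score token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:  # stop at EOS
            break

print(tokenizer.decode(ids[0]))
```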
If you want to know more about decoding, you can take a look at the NLP course.
But there are also more advanced decoding strategies. For example, beam search explores multiple candidate sequences to find the one with the maximum total score, even if some individual tokens have lower scores.
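Reusing the model and tokenizer from the sketch above, beam search is one argument away in transformers' generate method; the beam count and generation length here are arbitrary:

```python
# Beam search keeps `num_beams` candidate sequences alive at each step
# and returns the one with the best overall score.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=5,
    max_new_tokens=30,
    early_stopping=True,  # stop once the best beams are finished
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```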
A key aspect of the Transformer architecture is Attention. When predicting the next word, not every word in a sentence is equally important; words like “France” and “capital” in the sentence “The capital of France is …” carry the most meaning.
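As a toy illustration of the mechanics (random tensors, not a real model), scaled dot-product attention boils down to a few lines:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 8                  # embedding dimension (toy-sized)
q = torch.randn(4, d)  # one query vector per token
k = torch.randn(4, d)  # one key vector per token
v = torch.randn(4, d)  # one value vector per token

scores = q @ k.T / d**0.5            # similarity between queries and keys
weights = F.softmax(scores, dim=-1)  # each row sums to 1: "how much to attend"
output = weights @ v                 # weighted mix of value vectors

print(weights[-1])  # attention of the last token over the whole sequence
```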
Although the basic principle of LLMs—predicting the next token—has remained consistent since GPT-2, there have been significant advancements in scaling neural networks and refining attention mechanisms.
If you’ve interacted with LLMs, you’re probably familiar with the term context length, which refers to the maximum number of tokens the LLM can process.
Considering that the only job of an LLM is to predict the next token by looking at every input token, and to choose which tokens are "important" for deciding what the next one should be, the wording of your input sequence matters a great deal.
This input sequence is called a prompt, and careful prompt design makes it easier to guide the generation of the LLM toward the desired output.
LLMs are trained on large datasets of text, where they learn to predict the next word in a sequence through a self-supervised or masked language modeling objective.
From this unsupervised learning, the model learns the structure of the language and the underlying patterns in text, allowing it to generalize to unseen data.
Following this, LLMs can be fine-tuned on a supervised learning objective to perform specific tasks. For example, some are trained for conversational structures or tool usage, while others focus on classification or code generation.
To use an LLM, you have two main options:
Run Locally (if you have sufficient hardware).
Use a Cloud/API (e.g., via the Hugging Face API).
Throughout this course, we will primarily use models via APIs on the Hugging Face Hub. Later on, we will explore how to run these models locally on your hardware.
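As a taste of the API route, here is a minimal sketch with huggingface_hub's InferenceClient. It assumes you have a Hugging Face token configured in your environment, and the model ID is just an example:

```python
from huggingface_hub import InferenceClient

# Assumes HF_TOKEN is set in the environment; the model ID is illustrative
client = InferenceClient(model="HuggingFaceTB/SmolLM2-1.7B-Instruct")

response = client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```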
LLMs are a key component of AI Agents, providing the foundation for understanding and generating human language.
They can interpret user instructions, maintain context in conversations, define a plan and decide which tools to use.
We will explore these steps in more detail in this Unit, but for now, what you need to understand is that the LLM is the brain of the Agent.
That was a lot of information! We’ve covered the basics of what LLMs are, how they function, and their role in powering AI agents.
If you’d like to dive even deeper into the fascinating world of language models and natural language processing, don’t hesitate to check out our free NLP course.
Now that we understand how LLMs function, it's time to see how they structure their generations in a conversational context.