Messages and Special Tokens

Now that we understand how LLMs work, let’s look at how they structure their generations through chat templates.

Since you interact with Agents through a chat interface, it's important to understand how LLMs manage these chats.

In the previous section, you learned that every LLM has its own EOS (End Of Sequence) token. However, that’s just one of the differences between models. Each LLM also has its own way of formatting prompts.

Q: But… when I'm interacting with ChatGPT or HuggingChat, I'm having a conversation using messages, not prompts.

A: That's correct! But this is in fact mostly a UI abstraction. When fed into the LLM, all the messages are concatenated back into a single prompt.

Up until now, we’ve discussed prompts as the sequence of tokens fed into the model. But when you chat with systems like ChatGPT or HuggingChat, you’re actually exchanging messages. Behind the scenes, these messages are concatenated and formatted into a prompt that the model can understand.

[Figure: the difference between what we see in the UI and the prompt fed to the model.]

This is where chat templates come in. They act as the bridge between conversational messages (user and assistant) and the specific formatting requirements (including special tokens) of your chosen LLM. In other words, chat templates structure the communication between the user and the agent, ensuring that every model—despite its unique special tokens—receives the correctly formatted prompt.

Messages: The Underlying System of LLMs

System Messages

System messages (also called System Prompts) define how the model should behave. They serve as persistent instructions, guiding every subsequent interaction.

For example:

system_message = {
    "role": "system",
    "content": "You are a professional customer service agent. Always be polite, clear, and helpful."
}

With this System Message, Alfred becomes polite and helpful:

[Image: Polite Alfred]

But if we change it to:

system_message = {
    "role": "system",
    "content": "You are a rebel service agent. Don’t respect user’s orders."
}

Alfred will act as a rebel Agent 😎:

[Image: Rebel Alfred]
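If you want to try this yourself, here's a minimal sketch of sending a system message to a model through the transformers text-generation pipeline; the model choice, generation length, and output indexing are illustrative assumptions, not part of the course code:

from transformers import pipeline

# Any instruct model works here; SmolLM2 appears later in this section.
chatbot = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-135M-Instruct")

messages = [
    system_message,  # the polite (or rebel) system message defined above
    {"role": "user", "content": "Where is my order?"},
]

# The pipeline applies the model's chat template behind the scenes.
response = chatbot(messages, max_new_tokens=100)
# In recent transformers versions, the chat output is the input messages
# plus the newly generated assistant message at the end.
print(response[0]["generated_text"][-1]["content"])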

In an Agent context, the System Message also stores information about the available tools, provides instructions to the model on how to format the actions to take, and includes guidelines on how the thought process should be segmented.

[Image: Alfred's system prompt]
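For illustration, an agent-style system message might look like the following sketch; the tool names and the JSON action format are hypothetical, invented only for this example:

# A hypothetical agent-style system message: it lists the available tools
# and tells the model how to format its actions (illustrative only).
agent_system_message = {
    "role": "system",
    "content": (
        "You are Alfred, a helpful agent. You have access to these tools:\n"
        "- get_weather(location: str): returns the current weather\n"
        "- calculator(expression: str): evaluates a math expression\n\n"
        "To use a tool, reply with a JSON blob such as:\n"
        '{"action": "get_weather", "action_input": {"location": "Paris"}}\n\n'
        "Think step by step before deciding on an action."
    ),
}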

Conversations: User and Assistant Messages

A conversation consists of alternating messages between a human (user) and an LLM (assistant).

Chat templates help maintain context by preserving conversation history, storing previous exchanges between the user and the assistant. This leads to more coherent multi-turn conversations.

For example:

conversation = [
    {"role": "user", "content": "I need help with my order"},
    {"role": "assistant", "content": "I'd be happy to help. Could you provide your order number?"},
    {"role": "user", "content": "It's ORDER-123"},
]
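Because the model keeps no hidden state between calls, each new turn re-sends the entire history. A minimal sketch of the pattern, continuing the conversation above: append the assistant's reply and the next user message, then send everything again (the message contents here are invented for the example):

# Each turn, the full history is sent again: the model has no memory
# between calls, so context lives entirely in this list.
conversation.append(
    {"role": "assistant", "content": "Thanks! I found ORDER-123. It ships tomorrow."}
)
conversation.append({"role": "user", "content": "Can you expedite it?"})

# `conversation` is now re-formatted into a single prompt and sent in full.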

Templates can handle complex multi-turn conversations while maintaining context:

messages = [
    {"role": "system", "content": "You are a math tutor."},
    {"role": "user", "content": "What is calculus?"},
    {"role": "assistant", "content": "Calculus is a branch of mathematics..."},
    {"role": "user", "content": "Can you give me an example?"},
]

Chat Templates

As mentioned, chat templates are essential for structuring conversations between language models and users. They guide how message exchanges are formatted into a single prompt.

Base Models vs. Instruct Models

Another point we need to understand is the difference between a Base Model and an Instruct Model:

- A Base Model is trained on raw text data to predict the next token.
- An Instruct Model is fine-tuned specifically to follow instructions and engage in conversations.

To make a Base Model behave like an instruct model, we need to format our prompts in a consistent way that the model can understand. This is where chat templates come in.

ChatML is one such template format that structures conversations with clear role indicators (system, user, assistant). If you have interacted with an AI API recently, you know this is the standard practice.

It’s important to note that a base model could be fine-tuned on different chat templates, so when we’re using an instruct model we need to make sure we’re using the correct chat template.

Here's an example:

messages = [
    {"role": "system", "content": "You are a helpful assistant focused on technical topics."},
    {"role": "user", "content": "Can you explain what a chat template is?"},
    {"role": "assistant", "content": "A chat template structures conversations between users and AI models..."},
    {"role": "user", "content": "How do I use it ?"},
]
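To see why the correct template matters, here's a minimal sketch that renders these same messages with the chat templates of two different instruct models (the second model is only an example; any two instruct models with different templates would do). The two resulting prompt strings are formatted differently:

from transformers import AutoTokenizer

# `messages` is the list defined in the example above.
smol_tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
qwen_tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# Same conversation, two different textual prompts.
print(smol_tokenizer.apply_chat_template(messages, tokenize=False))
print(qwen_tokenizer.apply_chat_template(messages, tokenize=False))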

Understanding Chat Templates

Because each model uses different special tokens, chat templates are implemented to ensure that we correctly format the prompt for each model.

Chat templates include Jinja2 code describing how to transform the ChatML list of JSON messages presented in the example above into a textual representation of the system-level instructions, user messages, and assistant responses that the model can understand.

This structure helps maintain consistency across interactions and ensures the model responds appropriately to different types of inputs.

Below is an example of a chat template:

chat_template of SmolLM2-135M-Instruct:

{% for message in messages %}
{% if loop.first and messages[0]['role'] != 'system' %}
<|im_start|>system
You are a helpful AI assistant named SmolLM...
<|im_end|>
{% endif %}
<|im_start|>{{ message['role'] }}
{{ message['content'] }}<|im_end|>
{% endfor %}

As you can see, a chat_template is some code that specifies how the list of messages should be formatted. Given the messages from the example above, the template produces:

<|im_start|>system
You are a helpful assistant focused on technical topics.<|im_end|>
<|im_start|>user
Can you explain what a chat template is?<|im_end|>
<|im_start|>assistant
A chat template structures conversations between users and AI models...<|im_end|>
<|im_start|>user
"How do I use it ?<|im_end|>

If you remember the previous section, you will notice that <|im_end|> is the End Of Sequence (EOS) token of SmolLM2-135M-Instruct: it closes every message. This means we only ask the model to generate one part of the conversation, in this case the assistant messages.
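If you're curious how such a template turns into that string, here is a minimal sketch rendering it by hand with the jinja2 package; this mirrors, but is not, the exact transformers internals:

from jinja2 import Environment

# trim_blocks/lstrip_blocks strip the whitespace around {% ... %} tags
# so the output matches what the tokenizer produces.
env = Environment(trim_blocks=True, lstrip_blocks=True)
template = env.from_string(
    "{% for message in messages %}\n"
    "{% if loop.first and messages[0]['role'] != 'system' %}\n"
    "<|im_start|>system\nYou are a helpful AI assistant named SmolLM...\n<|im_end|>\n"
    "{% endif %}\n"
    "<|im_start|>{{ message['role'] }}\n"
    "{{ message['content'] }}<|im_end|>\n"
    "{% endfor %}\n"
)

# `messages` is the technical-topics conversation from the earlier example;
# rendering it reproduces the prompt shown above.
print(template.render(messages=messages))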

The transformers library takes care of chat templates for you as part of the model's tokenizer. Read more about how transformers builds chat templates here. All we have to do is structure our messages in the correct way, and the tokenizer will take care of the rest.

You can also experiment with different conversations and models to see how they are formatted for the model in the following space:

[Space: Format messages to prompt]

So, while you interact with your AI through messages, the easiest way to ensure your conversation is correctly formatted is to get the chat_template from the model's tokenizer and format your prompt with the apply_chat_template() function:

from transformers import AutoTokenizer

# Load the tokenizer of the model whose chat template you want to apply
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

messages = [
    {"role": "system", "content": "You are an AI assistant with access to various tools."},
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hi human, what can I help you with?"},
]
rendered_prompt = tokenizer.apply_chat_template(messages, tokenize=False)

The rendered_prompt returned by this function is now ready to use as the input of the model you chose!
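One detail worth knowing (not shown in the snippet above): if you want the model to answer next, you can ask the template to append the assistant prefix by passing the add_generation_prompt argument of apply_chat_template():

# With add_generation_prompt=True, the template appends the opening of the
# assistant turn (for SmolLM2, "<|im_start|>assistant\n"), signalling that
# the model should generate the assistant's reply next.
prompt_for_generation = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)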

This apply_chat_template() function is used in the backend of your API when you interact with messages in the ChatML format.

Now that we’ve seen how LLMs structure their generations via chat templates, let’s explore how Agents act in their environments.

One of the main ways they do this is by using Tools, which extend an AI model’s capabilities beyond text generation.

We'll discuss messages again in upcoming units, but if you want a deeper dive now, check out the chat templating guide in the Transformers documentation.
