Chat templates are essential for structuring interactions between language models and users. They provide a consistent format for conversations, ensuring that models understand the context and role of each message while maintaining appropriate response patterns.
A base model is trained on raw text data to predict the next token, while an instruct model is fine-tuned specifically to follow instructions and engage in conversations. For example, SmolLM2-135M is a base model, while SmolLM2-135M-Instruct is its instruction-tuned variant.
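One way to see the difference is to generate from both checkpoints with the same question. This is a minimal sketch using the transformers pipeline API; it assumes a recent transformers version, where text-generation pipelines accept chat messages directly:

from transformers import pipeline

# The base model treats the question as text to continue
base = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-135M")
print(base("What is the capital of France?", max_new_tokens=30)[0]["generated_text"])

# The instruct model receives chat messages; the pipeline applies its chat template
instruct = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-135M-Instruct")
messages = [{"role": "user", "content": "What is the capital of France?"}]
result = instruct(messages, max_new_tokens=30)
print(result[0]["generated_text"][-1]["content"])  # the last message holds the assistant's reply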
To make a base model behave like an instruct model, we need to format our prompts in a consistent way that the model can understand. This is where chat templates come in. ChatML is one such template format that structures conversations with clear role indicators (system, user, assistant).
Different models use different chat template formats. To illustrate this, let's look at how a few of them would format the same conversation. We'll use the following conversation structure for all examples:
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hi! How can I help you today?"},
{"role": "user", "content": "What's the weather?"},
]
This is using the Mistral template format. Mistral-style templates have no dedicated system role, so the system message is typically merged into a user turn; one common rendering is:
<s>[INST] You are a helpful assistant.

Hello! [/INST] Hi! How can I help you today?</s>[INST] What's the weather? [/INST]
This is the ChatML format, used here as the chat template for a Qwen 2 model:
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
Hi! How can I help you today?<|im_end|>
<|im_start|>user
What's the weather?<|im_end|>
<|im_start|>assistant
Key differences between these formats include:

1. System message handling:
   - Llama 2 wraps system messages in <<SYS>> tags
   - Zephyr uses <|system|> tags with </s> endings
   - Qwen uses an explicit system role with <|im_start|> tags
   - ChatGPT uses a SYSTEM: prefix
2. Message boundaries:
   - Llama 2 uses [INST] and [/INST] tags
   - Zephyr uses role-specific tags (<|system|>, <|user|>, <|assistant|>) with </s> endings
   - Mistral uses [INST] and [/INST] with <s> and </s>
3. Special tokens:
   - Llama 2 uses <s> and </s> for conversation boundaries
   - Zephyr uses </s> to end each message
   - Mistral uses <s> and </s> for turn boundaries

The transformers library handles these differences through model-specific chat templates. When you load a tokenizer, it automatically uses the correct template for that model:
from transformers import AutoTokenizer
# These will use different templates automatically
llama_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
mistral_tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
qwen_tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)  # first-generation Qwen tokenizers ship custom code
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"},
]
# Each will format according to its model's template
llama_chat = llama_tokenizer.apply_chat_template(messages, tokenize=False)
mistral_chat = mistral_tokenizer.apply_chat_template(messages, tokenize=False)
qwen_chat = qwen_tokenizer.apply_chat_template(messages, tokenize=False)
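Printing the three strings makes the differences concrete; each uses its model's own special tokens and role markers:

print(llama_chat)
print(mistral_chat)
print(qwen_chat)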
The transformers library provides built-in support for chat templates through the apply_chat_template() method:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
messages = [
{"role": "system", "content": "You are a helpful coding assistant."},
{"role": "user", "content": "Write a Python function to sort a list"},
]
# Apply the chat template
formatted_chat = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
This will return a formatted string that looks like the following. Note that because add_generation_prompt=True was passed, the string ends with the opening of an assistant turn, which cues the model to respond:
<|im_start|>system
You are a helpful coding assistant.<|im_end|>
<|im_start|>user
Write a Python function to sort a list<|im_end|>
<|im_start|>assistant
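In practice you usually want token IDs rather than a string. A minimal sketch that feeds the template output straight into generation (the generation settings are illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

# With tokenize=True (the default) and return_tensors="pt", we get input IDs directly
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))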
Chat templates can handle more complex scenarios than plain text conversations, including multimodal inputs and tool use.

For multimodal conversations, chat templates can include image references or base64-encoded images:
messages = [
{
"role": "system",
"content": "You are a helpful vision assistant that can analyze images.",
},
{
"role": "user",
"content": [
{"type": "text", "text": "What's in this image?"},
{"type": "image", "image_url": "https://example.com/image.jpg"},
],
},
]
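Vision-language models expose the same templating through their processor rather than the tokenizer alone. A minimal sketch, assuming a LLaVA-style checkpoint; note that the exact content schema a template accepts (for example, image versus image_url entries) varies between models:

from transformers import AutoProcessor

# Example vision-language checkpoint; substitute the model you actually use
processor = AutoProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")

# Render the conversation text with the model's template; the image itself
# is passed to the processor separately at inference time
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)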
Here’s an example of a chat template with tool use:
messages = [
{
"role": "system",
"content": "You are an AI assistant that can use tools. Available tools: calculator, weather_api",
},
{"role": "user", "content": "What's 123 * 456 and is it raining in Paris?"},
{
"role": "assistant",
"content": "Let me help you with that.",
"tool_calls": [
{
"tool": "calculator",
"parameters": {"operation": "multiply", "x": 123, "y": 456},
},
{"tool": "weather_api", "parameters": {"city": "Paris", "country": "France"}},
],
},
{"role": "tool", "tool_name": "calculator", "content": "56088"},
{
"role": "tool",
"tool_name": "weather_api",
"content": "{'condition': 'rain', 'temperature': 15}",
},
]
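Rather than listing tools in the system prompt by hand, recent versions of transformers let you pass them to apply_chat_template via a tools argument; for models whose templates support tool use, plain Python functions with type hints and docstrings are converted to a JSON schema automatically. A sketch, where the model name is just one example of a tool-calling checkpoint:

from transformers import AutoTokenizer

# Example checkpoint whose chat template supports tool definitions
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")

def calculator(operation: str, x: float, y: float) -> float:
    """Perform a basic arithmetic operation.

    Args:
        operation: One of "add", "subtract", "multiply" or "divide"
        x: The first operand
        y: The second operand
    """
    ...

# The function's signature and docstring are converted to a schema
# and injected into the prompt by the model's chat template
formatted = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What's 123 * 456?"}],
    tools=[calculator],
    add_generation_prompt=True,
    tokenize=False,
)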
When working with chat templates, a few practices matter most: always use the tokenizer's own template via apply_chat_template() rather than hard-coding format strings, keep role names consistent across your data (system, user, assistant, tool), and check the rendered output against the format your target model expects.
Let’s practice implementing chat templates with a real-world example.
from datasets import load_dataset

# smoltalk is split into named configurations; "everyday-conversations" is one of them
dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations")

def convert_to_chatml(example):
    # Maps a plain input/output pair to ChatML-style messages; the
    # "input"/"output" column names here depend on the source dataset's schema
    return {
        "messages": [
            {"role": "user", "content": example["input"]},
            {"role": "assistant", "content": example["output"]},
        ]
    }
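To convert a whole dataset, map the function across it (assuming the split really does carry input and output columns):

chatml_dataset = dataset.map(convert_to_chatml)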
Remember to validate that your output format matches your target model's requirements!
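A minimal sanity check along these lines; the <|im_start|> marker is specific to ChatML-style models such as SmolLM2-135M-Instruct:

# Render one converted conversation and confirm the expected markers appear
formatted = tokenizer.apply_chat_template(
    chatml_dataset["train"][0]["messages"], tokenize=False
)
assert "<|im_start|>" in formatted, "output does not match the expected ChatML format"
print(formatted)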