
No chat template yet?

I'm getting an exception calling tools.

mistral2-1  | got exception: {"code":500,"message":"Trying to access property '0' on null! at row 243, column 16:\n        {%- else %}\n            {{- message['content'][0]['text'] }}\n               ^\n        {%- endif %}\n at row 243, column 16:\n        {%- else %}\n            {{- message['content'][0]['text'] }}\n               ^\n        {%- endif %}\n at row 243, column 13:\n        {%- else %}\n            {{- message['content'][0]['text'] }}\n            ^\n        {%- endif %}\n at row 242, column 20:\n            {{- message['content'] }}\n        {%- else %}\n                   ^\n            {{- message['content'][0]['text'] }}\n at row 240, column 9:\n    {%- elif message['role'] == 'assistant' %}\n        {%- if message['content'] is string %}\n        ^\n            {{- message['content'] }}\n at row 239, column 47:\n\n    {%- elif message['role'] == 'assistant' %}\n                                              ^\n        {%- if message['content'] is string %}\n at row 197, column 5:\n{%- for message in loop_messages %}\n    {%- if message['role'] == 'user' %}\n    ^\n\n at row 196, column 36:\n\n{%- for message in loop_messages %}\n                                   ^\n    {%- if message['role'] == 'user' %}\n at row 196, column 1:\n\n{%- for message in loop_messages %}\n^\n    {%- if message['role'] == 'user' %}\n at row 1, column 69:\n{#- Copyright 2025-present the Unsloth team. All rights reserved. #}\n                                                                    ^\n{#- Licensed under the Apache License, Version 2.0 (the \"License\") #}\n","type":"server_error"}
mistral2-1  | srv  log_server_r: request: POST /v1/chat/completions 172.19.0.7 500

But I see no template in this repo. How can that be?
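For what it's worth, the failure mode is visible in the traceback: the template line `message['content'][0]['text']` assumes assistant content is a list of text parts, but OpenAI-compatible tool-calling clients typically send assistant messages with `content` set to null, since the reply lives in `tool_calls`. A rough stdlib-only Python analogue of what the template engine hits (the `get_weather` tool call is invented for illustration, and this is plain Python, not the Jinja engine itself):

```python
# A tool-calling assistant message as many OpenAI-compatible clients send it:
# content is None because the reply lives in tool_calls, not content.
message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{"type": "function",
                    "function": {"name": "get_weather", "arguments": "{}"}}],
}

# The community template effectively evaluates message['content'][0]['text'].
# With content == None this indexing fails, which is the Python analogue of
# the "Trying to access property '0' on null" error in the log above.
try:
    _ = message["content"][0]["text"]
except TypeError as exc:
    print(f"template would fail here: {exc}")
```

A template that handles tool calls would need to branch on `content` being null before indexing into it.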

@dr-e: you are clearly using the Unsloth template: "Copyright 2025-present the Unsloth team. All rights reserved."

@patrickvonplaten,

Does this model (3.2) use the same chat template as 3.1?

Mistral AI org

Hi, we do not plan to release chat templates in this or future releases. Instead, please use mistral-common as recommended in the model card.

I'll try to summarize why we recommend doing it this way:

  • We use mistral-common internally, so we can guarantee the best usage of our models through this library.
  • Chat templates are prone to errors, especially in the early days or weeks after a release. This is true not only for Mistral models but for other model providers as well, and sadly there is no easy way to test them properly due to their nature.
  • mistral-common is tested and, thanks to pydantic, offers a validated API. We also handle message aggregation and format requests correctly. This could be seen as boilerplate, but it offers stronger guarantees by failing loudly instead of producing silently wrong output.
  • mistral-common is integrated into vLLM and Transformers, so you can use them without knowing what is going on in the background. If some features are not supported or not easily reachable, we are open to suggestions to ensure smooth usage.

Of course you can still rely on community chat templates, but those will not be official, might not be available at release time, and can contain errors.
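For reference, the vLLM integration mentioned above is enabled through command-line flags rather than a chat template. A launch sketch, with flags recalled from the model card (treat them as assumptions and check them against your vLLM version):

```shell
# Serve the model with mistral-common handling tokenization and tool calls.
# --tokenizer_mode/--config_format/--load_format select the mistral formats;
# the last two flags enable Mistral-style tool calling.
vllm serve mistralai/Mistral-Small-3.2-24B-Instruct-2506 \
  --tokenizer_mode mistral \
  --config_format mistral \
  --load_format mistral \
  --tool-call-parser mistral \
  --enable-auto-tool-choice
```

With this, the server exposes the usual OpenAI-compatible `/v1/chat/completions` endpoint without any Jinja template involved.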

Thank you for the explanation! I really appreciate the detailed reasoning behind using mistral-common instead of chat templates.

I have one small request, if possible: could you please add the files text_v11.txt and text_v13.txt to main/tests/data/samples?
They greatly helped me in understanding the tokenizer of Mistral Small 3.1/3.2, and I think they would be useful for others as well.

We try to avoid putting too many files in the git history, but you can create them yourself if you want to! You'd need to write a bit of code, but by loading the JSONs you can then:

import json

from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

mistral_tokenizer = MistralTokenizer.from_hf_hub("mistralai/Mistral-Small-3.2-24B-Instruct-2506")

with open("json_path", encoding="utf-8") as f:
    json_content = json.load(f)

request = ChatCompletionRequest(**json_content)
tokenized = mistral_tokenizer.encode_chat_completion(request)
print(tokenized.tokens)  # token ids
print(tokenized.text)    # string representation of the tokens (debugging only, useful for chat templates)

I didn't test the code but this should give a good starting point.
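To make the snippet above concrete: `ChatCompletionRequest(**json_content)` implies the file at `json_path` is a dict of request fields (OpenAI-style `messages`, optionally `tools`, and so on). A minimal stdlib-only round trip showing the shape of such a file, with the message contents invented for the example:

```python
import json
import tempfile

# A minimal chat-completion request body; the field names follow the
# OpenAI-compatible schema that ChatCompletionRequest validates.
sample_request = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

# Write it out, then load it back the way the snippet above does.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False,
                                 encoding="utf-8") as f:
    json.dump(sample_request, f)
    path = f.name

with open(path, encoding="utf-8") as f:
    json_content = json.load(f)

assert json_content == sample_request
```

The resulting dict is exactly what gets unpacked into `ChatCompletionRequest`; the real sample files presumably also carry tool definitions and assistant turns.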

