---
library_name: transformers
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
---
# Luth-0.6B-Instruct

**Luth-0.6B-Instruct** is a French fine-tuned version of [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. Fine-tuning substantially improved the model's French capabilities in instruction following, math, and general knowledge, while its English capabilities remained stable and even improved in some areas.

Our evaluation, training, and data scripts are available on [GitHub](https://github.com/kurakurai/Luth), along with the [blog post](https://huggingface.co/blog/MaxLSB/luth) we wrote.
## Model Details

Luth was trained using full fine-tuning on the Luth-SFT dataset with [Axolotl](https://github.com/axolotl-ai-cloud/axolotl). The resulting model was then merged with the base Qwen3-0.6B model. This process retained the model's English capabilities while improving its performance on nearly all selected benchmarks in both French and English.
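The exact merge recipe is not specified in this card, but the idea can be illustrated with plain weight interpolation. Below is a minimal, hypothetical sketch assuming a linear merge; the SFT checkpoint path and the `alpha` ratio are placeholders, not the values actually used for Luth:

```python
# Hypothetical linear merge: interpolate each weight between the base model
# and the SFT checkpoint. The real merge method/ratio for Luth may differ.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
tuned = AutoModelForCausalLM.from_pretrained("path/to/luth-sft-checkpoint")  # placeholder path

alpha = 0.5  # placeholder interpolation weight (1.0 = pure SFT model)
base_state = base.state_dict()
merged_state = {
    name: torch.lerp(base_state[name].float(), param.float(), alpha)
    for name, param in tuned.state_dict().items()
}

tuned.load_state_dict(merged_state)
tuned.save_pretrained("luth-0.6b-merged")
```

Merging fine-tuned weights back toward the base is a common way to recover general capabilities that full fine-tuning can erode; tools such as [MergeKit](https://github.com/arcee-ai/mergekit) implement this and more robust variants (e.g. SLERP) at scale.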
## Benchmark Results

We used [LightEval](https://github.com/huggingface/lighteval) for evaluation, with custom tasks for the French benchmarks. All models were evaluated with `temperature=0`; the best score in each column is underlined.
### French Benchmark Scores

| Model | IFEval<br>French | GPQA-Diamond<br>French | MMLU<br>French | Math500<br>French | Arc-Challenge<br>French | Hellaswag<br>French |
|------------------------|-----------------|-----------------------|----------------|-----------------|------------------------|-------------------|
| **Luth-0.6B-Instruct** | <u>48.24</u> | <u>34.52</u> | <u>40.12</u> | <u>44.00</u> | <u>33.88</u> | 45.58 |
| Llama-3.2-1B | 27.79 | 25.38 | 25.49 | 15.80 | 29.34 | 25.09 |
| Qwen3-0.6B | 44.86 | 26.90 | 27.13 | 29.20 | 31.57 | 25.10 |
| Qwen2.5-0.5B-Instruct | 22.00 | 25.89 | 35.04 | 12.00 | 28.23 | <u>51.45</u> |
### English Benchmark Scores

| Model | IFEval<br>English | GPQA-Diamond<br>English | MMLU<br>English | Math500<br>English | Arc-Challenge<br>English | Hellaswag<br>English |
|------------------------|-----------------|------------------------|----------------|------------------|-------------------------|--------------------|
| **Luth-0.6B-Instruct** | 53.73 | 25.76 | <u>48.12</u> | <u>48.80</u> | <u>36.09</u> | 47.03 |
| Llama-3.2-1B | 44.05 | 25.25 | 31.02 | 26.40 | 34.30 | <u>55.84</u> |
| Qwen3-0.6B | <u>57.18</u> | <u>29.29</u> | 36.79 | 43.40 | 33.70 | 42.92 |
| Qwen2.5-0.5B-Instruct | 29.70 | <u>29.29</u> | 43.80 | 32.00 | 32.17 | 49.56 |
## Code Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-0.6B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-0.6B-Instruct")

# A French prompt: "What is the capital of France?"
messages = [
    {"role": "user", "content": "Quelle est la capitale de la France?"},
]

# Apply the chat template and move the tensors to the model's device
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=100)
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1] :], skip_special_tokens=True
    )
)
```
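Since the base model is Qwen3, its chat template exposes an `enable_thinking` switch that toggles the `<think>` reasoning block. Whether the Luth fine-tune preserves this switch is an assumption carried over from the base model; if it does, you can request a direct answer by replacing the `apply_chat_template` call above:

```python
# Assumes Luth keeps Qwen3's `enable_thinking` template flag (unverified);
# False asks for a direct answer without a <think> reasoning block.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
```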
## Citation

```bibtex
@misc{luth2025kurakurai,
  title        = {Luth-0.6B-Instruct},
  author       = {Maxence Lasbordes and Sinoué Gad},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kurakurai/Luth-0.6B-Instruct}},
}
```