plato-9b

plato-9b is a fine-tuned version of the google/gemma-2-9b-it model for generating responses in Russian. This 9-billion-parameter model excels at conversational tasks, offering rich contextual understanding and detailed, coherent answers.

Usage

To use plato-9b with the transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("deepvk/plato-9b")
model = AutoModelForCausalLM.from_pretrained("deepvk/plato-9b")

# Tokenize a Russian prompt ("What is worth visiting in Russia?")
input_text = "Что стоит посетить в России?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# Sample a response and decode it back to text
output = model.generate(input_ids, max_length=150, do_sample=True, temperature=0.7)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
# Что стоит посетить в России?
# 1. Красная площадь и Кремль в Москве
# 2. Эрмитаж в Санкт-Петербурге
# 3. Байкал
# 4. Соловецкие острова
# 5. Камчатка и её вулканы
# 6. Золотое Кольцо
# 7. Казанский Кремль
# 8. Алтай
# 9. Астраханская область и Волго-Донской канал
# 10. Кавказские горы и Черноморское побережье
# 
# Каждое из этих мест предлагает уникальные культурные, исторические и природные достопримечательности,
# которые делают Россию столь удивительной и разнообразной страной.
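
For multi-turn conversations, the tokenizer will typically carry over the Gemma-2 chat template from the base model. The snippet below is a minimal sketch assuming that template is present; it is not an official usage example.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepvk/plato-9b")
model = AutoModelForCausalLM.from_pretrained("deepvk/plato-9b")

# Assumes the Gemma-2 chat template is bundled with the tokenizer
messages = [{"role": "user", "content": "Что стоит посетить в России?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))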

Dataset

We applied both Supervised Fine-Tuning (SFT) and Preference Optimization (PO). For SFT, we used an 8B-token instruction dataset: 4B tokens of dialogues, with the remainder covering math, biology, chemistry, code, and general knowledge. The PO dataset contains 200M tokens of common-knowledge instructions. We trained on both datasets for several epochs.
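
For reference, a minimal sketch of such a two-stage pipeline using a recent version of the TRL library is shown below. The data files, hyperparameters, and the use of DPO for the preference stage are illustrative assumptions, not our exact recipe.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it")

# Stage 1: supervised fine-tuning on instruction dialogues
# (file names and hyperparameters are placeholders, not our actual configuration)
sft_data = load_dataset("json", data_files="sft_instructions.jsonl", split="train")
SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="plato-9b-sft", num_train_epochs=2),
    train_dataset=sft_data,
    processing_class=tokenizer,
).train()

# Stage 2: preference optimization on (prompt, chosen, rejected) triples;
# DPO is shown only as one possible PO algorithm
po_data = load_dataset("json", data_files="po_preferences.jsonl", split="train")
DPOTrainer(
    model=model,  # in practice, reload the saved SFT checkpoint here
    args=DPOConfig(output_dir="plato-9b", num_train_epochs=1),
    train_dataset=po_data,
    processing_class=tokenizer,
).train()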

Evaluation

To evaluate the model, we applied an LLM-as-a-judge approach on academic tasks. Specifically, we used arena-general-ru and arena-hard-ru with gpt-4o as the judge and gpt-4o-mini as the baseline.
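
As a schematic illustration of pairwise LLM-as-a-judge scoring (not the actual evaluation harness; the real arena prompts and scoring differ), a candidate answer can be compared against the gpt-4o-mini baseline with the openai client like this:

from openai import OpenAI

client = OpenAI()

def judge_pair(question: str, baseline_answer: str, candidate_answer: str) -> str:
    """Ask gpt-4o which of two answers is better; a deliberately simplified prompt."""
    prompt = (
        "You are an impartial judge. Compare two answers to the same question "
        "and reply with only 'A', 'B', or 'tie'.\n\n"
        f"Question: {question}\n\n"
        f"Answer A (gpt-4o-mini baseline): {baseline_answer}\n\n"
        f"Answer B (candidate model): {candidate_answer}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return completion.choices[0].message.content.strip()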

arena-general-ru

| Model | Score | Score w/ SC |
|---|---|---|
| gpt-4o-2024-11-20 | 81.87 (-2.04, +1.81) | 78.42 (-2.39, +2.33) |
| gpt-4o-mini-2024-07-18 | 50.00 (-0.00, +0.00) | 50.00 (-0.00, +0.00) |
| deepvk/plato-9b | 41.27 (-2.18, +2.24) | 32.13 (-1.97, +2.05) |
| t-tech/T-lite-it-1.0 | 38.52 (-2.04, +2.98) | 30.38 (-1.90, +3.15) |
| google/gemma-2-9b-it | 27.46 (-2.06, +1.74) | 25.80 (-2.09, +1.98) |
| Qwen/Qwen2.5-7B-Instruct | 24.60 (-2.36, +2.38) | 23.67 (-2.36, +2.28) |
| IlyaGusev/saiga_gemma2_9b | 17.83 (-1.95, +1.66) | 18.46 (-2.22, +1.69) |

arena-hard-ru

| Model | Score | Score w/ SC |
|---|---|---|
| gpt-4o-2024-11-20 | 85.70 (-1.45, +1.38) | 80.19 (-1.99, +2.04) |
| gpt-4o-mini-2024-07-18 | 50.00 (-0.00, +0.00) | 50.00 (-0.00, +0.00) |
| t-tech/T-lite-it-1.0 | 34.80 (-1.98, +2.38) | 26.99 (-1.74, +2.67) |
| deepvk/plato-9b | 31.81 (-1.92, +1.90) | 24.25 (-1.71, +1.84) |
| Qwen/Qwen2.5-7B-Instruct | 20.84 (-1.99, +1.67) | 17.70 (-1.63, +1.68) |
| google/gemma-2-9b-it | 12.98 (-1.36, +1.57) | 12.97 (-1.46, +1.69) |
| IlyaGusev/saiga_gemma2_9b | 9.72 (-1.34, +1.50) | 10.64 (-1.40, +1.78) |

Citation

Both authors contributed equally; the order is alphabetical.

@misc{deepvk2024plato-9b,
    title={plato-9b},
    author={Eliseev, Anton and Semin, Kirill},
    url={https://huggingface.co/deepvk/plato-9b},
    publisher={Hugging Face},
    year={2025},
}