# plato-9b

plato-9b is a fine-tuned version of the google/gemma-2-9b-it model for generating responses in Russian. This 9-billion-parameter model excels at conversational tasks, offering rich contextual understanding and detailed, fluent answers.

## Usage

To use plato-9b with the transformers library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("deepvk/plato-9b")
model = AutoModelForCausalLM.from_pretrained("deepvk/plato-9b")

input_text = "Что стоит посетить в России?"  # "What is worth visiting in Russia?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# Sample up to 256 new tokens; max_new_tokens counts only the generated text,
# so the prompt length does not eat into the generation budget
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
# Что стоит посетить в России?
# 1. Красная площадь и Кремль в Москве
# 2. Эрмитаж в Санкт-Петербурге
# 3. Байкал
# 4. Соловецкие острова
# 5. Камчатка и её вулканы
# 6. Золотое Кольцо
# 7. Казанский Кремль
# 8. Алтай
# 9. Астраханская область и Волго-Донской канал
# 10. Кавказские горы и Черноморское побережье
#
# Каждое из этих мест предлагает уникальные культурные, исторические и природные достопримечательности,
# которые делают Россию столь удивительной и разнообразной страной.
```
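Since plato-9b is based on gemma-2-9b-it, the prompt can also be built with the tokenizer's chat template. A minimal sketch, assuming the standard gemma-2 template ships with the tokenizer:

```python
# Chat-template prompting (assumes the tokenizer bundles the gemma-2 template)
messages = [{"role": "user", "content": "Что стоит посетить в России?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker to complete
    return_tensors="pt",
)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```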

## Dataset

We applied both Supervised Fine-Tuning (SFT) and Preference Optimization (PO). For SFT, we used an 8B-token instruction dataset, of which 4B tokens are dialogues and the rest cover math, biology, chemistry, code, and general knowledge. The PO dataset contains 200M tokens of general-knowledge instructions. We trained on both datasets for several epochs.
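The exact training code is not published; the sketch below only illustrates such a two-stage SFT-then-PO pipeline with the trl library, using DPO as one possible preference-optimization method. The dataset files, column layouts, and hyperparameters here are hypothetical:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

# Stage 1: supervised fine-tuning on the instruction mixture
# (hypothetical file with a "messages" column of chat turns)
sft_ds = load_dataset("json", data_files="sft_mixture.jsonl")["train"]
sft_trainer = SFTTrainer(
    model="google/gemma-2-9b-it",
    train_dataset=sft_ds,
    args=SFTConfig(output_dir="plato-9b-sft", num_train_epochs=2),
)
sft_trainer.train()

# Stage 2: preference optimization on the preference pairs
# (hypothetical file with "prompt", "chosen", and "rejected" columns)
po_ds = load_dataset("json", data_files="po_pairs.jsonl")["train"]
dpo_trainer = DPOTrainer(
    model="plato-9b-sft",  # continue from the SFT checkpoint
    train_dataset=po_ds,
    args=DPOConfig(output_dir="plato-9b", num_train_epochs=1),
)
dpo_trainer.train()
```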

## Evaluation

To evaluate the model, we applied the LLM-as-a-judge approach. Specifically, we used the arena-general-ru and arena-hard-ru benchmarks with gpt-4o as the judge and gpt-4o-mini as the baseline. Parenthesized values in the tables below are confidence intervals, and "SC" stands for style control.
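As an illustration of the setup (the real arena harness uses more elaborate judge prompts and scoring), a single pairwise comparison with the OpenAI client might look like this sketch of ours:

```python
from openai import OpenAI

client = OpenAI()

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask gpt-4o which of two answers is better; returns 'A', 'B', or 'tie'."""
    prompt = (
        f"Question: {question}\n\n"
        f"Answer A: {answer_a}\n\n"
        f"Answer B: {answer_b}\n\n"
        "Which answer is better? Reply with exactly 'A', 'B', or 'tie'."
    )
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic verdicts for reproducibility
    )
    return completion.choices[0].message.content.strip()
```

Aggregating such verdicts over all benchmark prompts (with the answer order swapped to cancel position bias) yields the win-rate-style scores below, which is why the gpt-4o-mini baseline sits at exactly 50.00.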

### arena-general-ru

| Model | Score | Score w/ SC |
|---|---|---|
| gpt-4o-2024-11-20 | 81.87 (-2.04, +1.81) | 78.42 (-2.39, +2.33) |
| gpt-4o-mini-2024-07-18 | 50.00 (-0.00, +0.00) | 50.00 (-0.00, +0.00) |
| deepvk/plato-9b | 41.27 (-2.18, +2.24) | 32.13 (-1.97, +2.05) |
| t-tech/T-lite-it-1.0 | 38.52 (-2.04, +2.98) | 30.38 (-1.90, +3.15) |
| google/gemma-2-9b-it | 27.46 (-2.06, +1.74) | 25.80 (-2.09, +1.98) |
| Qwen/Qwen2.5-7B-Instruct | 24.60 (-2.36, +2.38) | 23.67 (-2.36, +2.28) |
| IlyaGusev/saiga_gemma2_9b | 17.83 (-1.95, +1.66) | 18.46 (-2.22, +1.69) |

### arena-hard-ru

| Model | Score | Score w/ SC |
|---|---|---|
| gpt-4o-2024-11-20 | 85.70 (-1.45, +1.38) | 80.19 (-1.99, +2.04) |
| gpt-4o-mini-2024-07-18 | 50.00 (-0.00, +0.00) | 50.00 (-0.00, +0.00) |
| t-tech/T-lite-it-1.0 | 34.80 (-1.98, +2.38) | 26.99 (-1.74, +2.67) |
| deepvk/plato-9b | 31.81 (-1.92, +1.90) | 24.25 (-1.71, +1.84) |
| Qwen/Qwen2.5-7B-Instruct | 20.84 (-1.99, +1.67) | 17.70 (-1.63, +1.68) |
| google/gemma-2-9b-it | 12.98 (-1.36, +1.57) | 12.97 (-1.46, +1.69) |
| IlyaGusev/saiga_gemma2_9b | 9.72 (-1.34, +1.50) | 10.64 (-1.40, +1.78) |

## Citation

Both authors contributed equally; the order is alphabetical.

```bibtex
@misc{deepvk2025plato-9b,
    title={plato-9b},
    author={Eliseev, Anton and Semin, Kirill},
    url={https://huggingface.co/deepvk/plato-9b},
    publisher={Hugging Face},
    year={2025},
}
```