Embarrassingly Simple Self-Distillation Improves Code Generation
Paper: arXiv:2604.01193
This model is a fine-tuned version of Qwen/Qwen3-VL-2B-Instruct. It has been trained using TRL.

Quick start:
from transformers import pipeline

# Load the fine-tuned checkpoint as a chat-style text-generation pipeline.
generator = pipeline("text-generation", model="ml-agent-explorers/ssd-qwen3vl-oxfordpets", device="cuda")

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Pass the prompt as a single-turn chat message and decode only the newly generated tokens.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
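Because the base model is a vision-language model, you can also pass an image along with the prompt. A minimal sketch, assuming transformers' image-text-to-text pipeline task; the image URL and question below are placeholders:

from transformers import pipeline

# Vision-language usage: chat messages whose content mixes image and text parts.
vlm = pipeline("image-text-to-text", model="ml-agent-explorers/ssd-qwen3vl-oxfordpets", device="cuda")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/pet.jpg"},  # placeholder image URL
        {"type": "text", "text": "What breed is this pet?"},
    ],
}]
output = vlm(text=messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])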
This model was trained with SSD, a method introduced in the paper "Embarrassingly Simple Self-Distillation Improves Code Generation" (arXiv:2604.01193).
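Self-distillation, broadly, fine-tunes a model on its own generations. The sketch below illustrates that general recipe with TRL's SFTTrainer; it is not the authors' implementation, and the toy dataset, prompt/completion format, and hyperparameters are all assumptions:

from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy stand-in for self-distilled data: prompts paired with model-generated
# completions that passed some correctness check (e.g. unit tests for code).
self_distilled = Dataset.from_list([
    {"prompt": "Write a function that reverses a string.",
     "completion": "def reverse(s):\n    return s[::-1]"},
])

trainer = SFTTrainer(
    model="Qwen/Qwen3-VL-2B-Instruct",  # base model being fine-tuned
    train_dataset=self_distilled,
    args=SFTConfig(output_dir="ssd-checkpoint", max_steps=10),
)
trainer.train()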
Cite SSD as:
@article{zhang2026ssd,
    title  = {{Embarrassingly Simple Self-Distillation Improves Code Generation}},
    author = {Ruixiang Zhang and Richard He Bai and Huangjie Zheng and Navdeep Jaitly and Ronan Collobert and Yizhe Zhang},
    year   = {2026},
    eprint = {arXiv:2604.01193},
}
Cite TRL as:
@software{vonwerra2020trl,
    title   = {{TRL: Transformers Reinforcement Learning}},
    author  = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    license = {Apache-2.0},
    url     = {https://github.com/huggingface/trl},
    year    = {2020},
}