SOD (Step-wise On-policy Distillation) model family for small language model agents.
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("youngzhong/SOD-1.7B")
model = AutoModelForCausalLM.from_pretrained("youngzhong/SOD-1.7B")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

SOD-1.7B is a 1.7B student model distilled from a 4B teacher using SOD (Step-wise On-policy Distillation), a method designed for training small language model agents with tool-integrated reasoning capabilities.
SOD addresses the cascading error propagation problem in on-policy distillation for agentic reasoning by introducing an adaptive step-level weighting mechanism that suppresses distillation loss on drifted steps and restores supervision when the student recovers alignment — all at negligible additional computational cost.
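As a rough illustration of the idea (not the exact objective from the paper), the sketch below applies a per-step gate to a token-level on-policy distillation loss: steps where the teacher assigns low probability to the student's own tokens are treated as drifted and their loss is zeroed, and supervision resumes on later steps once the teacher's likelihood recovers. The function name, the forward-KL term, the `drift_threshold` heuristic, and the binary gating are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def step_weighted_distill_loss(student_logits, teacher_logits, trajectory, step_ids,
                               drift_threshold=0.5):
    """Illustrative step-gated on-policy distillation loss (not the paper's exact rule).

    student_logits, teacher_logits: [seq_len, vocab] logits scored on the same
        student-generated trajectory (on-policy supervision).
    trajectory: [seq_len] token ids actually sampled by the student.
    step_ids:   [seq_len] id of the reasoning/tool step each token belongs to.
    """
    log_p_s = F.log_softmax(student_logits, dim=-1)
    log_p_t = F.log_softmax(teacher_logits, dim=-1)

    # Token-level forward KL(teacher || student) as the distillation term.
    token_kl = (log_p_t.exp() * (log_p_t - log_p_s)).sum(-1)  # [seq_len]

    # How plausible the student's own tokens look to the teacher.
    teacher_prob = log_p_t.gather(-1, trajectory.unsqueeze(-1)).squeeze(-1).exp()

    # Step-level gate: suppress the loss on drifted steps, keep it on aligned ones.
    weights = torch.zeros_like(token_kl)
    for s in step_ids.unique():
        mask = step_ids == s
        aligned = teacher_prob[mask].mean() >= drift_threshold
        weights[mask] = float(aligned)

    return (weights * token_kl).sum() / weights.sum().clamp(min=1.0)
```

In a training loop, `student_logits` would come from the student being distilled and `teacher_logits` from a frozen forward pass of the teacher (here, SOD-GRPO_teacher-4B) over the same student-generated tokens; the actual weighting rule and divergence used by SOD are described in the paper.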
| Attribute | Value |
|---|---|
| Base Model | Qwen3-1.7B |
| Teacher Model | SOD-GRPO_teacher-4B |
| Training Pipeline | Cold-Start SFT → SOD (Step-wise On-policy Distillation) |
| Parameters | 1.7B |
| Model | Description |
|---|---|
| SOD-0.6B | SOD-distilled 0.6B student |
| SOD-1.7B | SOD-distilled 1.7B student (this model) |
| SOD-GRPO_teacher-4B | GRPO-trained 4B teacher model |
We report average@32 over 5 runs on challenging math, science, and code benchmarks.
| Method | AIME 2024 | AIME 2025 | GPQA-Diamond | LiveCodeBench-v6 | Average |
|---|---|---|---|---|---|
| Vanilla | 9.90 | 8.96 | 26.80 | 22.73 | 17.10 |
| SFT | 26.77 | 22.40 | 29.85 | 24.63 | 25.91 |
| GRPO | 25.63 | 21.67 | 33.55 | 20.70 | 25.39 |
| OPD | 43.86 | 37.04 | 31.73 | 32.45 | 36.27 |
| OPSD_gt | 33.85 | 24.69 | 35.02 | 22.73 | 29.07 |
| OPSD_hint | 34.42 | 21.43 | 33.46 | 23.12 | 28.11 |
| SOD (This Model) | 50.83 | 41.72 | 38.72 | 40.63 | 42.98 |
For reference, the GRPO-trained 4B teacher (SOD-GRPO_teacher-4B) scores as follows on the same benchmarks:

| Method | AIME 2024 | AIME 2025 | GPQA-Diamond | LiveCodeBench-v6 | Average |
|---|---|---|---|---|---|
| GRPO | 67.60 | 60.42 | 55.19 | 63.13 | 61.59 |
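The scores above are avg@32: each problem is sampled multiple times per run, the per-sample accuracies are averaged, and the resulting run-level scores are averaged again over the 5 runs. Below is a minimal sketch of that arithmetic, assuming 32 samples per problem and binary correctness grading.

```python
from statistics import mean

def avg_at_k(per_problem_correct, k=32):
    """avg@k: mean correctness over k sampled completions per problem,
    averaged across all problems (assumes binary grading per sample)."""
    assert all(len(samples) == k for samples in per_problem_correct)
    return mean(mean(samples) for samples in per_problem_correct)

# Two toy problems, 32 samples each: 24/32 and 16/32 correct -> 0.625.
toy = [[1] * 24 + [0] * 8, [1] * 16 + [0] * 16]
print(avg_at_k(toy))  # 0.625

# The reported numbers additionally average this score over 5 independent runs.
```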
@article{zhong2026sod,
title={SOD: Step-wise On-policy Distillation for Small Language Model Agents},
author={Qiyong Zhong and Mao Zheng and Mingyang Song and Xin Lin and Jie Sun and Houcheng Jiang and Xiang Wang and Junfeng Fang},
journal={arXiv preprint arXiv:2605.07725},
year={2026}
}
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="youngzhong/SOD-1.7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)