SOD Collection: SOD (Step-wise On-policy Distillation) model family for small language model agents (3 items).
SOD-GRPO_teacher-4B is a 4B agentic reasoning model trained with GRPO (Group Relative Policy Optimization), serving as the teacher model in the SOD distillation framework.
This model is used to distill smaller student models (SOD-0.6B and SOD-1.7B) via the SOD method, which introduces adaptive step-level weighting to handle cascading error propagation in tool-integrated reasoning.
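The adaptive step-level weighting idea can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the released training code: the function names and the specific weighting rule (down-weighting later steps by the KL divergence accumulated over earlier steps, so cascading errors do not dominate the loss) are assumptions for illustration only.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sod_step_weighted_loss(teacher_steps, student_steps, decay=0.5):
    """Step-wise distillation loss with adaptive step-level weights (sketch).

    Each reasoning step contributes a per-step KL between teacher and
    student next-token distributions; later steps are down-weighted by
    the KL accumulated so far, a stand-in for handling cascading error
    propagation in tool-integrated reasoning.
    """
    total, cum_kl = 0.0, 0.0
    weights = []
    for t_dist, s_dist in zip(teacher_steps, student_steps):
        step_kl = kl_divergence(t_dist, s_dist)
        w = math.exp(-decay * cum_kl)  # adaptive weight: shrinks after divergent steps
        weights.append(w)
        total += w * step_kl
        cum_kl += step_kl
    return total, weights

# Toy example: 3 "steps", each a distribution over 2 tokens.
teacher = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]]
student = [[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]]
loss, w = sod_step_weighted_loss(teacher, student)
```

In this toy run the first step matches the teacher exactly, so its weight stays at 1.0, while the weight applied to the third step drops once the second step diverges.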

| Attribute | Value |
|---|---|
| Base Model | Qwen3-4B |
| Training Pipeline | Cold-Start SFT → GRPO |
| Parameters | 4B |

The SOD model family:

| Model | Description |
|---|---|
| SOD-0.6B | SOD-distilled 0.6B student |
| SOD-1.7B | SOD-distilled 1.7B student |
| SOD-GRPO_teacher-4B | GRPO-trained 4B teacher model (this model) |
We report average@32 over 5 runs on challenging math, science, and code benchmarks.

| Method | AIME 2024 | AIME 2025 | GPQA-Diamond | LiveCodeBench-v6 | Average |
|---|---|---|---|---|---|
| GRPO (This Model) | 67.60 | 60.42 | 55.19 | 63.13 | 61.59 |

Student models distilled from this teacher:

| Model | AIME 2024 | AIME 2025 | GPQA-Diamond | LiveCodeBench-v6 | Average |
|---|---|---|---|---|---|
| SOD-0.6B | 20.84 | 26.13 | 22.19 | 27.72 | 24.22 |
| SOD-1.7B | 50.83 | 41.72 | 38.72 | 40.63 | 42.98 |
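The average@32 metric reported above can be sketched as follows. This is an illustrative sketch, not the actual evaluation harness: function names and the toy data are assumptions.

```python
def average_at_k(correct_flags):
    """average@k: fraction of k sampled completions that are correct."""
    return sum(correct_flags) / len(correct_flags)

def mean_over_runs(runs):
    """Mean of average@k across independent evaluation runs."""
    scores = [average_at_k(flags) for flags in runs]
    return sum(scores) / len(scores)

# Toy example: 5 runs of 32 samples each (booleans = per-sample correctness).
runs = [[i % 2 == 0 for i in range(32)] for _ in range(5)]
score = mean_over_runs(runs)
```

Here "average@32 over 5 runs" means each benchmark question is sampled 32 times per run, the per-run accuracy is averaged over those samples, and the reported number is the mean across 5 independent runs.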
We sincerely thank the authors of DemyAgent-4B and the paper "Demystifying Reinforcement Learning in Agentic Reasoning" (arXiv:2510.11701) for their contributions.
@article{zhong2026sod,
  title={SOD: Step-wise On-policy Distillation for Small Language Model Agents},
  author={Qiyong Zhong and Mao Zheng and Mingyang Song and Xin Lin and Jie Sun and Houcheng Jiang and Xiang Wang and Junfeng Fang},
  journal={arXiv preprint arXiv:2605.07725},
  year={2026}
}