SOD-GRPO_teacher-4B

Paper on arXiv · Code on GitHub · HuggingFace Collection

About

SOD-GRPO_teacher-4B is a 4B agentic reasoning model trained with GRPO (Group Relative Policy Optimization), serving as the teacher model in the SOD distillation framework.
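GRPO scores each rollout against a group of samples for the same prompt instead of a learned value function. A minimal sketch of the standard group-relative advantage normalization (illustrative only, not taken from this repository's training code):

```python
def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each rollout's reward against its group's statistics.

    GRPO samples a group of rollouts per prompt and uses
    A_i = (r_i - mean(group)) / (std(group) + eps)
    as the advantage, avoiding a separate value network.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# With binary correctness rewards, correct rollouts receive positive
# advantage and incorrect ones negative.
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```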

This model is used to distill smaller student models (SOD-0.6B and SOD-1.7B) via the SOD method, which introduces adaptive step-level weighting to handle cascading error propagation in tool-integrated reasoning.
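The exact SOD weighting scheme is defined in the paper; as a purely illustrative sketch, step-level weighting can be thought of as scaling each reasoning step's distillation loss by an adaptive weight, so that steps contaminated by an upstream tool-call error do not dominate the gradient. The helper below is hypothetical, not code from the SOD repository:

```python
def weighted_distill_loss(step_losses, step_weights):
    """Combine per-step distillation losses with adaptive step weights.

    step_losses[t]: student-vs-teacher distillation loss for step t.
    step_weights[t]: hypothetical adaptive weight in [0, 1], e.g. lowered
    for steps that follow an erroneous tool call (cascading errors).
    Returns the weight-normalized total loss.
    """
    assert len(step_losses) == len(step_weights)
    total_w = sum(step_weights)
    if total_w == 0:
        return 0.0
    return sum(w * l for w, l in zip(step_weights, step_losses)) / total_w
```

With uniform weights this reduces to the plain mean over steps; zeroing a weight removes that step's contribution entirely.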

Model Information

| Attribute | Value |
|---|---|
| Base Model | Qwen3-4B |
| Training Pipeline | Cold-Start SFT → GRPO |
| Parameters | 4B |
| Tensor Type | BF16 |

Related Models

| Model | Description |
|---|---|
| SOD-0.6B | SOD-distilled 0.6B student |
| SOD-1.7B | SOD-distilled 1.7B student |
| SOD-GRPO_teacher-4B | GRPO-trained 4B teacher model (this model) |

Performance

We report average@32 over 5 runs on challenging math, science, and code benchmarks.

| Method | AIME 2024 | AIME 2025 | GPQA-Diamond | LiveCodeBench-v6 | Average |
|---|---|---|---|---|---|
| GRPO (This Model) | 67.60 | 60.42 | 55.19 | 63.13 | 61.59 |

Distilled Students

| Model | AIME 2024 | AIME 2025 | GPQA-Diamond | LiveCodeBench-v6 | Average |
|---|---|---|---|---|---|
| SOD-0.6B | 20.84 | 26.13 | 22.19 | 27.72 | 24.22 |
| SOD-1.7B | 50.83 | 41.72 | 38.72 | 40.63 | 42.98 |
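The average@32 metric above can be sketched as follows: score 32 sampled generations per problem, average within each problem, average across problems, then average the resulting figure over the 5 independent runs. A minimal illustration (function names are my own, not from the evaluation harness):

```python
def avg_at_k(per_problem_scores, k=32):
    """average@k: mean score over k sampled generations per problem,
    averaged across all problems in the benchmark.

    per_problem_scores: list of per-problem score lists (1.0 = correct).
    """
    return sum(sum(s[:k]) / k for s in per_problem_scores) / len(per_problem_scores)

def mean_over_runs(run_scores):
    """Final reported number: mean of avg@k across independent runs."""
    return sum(run_scores) / len(run_scores)
```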

Acknowledgement

We sincerely thank the authors of DemyAgent-4B and the paper "Demystifying Reinforcement Learning in Agentic Reasoning" (arXiv:2510.11701) for their contributions.

Citation

```bibtex
@article{zhong2026sod,
      title={SOD: Step-wise On-policy Distillation for Small Language Model Agents},
      author={Qiyong Zhong and Mao Zheng and Mingyang Song and Xin Lin and Jie Sun and Houcheng Jiang and Xiang Wang and Junfeng Fang},
      journal={arXiv preprint arXiv:2605.07725},
      year={2026}
}
```