Co-rewarding
Co-rewarding is a novel self-supervised RL framework that improves training stability by seeking complementary supervision from other views.
This is the Qwen2.5-7B model trained with the Self-Certainty method on the MATH training set. It is part of the work presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
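As background on the baseline: self-certainty is commonly defined as the average, over token positions, of the KL divergence between a uniform distribution over the vocabulary and the model's next-token distribution. The snippet below is a minimal PyTorch sketch of that score under this assumption; it is illustrative only and not taken from the official repository.

```python
import math

import torch
import torch.nn.functional as F


def self_certainty_score(logits: torch.Tensor) -> torch.Tensor:
    """Sketch of a self-certainty score (assumed definition, not official code).

    Computes the average over positions of KL(Uniform || p_model), where
    p_model is the next-token distribution at each generated position.

    logits: tensor of shape (seq_len, vocab_size) for the generated tokens.
    Returns a scalar; higher values mean the model is more certain.
    """
    vocab_size = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # KL(U || p) at one position = -log(V) - (1/V) * sum_j log p_j
    kl_per_position = -math.log(vocab_size) - log_probs.mean(dim=-1)
    return kl_per_position.mean()
```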
For more details on the Co-rewarding framework and its implementation, see the official GitHub repository: https://github.com/tmlr-group/Co-rewarding.
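Below is a minimal sketch of running the model for inference with the transformers library. The `model_id` is a placeholder (this card does not confirm the exact Hub identifier), so substitute the actual repository name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with this model's actual Hub identifier.
model_id = "tmlr-group/Qwen2.5-7B-SelfCertainty-MATH"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Solve: If 3x + 5 = 20, what is x? Show your reasoning."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If you find this work useful, please cite: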
```bibtex
@article{zhang2025coreward,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```