Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models

This repository contains Co-rewarding-I Qwen3-8B-Base, a model trained on the OpenRS dataset. It is an instantiation of Co-rewarding, a self-supervised reinforcement learning (RL) framework introduced in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.

Co-rewarding improves the reasoning ability of large language models (LLMs) by stabilizing self-supervised RL training. It addresses the training collapse commonly observed in single-view self-rewarding methods by seeking complementary supervision from multiple views. Co-rewarding-I is the data-side instantiation: it derives reward signals from contrastive agreement across semantically analogous questions. The method trains stably and yields significant improvements over other self-rewarding baselines on a range of mathematical reasoning benchmarks.
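To make the data-side idea concrete, the snippet below is a minimal sketch of a cross-view reward: rollouts on the original question are scored against a pseudo-label derived from rollouts on a rephrased version of the same question. The majority-vote scheme and the `extract_answer` helper are illustrative assumptions, not the official implementation; see the GitHub repository for the actual training code.

```python
# Illustrative sketch of a Co-rewarding-I-style cross-view reward.
# The majority-vote pseudo-labeling and `extract_answer` are assumptions,
# not the paper's exact implementation.
from collections import Counter

def extract_answer(completion: str) -> str:
    """Hypothetical helper: pull the final boxed answer out of a completion."""
    return completion.split("\\boxed{")[-1].rstrip("}").strip()

def co_reward(completions_q: list[str], completions_q_rephrased: list[str]) -> list[float]:
    """Score rollouts on the original question by their agreement with the
    majority answer obtained on a semantically analogous (rephrased) question."""
    # Pseudo-label: the most common answer across rollouts of the rephrased view.
    votes = Counter(extract_answer(c) for c in completions_q_rephrased)
    pseudo_label, _ = votes.most_common(1)[0]
    # Reward 1.0 if a rollout's answer matches the cross-view pseudo-label, else 0.0.
    return [1.0 if extract_answer(c) == pseudo_label else 0.0 for c in completions_q]
```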

For more details, including installation instructions, training scripts, additional checkpoints, and evaluation procedures, please refer to the official GitHub repository: https://github.com/tmlr-group/Co-rewarding
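The checkpoint loads like any causal language model with Hugging Face Transformers. The following is a minimal inference sketch; the prompt and generation settings are illustrative, and `torch` and `transformers` are assumed to be installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Co-rewarding-I-Qwen3-8B-Base-OpenRS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint is stored in BF16
    device_map="auto",
)

prompt = "Solve step by step: If 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```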

Citation

If you use our datasets or models, please cite our paper:

@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
Model details

Model ID: TMLR-Group-HF/Co-rewarding-I-Qwen3-8B-Base-OpenRS
Model size: 8B parameters
Tensor type: BF16 (Safetensors)
