Co-rewarding: GT-GRPO Qwen3-4B-Base trained on OpenRS
This model is a checkpoint of Qwen3-4B-Base trained with the GT-GRPO method (GRPO with ground-truth rewards) on the OpenRS training set. It is part of the work presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
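A minimal usage sketch with the Hugging Face transformers library is shown below. `MODEL_ID` is a placeholder for this repository's model id, and the prompt is only illustrative; the prompt template used during training may differ.

```python
# Minimal inference sketch (assumes transformers and accelerate are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "MODEL_ID"  # placeholder: replace with this checkpoint's repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

prompt = "Solve step by step: if 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```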
Paper Abstract Summary
The paper introduces Co-rewarding, a novel self-supervised reinforcement learning (RL) framework designed to enhance the reasoning abilities of large language models (LLMs). It aims to achieve training stability by leveraging complementary supervision from multiple views. The framework is instantiated in two ways:
- Co-rewarding-I: A data-side approach that derives reward signals from contrastive agreement across semantically analogous questions (a rough sketch of this idea follows below).
- Co-rewarding-II: A model-side approach that uses a slowly-updated reference teacher with pseudo labels for self-distillation.

These two instantiations introduce discrepancies that prevent training from collapsing onto trivial reasoning solutions. Empirically, Co-rewarding trains stably and outperforms other self-rewarding baselines on a range of mathematical reasoning benchmarks, in some cases surpassing ground-truth RLVR.
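As a rough illustration of the Co-rewarding-I idea only (this is not the paper's exact objective; the function names, rollout counts, and the majority-vote agreement rule are illustrative assumptions), a reward for each rollout can be derived from answer agreement between a question and a semantically analogous rephrasing of it:

```python
from collections import Counter

def majority_answer(answers):
    """Return the most frequent final answer among a group of rollouts."""
    return Counter(answers).most_common(1)[0][0]

def agreement_rewards(answers_original, answers_rephrased):
    """Illustrative cross-view agreement rewards (a sketch, not the paper's exact formulation).

    Rollouts on the original question are rewarded when their final answer matches
    the majority answer obtained on the rephrased question, and vice versa, so
    neither view can reward itself directly.
    """
    pseudo_label_from_rephrased = majority_answer(answers_rephrased)
    pseudo_label_from_original = majority_answer(answers_original)
    rewards_original = [1.0 if a == pseudo_label_from_rephrased else 0.0 for a in answers_original]
    rewards_rephrased = [1.0 if a == pseudo_label_from_original else 0.0 for a in answers_rephrased]
    return rewards_original, rewards_rephrased

# Example: four rollouts per view of the same math question.
orig = ["42", "42", "41", "42"]
reph = ["42", "42", "42", "40"]
print(agreement_rewards(orig, reph))  # ([1.0, 1.0, 0.0, 1.0], [1.0, 1.0, 1.0, 0.0])
```

The cross-view pseudo labels are what supply the "discrepancy" mentioned above: each view is scored against the other view's consensus rather than its own, which discourages collapse onto a trivial self-consistent answer.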
GitHub Repository
For comprehensive details on the Co-rewarding framework, installation instructions, training scripts, and additional checkpoints, please visit the official GitHub repository: https://github.com/tmlr-group/Co-rewarding
Citation
If you use this model or any resources from the Co-rewarding project, please cite the following paper:
```bibtex
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```