arxiv:2506.18254

RLPR: Extrapolating RLVR to General Domains without Verifiers

Published on Jun 23
· Submitted by Yirany on Jun 24
Abstract

AI-generated summary: RLPR, a verifier-free framework that uses the LLM's token probability scores as reward signals, enhances reasoning in both general and mathematical domains, outperforming other methods across various benchmarks.

Reinforcement Learning with Verifiable Rewards (RLVR) demonstrates promising potential in advancing the reasoning capabilities of LLMs. However, its success remains largely confined to mathematical and code domains. This limitation stems primarily from the heavy reliance on domain-specific verifiers, which results in prohibitive complexity and limited scalability. To address this challenge, our key observation is that an LLM's intrinsic probability of generating a correct free-form answer directly indicates its own evaluation of the reasoning reward (i.e., how well the reasoning process leads to the correct answer). Building on this insight, we propose RLPR, a simple verifier-free framework that extrapolates RLVR to broader general domains. RLPR uses the LLM's own token probability scores for reference answers as the reward signal and maximizes the expected reward during training. We find that addressing the high variance of this noisy probability reward is crucial to making it work, and propose prob-to-reward and stabilizing methods to ensure a precise and stable reward from LLM intrinsic probabilities. Comprehensive experiments on four general-domain benchmarks and three mathematical benchmarks show that RLPR consistently improves reasoning capabilities in both areas for Gemma-, Llama-, and Qwen-based models. Notably, RLPR outperforms the concurrent VeriFree by 7.6 points on TheoremQA and 7.5 points on Minerva, and even surpasses the strong verifier-model-dependent approach General-Reasoner by 1.6 points on average across seven benchmarks.

Community

Paper author Paper submitter (edited Jun 24)

We demonstrate the effectiveness of RLPR on models from the Gemma, Llama, and Qwen series. RLPR surpasses RLVR on all three model families and also surpasses methods that rely on model-based verifiers.

🤗 All code, data, and models are open-sourced for the community!


Paper author Paper submitter

We manually analyze reward quality and observe that our proposed probability-based reward (PR) exhibits higher quality than naive likelihood, rule-based verifiers, and even model-based verifiers.


Paper author Paper submitter

With only one forward pass, we can obtain high-quality rewards for responses across various domains without requiring verifiers.

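Below is a minimal sketch of how such a single-pass, probability-based reward could be computed (an illustration, not the released implementation): the reference answer is appended after the sampled reasoning, one forward pass yields the token probabilities, and their mean becomes the reward. The model name and the `prob_reward` helper are placeholder assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any Hugging Face causal LM works for this sketch.
model_name = "Qwen/Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def prob_reward(prompt: str, reasoning: str, reference_answer: str) -> float:
    """Mean token probability the model assigns to the reference answer,
    conditioned on the prompt and the sampled reasoning (one forward pass)."""
    context_ids = tokenizer(prompt + reasoning, return_tensors="pt").input_ids
    answer_ids = tokenizer(reference_answer, add_special_tokens=False,
                           return_tensors="pt").input_ids
    input_ids = torch.cat([context_ids, answer_ids], dim=1)

    logits = model(input_ids).logits            # [1, seq_len, vocab]
    ans_len = answer_ids.shape[1]
    ans_logits = logits[0, -ans_len - 1:-1, :]  # positions that predict the answer tokens
    probs = torch.softmax(ans_logits.float(), dim=-1)
    token_probs = probs.gather(1, answer_ids[0].unsqueeze(1)).squeeze(1)
    return token_probs.mean().item()
```

Since the reward is read directly off the logits of a single teacher-forced pass, no generation from a verifier model is needed; averaging over answer tokens is one simple aggregation choice, and the paper's prob-to-reward transform may differ.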

@Yirany
Really interesting paper. I loved the idea of using intrinsic token probabilities as a proxy reward for non-verifiable domains.

That said, I had a question: since the reward is directly tied to the model’s current probability of generating the correct answer, doesn’t this setup inherently promote exploitation over exploration? In other words, won’t the model just reinforce the paths it already thinks are correct, without exploring potentially better or alternative reasoning chains — especially during inference?

Would love to hear how RLPR balances this trade-off, or whether that limitation is acceptable for the targeted use cases.

Paper author

Thank you for your interest and insightful question. Unlike RLVR, which assigns non-zero rewards only to fully correct responses, RLPR also provides graded rewards to partially correct samples, thereby encouraging a broader range of reasoning paths. We believe a more direct way to enhance exploration effectiveness could be through controlling model behavior during inference, and we plan to explore this direction in future work.

Interesting idea. RLPR seems to introduce additional training time by adding two extra forward passes. What is the actual training efficiency?

Paper author

Thank you for your comment. Since the computation of r’ is performed only once per prompt, we effectively introduce just one additional forward pass per response to calculate r. As a result, the training efficiency of RLPR is comparable to that of RLVR, and the actual per-step training time is primarily influenced by the filter rate.

We observe a much higher filter rate and slower training with RLVR, since its verifier tends to wrongly mark all responses to a prompt as incorrect due to linguistic variation (e.g., "A < B" is marked incorrect when the reference answer is "A less than B").
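To make the r / r' bookkeeping above concrete, here is a hedged sketch that reuses the `prob_reward` helper from the earlier snippet: r' is evaluated once per prompt with the reasoning removed, and each sampled response adds only the single forward pass needed for its own r. The clipped difference is an illustrative debiasing choice, not necessarily the paper's exact prob-to-reward transform.

```python
def debiased_rewards(prompt, reasonings, reference_answer):
    # r': baseline probability of the reference answer given the prompt alone,
    # computed once per prompt (it does not depend on the sampled responses).
    r_prime = prob_reward(prompt, "", reference_answer)

    rewards = []
    for reasoning in reasonings:          # one extra forward pass per response
        r = prob_reward(prompt, reasoning, reference_answer)
        # Illustrative debiasing: reward the improvement over the per-prompt
        # baseline, clipped into [0, 1].
        rewards.append(max(0.0, min(1.0, r - r_prime)))
    return rewards
```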


Models citing this paper 3

Datasets citing this paper 2


Collections including this paper 7