TestTime RLVR: Test-Time Reinforcement Learning with Verification and Reasoning
Based on Absolute Zero Reasoner (AZR) Methodology
News • Links • Roadmap • Algorithm Flow • Results
Getting Started • Training • Usage • Evaluation
Citation • Acknowledgement • Contact • Star History
TestTime RLVR Implementation
Overview
TestTime RLVR implements test-time reinforcement learning for enhanced reasoning capabilities using the AZR (Absolute Zero Reasoner) methodology. The system generates Input-Program-Output (IPO) triples from benchmark problems and creates three types of reasoning tasks (induction, deduction, abduction) to improve model performance at test time.
Key Features
- Complete Pipeline: LLM Solution Generation → IPO Extraction → Task Generation → LLM Evaluation → Reward Computation
- AZR Integration: Full integration with Absolute Zero Reasoner templates and evaluation methods
- Benchmark Support: MBPP+ and HumanEval+ datasets with structured data extraction
- Execution-based Evaluation: Program execution comparison instead of string matching
- VLLM Optimization: Faster inference with VLLM backend support
Implementation Status
- ✅ Phase 1: Infrastructure Setup - Complete pipeline architecture
- ✅ Phase 2: Benchmark System - MBPP+/HumanEval+ integration
- ✅ Phase 3: AZR Template Integration - Three reasoning tasks implementation
- ✅ Phase 4: Complete Pipeline - Fully functional end-to-end system
- Phase 5: RLVR Training - Reinforcement learning integration (In Progress)
Dataset Setup
Download required benchmark datasets:
# Download MBPP+ and HumanEval+ datasets
wget -O evaluation/code_eval/data/MbppPlus.jsonl https://huggingface.co/datasets/evalplus/mbppplus/resolve/main/MbppPlus.jsonl
wget -O evaluation/code_eval/data/HumanEvalPlus.jsonl https://huggingface.co/datasets/evalplus/humanevalplus/resolve/main/HumanEvalPlus.jsonl
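To sanity-check the downloads, here is a minimal sketch that assumes the standard evalplus JSONL layout (one JSON object per line with a task_id field); adjust the field name if your copy differs:

```python
import json

# Quick sanity check of the downloaded benchmark files.
# Assumes the evalplus JSONL layout: one JSON object per line with a "task_id" field.
for path in [
    "evaluation/code_eval/data/MbppPlus.jsonl",
    "evaluation/code_eval/data/HumanEvalPlus.jsonl",
]:
    with open(path) as fh:
        problems = [json.loads(line) for line in fh if line.strip()]
    print(f"{path}: {len(problems)} problems, e.g. {problems[0].get('task_id')}")
```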
Quick Start
Running the Pipeline
# Navigate to test directory
cd test/
# Set GPU device
export CUDA_VISIBLE_DEVICES=6
# Execute complete pipeline
bash run_testtime_gpu6.sh
Command Line Options
# From test/ directory
python test_complete_pipeline.py \
--model "Qwen/Qwen2.5-7B" \
--benchmark "mbpp" \
--problem_id "Mbpp/478" \
--max_tokens 2048 \
--gpu 6 \
--verbose \
--output_dir ../tmp
Batch Evaluation
# From test/ directory
bash run_batch_evaluation.sh "Qwen/Qwen2.5-7B" "mbpp" 10 6
Supported Benchmarks
- MBPP+: --benchmark mbpp --problem_id "Mbpp/X"
- HumanEval+: --benchmark humaneval --problem_id "HumanEval/X"
- Test Mode: --benchmark test (example problems)
Results Structure
tmp/{benchmark}/{problem_id}/                          # Single problem results
├── initial_solution/                                  # LLM's original solution + correctness
│   ├── {problem_id}_original_problem.txt              # Original benchmark problem
│   ├── {problem_id}_llm_solution.txt                  # LLM solution + correctness evaluation
│   └── {problem_id}_extracted_program.py              # Extracted function code
├── ipo_triples/                                       # Input-Program-Output triples
├── task_prompts/                                      # Generated reasoning tasks
├── llm_responses/                                     # LLM responses to tasks
├── extracted_answers/                                 # Extracted answers from responses
├── {problem_id}_reward_analysis.json
├── {problem_id}_reward_summary.txt
└── {problem_id}_pipeline_summary.json

test/batch_results/                                    # Batch evaluation results
└── batch_evaluation_{timestamp}/
    ├── batch_evaluation_results.json                  # Detailed results with correctness stats
    └── evaluation_summary.md                          # Summary report with accuracy rates
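To aggregate results across problems after a run, a minimal sketch that relies only on the file names shown above (the JSON schema itself is not assumed) could look like this:

```python
import json
from pathlib import Path

# Collect per-problem pipeline summaries under tmp/.
# Only the file layout shown above is assumed; the JSON contents are printed as-is.
for summary_path in sorted(Path("tmp").rglob("*_pipeline_summary.json")):
    with open(summary_path) as fh:
        summary = json.load(fh)
    print(f"{summary_path.parent}: {json.dumps(summary)[:120]}")
```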
AZR References
- [AZR Project Page]
- [AZR Paper]
- [AZR Models]
- [AZR Code]
- [AZR Logs]
TestTime RLVR Roadmap
TestTime RLVR Algorithm Flow
TestTime RLVR implements a comprehensive test-time reasoning pipeline based on AZR methodology:
Pipeline Stages
1. LLM Solution Generation: The model generates an initial solution for a given benchmark problem (MBPP+/HumanEval+).
2. IPO Triple Extraction: Input-Program-Output triples are created using structured benchmark data and LLM solution execution.
3. Task Generation: Three types of reasoning tasks are generated (see the sketch after this list):
   - Induction: Deduce the function from input/output pairs + message
   - Deduction: Predict the output from code + input
   - Abduction: Predict the input from code + output
4. LLM Evaluation: The model attempts to solve the generated reasoning tasks using AZR prompts and templates.
5. Reward Computation: Solutions are verified through program execution, receiving accuracy-based rewards.
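As a concrete illustration of stages 2-4, the minimal, self-contained sketch below shows how a single IPO triple yields the three task types. The prompt wording here is illustrative only; the real pipeline uses the AZR templates.

```python
# Sketch: deriving the three reasoning tasks from one Input-Program-Output (IPO) triple.
# The prompt text is illustrative; the actual pipeline uses the AZR templates.

program = '''
def f(nums):
    """Return the sum of the squares of the input numbers."""
    return sum(x * x for x in nums)
'''
ipo_input, ipo_output = [1, 2, 3], 14  # one IPO triple

deduction_task = (
    f"Given the program:\n{program}\nPredict the output of f({ipo_input!r})."
)
abduction_task = (
    f"Given the program:\n{program}\nFind an input such that f(input) == {ipo_output!r}."
)
induction_task = (
    "Given the docstring 'Return the sum of the squares of the input numbers' "
    f"and the example f({ipo_input!r}) == {ipo_output!r}, write the function f."
)

for name, task in [("deduction", deduction_task),
                   ("abduction", abduction_task),
                   ("induction", induction_task)]:
    print(f"--- {name} ---\n{task}\n")
```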
Key Innovations
- Structured Data Integration: Direct use of the benchmark's base_input/plus_input fields instead of assert parsing
- Execution-based Evaluation: Program execution comparison for accurate task evaluation (see the sketch below)
- Function Name Normalization: Consistent f function naming following AZR methodology
- Docstring Utilization: LLM-generated docstrings enhance induction task quality
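A minimal sketch of the execution-based check is shown below. It uses a bare exec for illustration, whereas the actual pipeline runs candidate answers through the (sandboxed) AZR executor.

```python
# Execution-based evaluation sketch: instead of string-matching the model's
# answer, execute the program and compare values. Illustrative only; the
# actual pipeline uses the sandboxed AZR executor.

def run_f(program_src: str, arg):
    """Execute a program that defines f and return f(arg). Unsafe outside a sandbox."""
    namespace = {}
    exec(program_src, namespace)
    return namespace["f"](arg)

program = "def f(nums):\n    return sum(x * x for x in nums)\n"

# Deduction: does the model's predicted output match the executed output?
predicted_output = 14
deduction_reward = float(run_f(program, [1, 2, 3]) == predicted_output)

# Abduction: does the model's predicted input reproduce the target output?
predicted_input, target_output = [3, 2, 1], 14
abduction_reward = float(run_f(program, predicted_input) == target_output)

print(deduction_reward, abduction_reward)  # 1.0 1.0
```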
Results
Main Results
Our approach achieves strong performance across both code and math reasoning benchmarks without using any external data:
Model | Base | #data | Code Avg | Math Avg | Total Avg |
---|---|---|---|---|---|
Base Models | |||||
Qwen2.5-7B | - | - | 52.0 | 27.5 | 39.8 |
Qwen2.5-7B-Ins | - | - | 56.3 | 37.0 | 46.7 |
Qwen2.5-7B-Coder | - | - | 56.6 | 23.9 | 40.2 |
Reasoners Trained on Curated Code Data | |||||
AceCoder-RM | Ins | 22k | 58.3 | 37.4 | 47.9 |
AceCoder-RM | Coder | 22k | 57.3 | 27.5 | 42.4 |
AceCoder-Rule | Ins | 22k | 55.4 | 36.9 | 46.2 |
AceCoder-Rule | Coder | 22k | 60.0 | 28.5 | 44.3 |
CodeR1-LC2k | Ins | 2k | 60.5 | 35.6 | 48.0 |
CodeR1-12k | Ins | 10k | 61.3 | 33.5 | 47.4 |
Reasoners Trained on Curated Math Data | |||||
PRIME-Zero | Coder | 484k | 37.2 | 45.8 | 41.5 |
SimpleRL-Zoo | Base | 8.5k | 54.0 | 38.5 | 46.3 |
Oat-Zero | Math | 8.5k | 45.4 | 44.3 | 44.9 |
ORZ | Base | 57k | 55.6 | 41.6 | 48.6 |
Absolute Zero Training w/ No Curated Data (Ours) | |||||
AZR (Ours) | Base | 0 | 55.2 +3.2 | 38.4 +10.9 | 46.8 +7.0 |
AZR (Ours) | Coder | 0 | 61.6 +5.0 | 39.1 +15.2 | 50.4 +10.2 |
Scaling Results
AZR shows consistent improvements across model sizes and types:
Model Family | Variant | Code Avg | Math Avg | Total Avg |
---|---|---|---|---|
Llama3.1-8b | - | 28.5 | 3.4 | 16.0 |
Llama3.1-8b | + AZR (Ours) | 31.6 +3.1 | 6.8 +3.4 | 19.2 +3.2 |
Qwen2.5-3B Coder | - | 51.2 | 18.8 | 35.0 |
Qwen2.5-3B Coder | + AZR (Ours) | 54.9 +3.7 | 26.5 +7.7 | 40.7 +5.7 |
Qwen2.5-7B Coder | - | 56.6 | 23.9 | 40.2 |
Qwen2.5-7B Coder | + AZR (Ours) | 61.6 +5.0 | 39.1 +15.2 | 50.4 +10.2 |
Qwen2.5-14B Coder | - | 60.0 | 20.2 | 40.1 |
Qwen2.5-14B Coder | + AZR (Ours) | 63.6 +3.6 | 43.0 +22.8 | 53.3 +13.2 |
Getting Started
Environment Setup
conda env create -f azr_env.yml
conda activate azr
pip install -r flashattn_requirements.txt
Data Processing
Process the evaluation data for CruxEval / LiveCodeBench-Execution used during AZR self-play:
python -m absolute_zero_reasoner.data_construction.process_code_reasoning_data
Training
⚠️ WARNING ⚠️: The Python executor in this repository is very raw and intended for research purposes only. It is not secure for production environments. We plan to update our executor to more secure implementations in the future. Your use of our code is at your own discretion and risk.
Seeding (Optional)
The seed datasets we collected by prompting each model are provided in data/. If you want to create your own seed data, use the following script:
export OUTPUT_SEED_PATH=data/<new_ded_abd_seed_data_name>.jsonl
export OUTPUT_CODE_F_SEED_PATH=data/<new_ind_seed_data_name>.jsonl
bash scripts/seeding/<7b|14b|coder3b|coder7b|coder14b|llama>.sh
Self-play
3B models need 2 × 80GB GPUs, 7B/8B models need 4 × 80GB, and 14B models require 8 × 80GB.
bash scripts/selfplay/<7b|14b|coder3b|coder7b|coder14b|llama>.sh
If you want to use your own ded/abd or ind seed dataset:
export OUTPUT_SEED_PATH=data/<your_ded_abd_seed_data_name>.jsonl
export OUTPUT_CODE_F_SEED_PATH=data/<your_ind_seed_data_name>.jsonl
bash scripts/selfplay/<7b|14b|coder3b|coder7b|coder14b|llama>.sh
To use the newly supported sandbox-fusion executor, use Docker and set azr.executor=sandboxfusion.
Resuming Runs
When resuming a run, put the original run's wandb id into the script, i.e., trainer.wandb_run_id=<run_id>.
Converting veRL checkpoints to HF format
python -m absolute_zero_reasoner.utils.convert2hf \
<veRL_ckpt_path>/actor \
<veRL_ckpt_path>/actor/huggingface/ \
<hf_ckpt_path>
Design Your Own Intrinsic Rewards!
In the configs, just add your own rewards to azr.reward.generation_reward_config; check the ones already implemented, such as the diversity and complexity rewards. Be creative!
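As a toy illustration of the kind of signal such a reward can compute, the sketch below scores a batch of generated programs by how distinct they are from one another. It does not follow the repo's actual reward interface or config registration; it only demonstrates the idea of a diversity-style intrinsic reward.

```python
# Toy diversity-style intrinsic reward: reward each generated program by how
# different it is (token-wise) from the rest of the batch. Purely illustrative;
# it does not implement the repo's reward interface.

def diversity_reward(programs):
    token_sets = [set(p.split()) for p in programs]
    rewards = []
    for i, tokens in enumerate(token_sets):
        others = [t for j, t in enumerate(token_sets) if j != i]
        if not others or not tokens:
            rewards.append(0.0)
            continue
        # 1 - mean Jaccard similarity to the other programs in the batch.
        sims = [len(tokens & o) / len(tokens | o) for o in others]
        rewards.append(1.0 - sum(sims) / len(sims))
    return rewards

batch = [
    "def f(x): return x + 1",
    "def f(x): return x + 1",
    "def f(xs): return sorted(xs)[::-1]",
]
print(diversity_reward(batch))  # identical programs receive lower reward
```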
Usage
We use the DeepSeek R1 <think> and <answer> tags as the prompt template:
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>. User: {question}\nAssistant: <think>
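A minimal sketch of wrapping a question in this template and extracting the answer span from a completion (assuming the model closes the tags as instructed):

```python
import re

# Wrap a question in the R1-style prompt above and pull the <answer> span
# out of a completion. Assumes the model closes the tags as instructed.
SYSTEM = (
    "A conversation between User and Assistant. The user asks a question, and "
    "the Assistant solves it. The assistant first thinks about the reasoning "
    "process in the mind and then provides the user with the answer. The "
    "reasoning process and answer are enclosed within <think> </think> and "
    "<answer> </answer> tags, respectively, i.e., <think> reasoning process "
    "here </think> <answer> answer here </answer>."
)

def build_prompt(question: str) -> str:
    return f"{SYSTEM} User: {question}\nAssistant: <think>"

def extract_answer(completion: str):
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    return match.group(1).strip() if match else None

prompt = build_prompt("What is 2 + 2?")
completion = " 2 + 2 = 4. </think> <answer> 4 </answer>"
print(extract_answer(completion))  # -> "4"
```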
Evaluation Code
LiveCodeBench
Setup: LCB requires downloading the data first:
git clone https://hf-mirror.com/datasets/livecodebench/code_generation_lite evaluation/code_eval/coding/LiveCodeBench/code_generation_lite
Evaluation:
bash evaluation/code_eval/scripts/run_lcb_gen.sh --model <andrewzh/Absolute_Zero_Reasoner-Coder-3b>
Evalplus
A new conda env is needed for evalplus:
conda create -n evalplus python=3.11
pip install --upgrade "evalplus[vllm] @ git+https://github.com/evalplus/evalplus@d362e933265c3e7e3df8101c930a89c3c470cd9f"
Evaluation:
```bash
conda activate evalplus
bash evaluation/code_eval/scripts/run_evalplus.sh 0 <humaneval|mbpp> <andrewzh/Absolute_Zero_Reasoner-Coder-3b>
```
Math
Please refer to evaluation/math_eval/README.md for math evaluation.
Citation
If you find Absolute Zero Reasoner helpful, please cite us.
@misc{zhao2025absolutezeroreinforcedselfplay,
title={Absolute Zero: Reinforced Self-play Reasoning with Zero Data},
author={Andrew Zhao and Yiran Wu and Yang Yue and Tong Wu and Quentin Xu and Yang Yue and Matthieu Lin and Shenzhi Wang and Qingyun Wu and Zilong Zheng and Gao Huang},
year={2025},
eprint={2505.03335},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2505.03335},
}
Acknowledgement
Our reinforcement learning training codebase is a fork of the veRL framework. For rollouts, we used vLLM. The Python executor components are adapted from the QwQ Repository. Additionally, we borrowed our README structure from PRIME. Many thanks to the authors of these projects for their excellent contributions!
Contact
Feel free to contact Andrew Zhao via email: [email protected]