---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- transformers
- multimodal
library_name: transformers
---
# VL-Rethinker-7B
**🚀 News:** <u>We release our meticulously curated collection of RL training queries for multimodal reasoning: [ViRL39K](https://huggingface.co/datasets/TIGER-Lab/ViRL39K).</u>
**VL-Rethinker-7B** achieves state-of-the-art results on a variety of multimodal reasoning benchmarks.
It is trained with the **GRPO-SSR and Forced Rethinking** techniques on the meticulously curated [ViRL39K](https://huggingface.co/datasets/TIGER-Lab/ViRL39K) dataset.
For details of our approach and performance comparison, please see our [paper](https://github.com/TIGER-AI-Lab/VL-Rethinker/blob/main/paper.pdf).
For details of training and evaluation, please see our [code repo](https://github.com/TIGER-AI-Lab/VL-Rethinker/).
Explore further via the following links:
| [**🚀Project Page**](https://tiger-ai-lab.github.io/VL-Rethinker/) | [**📖Paper**](https://arxiv.org/abs/2504.08837) | [**🔗Github**](https://github.com/TIGER-AI-Lab/VL-Rethinker/) | [**🤗Data**](https://huggingface.co/datasets/TIGER-Lab/ViRL39K) |
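## Quick Start
The sketch below shows one way to run inference with this model through 🤗 Transformers. It assumes the model follows the standard Qwen2.5-VL interface (it is fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct) and that the repo id is `TIGER-Lab/VL-Rethinker-7B`; adjust the model path, image path, and prompt to your setup.
```python
# Minimal inference sketch. Assumptions: transformers >= 4.49 with Qwen2.5-VL support,
# the qwen-vl-utils helper package, and the repo id "TIGER-Lab/VL-Rethinker-7B".
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "TIGER-Lab/VL-Rethinker-7B", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("TIGER-Lab/VL-Rethinker-7B")

# A single-image reasoning query; "path/to/problem.png" is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/problem.png"},
            {"type": "text", "text": "Solve the problem shown in the image. Think step by step."},
        ],
    }
]

# Build model inputs with the Qwen2.5-VL chat template and vision preprocessing.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Reasoning traces with rethinking can be long, so allow a generous token budget.
output_ids = model.generate(**inputs, max_new_tokens=2048)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```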
## Citation
If you find this model useful, please cite our work:
```bibtex
@article{vl-rethinker,
  title={VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning},
  author={Wang, Haozhe and Qu, Chao and Huang, Zuming and Chu, Wei and Lin, Fangzhen and Chen, Wenhu},
  journal={arXiv preprint arXiv:2504.08837},
  year={2025}
}
```