---
language:
- en
tags:
- multimodal
- reinforcement-learning
- reflection
- reasoning
- dataset
license: mit
task_categories:
- question-answering
pretty_name: SRPO Dataset
size_categories:
- 10K<n<100K
---
# SRPO Dataset: Reflection-Aware RL Training Data
This repository provides the multimodal reasoning dataset used in the paper:
**[SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning](https://arxiv.org/abs/2506.01713)**
We release two versions of the dataset:
- **39K version** (`modified_39Krelease.jsonl` + `images.zip`)
- **Enhanced 47K+ version** (`47K_release_plus.jsonl` + `47K_release_plus.zip`)
Both follow the same unified format, containing multimodal (image–text) reasoning data with self-reflection supervision. The 47K+ version further incorporates high-quality external datasets, such as [PhyX](https://arxiv.org/abs/2505.15929) and [We-Math 2.0](https://arxiv.org/abs/2508.10433), to strengthen physical and mathematical reasoning.
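The files can be fetched with the `huggingface_hub` client. The snippet below is a minimal sketch; the `REPO_ID` value is a placeholder, not the actual repository name, so substitute the real dataset id before running.

```python
# Minimal download sketch using huggingface_hub.
# REPO_ID is a placeholder -- replace it with the actual dataset repository.
from huggingface_hub import hf_hub_download

REPO_ID = "your-org/SRPO-dataset"  # placeholder

jsonl_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="47K_release_plus.jsonl",
    repo_type="dataset",
)
images_zip = hf_hub_download(
    repo_id=REPO_ID,
    filename="47K_release_plus.zip",
    repo_type="dataset",
)
print(jsonl_path, images_zip)
```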
## 📂 Data Format
The data is stored in **JSON Lines (`.jsonl`)** format. Each sample includes an ID, a multimodal input (image + text), and the ground-truth answer.
Example:
```json
{
"id": "12",
"message": "[{\"role\": \"user\", \"content\": [{\"type\": \"image\", \"image\": \"/path/to/images/Processed-65d5feaa-714b-4a86-97e4-dc72802c4593-0.jpg\"}, {\"type\": \"text\", \"text\": \"<image>\\nAre there more berries with two leaves or with one leaf?\"}]}]",
"answer": "\\boxed{Two leaves}"
}
```
- **id**: Unique sample identifier
- **message**: Conversation-style user input, combining image reference and textual query
- **answer**: Ground-truth answer in LaTeX-style format
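A minimal loading sketch is given below. It assumes the field layout of the example record above, in particular that `message` is itself a JSON-encoded string whose first content entry is the image reference and whose second is the text query.

```python
# Minimal loading sketch for the JSONL release (field names follow the
# example record above: "id", "message", "answer").
import json

samples = []
with open("47K_release_plus.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "message" is a JSON-encoded string; decode it again to recover
        # the conversation structure.
        record["message"] = json.loads(record["message"])
        samples.append(record)

sample = samples[0]
image_path = sample["message"][0]["content"][0]["image"]
question = sample["message"][0]["content"][1]["text"]
print(sample["id"], image_path, question, sample["answer"])
```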
## 📂 Citation
```bibtex
@article{wan2025srpo,
  title={{SRPO}: Enhancing Multimodal {LLM} Reasoning via Reflection-Aware Reinforcement Learning},
author={Wan, Zhongwei and Dou, Zhihao and Liu, Che and Zhang, Yu and Cui, Dongfei and Zhao, Qinjian and Shen, Hui and Xiong, Jing and Xin, Yi and Jiang, Yifan and others},
journal={arXiv preprint arXiv:2506.01713},
year={2025}
}
@article{shen2025phyx,
title={PhyX: Does Your Model Have the "Wits" for Physical Reasoning?},
author={Shen, Hui and Wu, Taiqiang and Han, Qi and Hsieh, Yunta and Wang, Jizhou and Zhang, Yuyue and Cheng, Yuxin and Hao, Zijian and Ni, Yuansheng and Wang, Xin and others},
journal={arXiv preprint arXiv:2505.15929},
year={2025}
}
```