bruce360568 committed · Commit e5c0e38 · verified · 1 Parent(s): 2d6371e

Upload README.md with huggingface_hub

Files changed (1): README.md (+48, -0)

# SRPO Dataset: Reflection-Aware RL Training Data

This repository provides the multimodal reasoning dataset used in the paper:

**[SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning](https://arxiv.org/abs/2506.01713)**

We release two versions of the dataset:

- **39K version** (`modified_39Krelease.jsonl` + `images.zip`)
- **Enhanced 47K+ version** (`47K_release_plus.jsonl` + `47K_release_plus.zip`)

Both follow the same unified format, containing multimodal (image–text) reasoning data with self-reflection supervision. The 47K+ version further incorporates high-quality external datasets, such as [PhyX](https://arxiv.org/abs/2505.15929) and [We-Math 2.0](https://arxiv.org/abs/2508.10433), to strengthen physical and mathematical reasoning.
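
The release files can be fetched programmatically with `huggingface_hub` (the library this README was uploaded with). A minimal sketch; the repository ID below is a placeholder and should be replaced with this dataset repo's actual path:

```python
from huggingface_hub import hf_hub_download

# Placeholder repository ID; substitute the actual ID of this dataset repo.
REPO_ID = "your-namespace/SRPO-dataset"

# Download the 39K annotation file and its image archive into the local Hub cache.
jsonl_path = hf_hub_download(repo_id=REPO_ID, filename="modified_39Krelease.jsonl",
                             repo_type="dataset")
images_zip = hf_hub_download(repo_id=REPO_ID, filename="images.zip",
                             repo_type="dataset")
print(jsonl_path, images_zip)
```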

## 📂 Data Format

The data is stored in **JSON Lines (`.jsonl`)** format. Each sample includes an ID, a multimodal input (image + text), and the ground-truth answer.

Example:

```json
{
  "id": "12",
  "message": "[{\"role\": \"user\", \"content\": [{\"type\": \"image\", \"image\": \"/path/to/images/Processed-65d5feaa-714b-4a86-97e4-dc72802c4593-0.jpg\"}, {\"type\": \"text\", \"text\": \"<image>\\nAre there more berries with two leaves or with one leaf?\"}]}]",
  "answer": "\\boxed{Two leaves}"
}
```

- **id**: Unique sample identifier
- **message**: Conversation-style user input combining an image reference and a textual query; stored as a JSON-encoded string, as in the example above
- **answer**: Ground-truth answer wrapped in a LaTeX-style `\boxed{}` expression
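
To make the format concrete, here is a minimal loading sketch (illustrative only, not part of the official release code). It assumes the 39K `.jsonl` file sits in the working directory and that each `message` string parses into a single user turn with one image entry and one text entry, as in the example above:

```python
import json

DATA_PATH = "modified_39Krelease.jsonl"  # adjust to wherever you placed the file

def load_samples(path):
    """Yield one parsed sample per non-empty line of the JSON Lines file."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

for sample in load_samples(DATA_PATH):
    # "message" is itself a JSON-encoded string, so it needs a second json.loads.
    turns = json.loads(sample["message"])
    content = turns[0]["content"]
    image_path = next(c["image"] for c in content if c["type"] == "image")
    question = next(c["text"] for c in content if c["type"] == "text")
    # Strip the LaTeX \boxed{...} wrapper to recover the plain answer string.
    answer = sample["answer"].removeprefix("\\boxed{").removesuffix("}")
    print(sample["id"], image_path, question, answer)
    break  # inspect only the first sample
```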

## 📂 Citation

```bibtex
@article{wan2025srpo,
  title={SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning},
  author={Wan, Zhongwei and Dou, Zhihao and Liu, Che and Zhang, Yu and Cui, Dongfei and Zhao, Qinjian and Shen, Hui and Xiong, Jing and Xin, Yi and Jiang, Yifan and others},
  journal={arXiv preprint arXiv:2506.01713},
  year={2025}
}

@article{shen2025phyx,
  title={PhyX: Does Your Model Have the "Wits" for Physical Reasoning?},
  author={Shen, Hui and Wu, Taiqiang and Han, Qi and Hsieh, Yunta and Wang, Jizhou and Zhang, Yuyue and Cheng, Yuxin and Hao, Zijian and Ni, Yuansheng and Wang, Xin and others},
  journal={arXiv preprint arXiv:2505.15929},
  year={2025}
}
```