nielsr (HF Staff) committed
Commit 5f6be3a · verified · 1 Parent(s): 276a93f

Improve dataset card for M2RAG benchmark


This PR updates the dataset card for M2RAG to enhance its discoverability and utility.

Key improvements include:
- Correcting `task_categories` to `image-text-to-text` to better reflect the dataset's multi-modal text generation tasks, while retaining `visual-question-answering`.
- Adding `retrieval-augmented-generation`, `multimodal`, and `benchmark` as relevant tags for improved searchability.
- Enhancing the overview with details about `MM-RAIT` from the paper abstract and GitHub README.
- Incorporating a comprehensive "Sample Usage" section, porting detailed instructions for setup, reproduction, and evaluation from the GitHub repository to guide users effectively.

Files changed (1)
  1. README.md +133 -8
README.md CHANGED
@@ -1,19 +1,24 @@
  ---
  license: mit
  task_categories:
- - text-to-image
  - visual-question-answering
- language:
- - en
  ---
- # Data statices of M2RAG

  Click the links below to view our paper and Github project.
  <a href='https://arxiv.org/abs/2502.17297'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a><a href='https://github.com/NEUIR/M2RAG'><img src="https://img.shields.io/badge/Github-M2RAG-blue?logo=Github"></a>

- If you find this work useful, please cite our paper and give us a shining star 🌟 in Github

- ```
  @misc{liu2025benchmarkingretrievalaugmentedgenerationmultimodal,
    title={Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts},
    author={Zhenghao Liu and Xingsheng Zhu and Tianshuo Zhou and Xinyi Zhang and Xiaoyuan Yi and Yukun Yan and Yu Gu and Ge Yu and Maosong Sun},
@@ -24,10 +29,13 @@ If you find this work useful, please cite our paper and give us a shining star
    url={https://arxiv.org/abs/2502.17297},
  }
  ```
  ## 🎃 Overview

  The **M²RAG** benchmark evaluates Multi-modal Large Language Models (MLLMs) by using multi-modal retrieved documents to answer questions. It includes four tasks: image captioning, multi-modal QA, fact verification, and image reranking, assessing MLLMs’ ability to leverage knowledge from multi-modal contexts.

  <p align="center">
  <img align="middle" src="https://raw.githubusercontent.com/NEUIR/M2RAG/main/assets/m2rag.png" style="width: 600px;" alt="m2rag"/>
  </p>
@@ -46,6 +54,123 @@ M2RAG/

  ❗️Note:

- - If you encounter difficulties when downloading the images directly, please download and use the pre-packaged image file ```M2RAG_Images.zip``` instead.

- - To obtain the ```imgs.tsv```, you can follow the instructions in the [WebQA](https://github.com/WebQnA/WebQA?tab=readme-ov-file#download-data) project. Specifically, you need to first download all the data from the folder [WebQA_imgs_7z_chunks](https://drive.google.com/drive/folders/19ApkbD5w0I5sV1IeQ9EofJRyAjKnA7tb), and then run the command ``` 7z x imgs.7z.001```to unzip and merge all chunks to get the imgs.tsv.

README.md (updated):

---
language:
- en
license: mit
task_categories:
- visual-question-answering
- image-text-to-text
tags:
- retrieval-augmented-generation
- multimodal
- benchmark
---

# M2RAG: Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts

Click the links below to view our paper and Github project.
<a href='https://arxiv.org/abs/2502.17297'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a><a href='https://github.com/NEUIR/M2RAG'><img src="https://img.shields.io/badge/Github-M2RAG-blue?logo=Github"></a>

If you find this work useful, please cite our paper and give us a shining star 🌟 on GitHub.

```bibtex
@misc{liu2025benchmarkingretrievalaugmentedgenerationmultimodal,
      title={Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts},
      author={Zhenghao Liu and Xingsheng Zhu and Tianshuo Zhou and Xinyi Zhang and Xiaoyuan Yi and Yukun Yan and Yu Gu and Ge Yu and Maosong Sun},
      url={https://arxiv.org/abs/2502.17297},
}
```

## 🎃 Overview

The **M²RAG** benchmark evaluates Multi-modal Large Language Models (MLLMs) by using multi-modal retrieved documents to answer questions. It includes four tasks: image captioning, multi-modal QA, fact verification, and image reranking, assessing MLLMs’ ability to leverage knowledge from multi-modal contexts.

The **Multi-Modal Retrieval Augmented Instruction Tuning (MM-RAIT)** method further adapts MLLMs to multi-modal in-context learning, enhancing their effectiveness in utilizing knowledge from the retrieved documents.

<p align="center">
<img align="middle" src="https://raw.githubusercontent.com/NEUIR/M2RAG/main/assets/m2rag.png" style="width: 600px;" alt="m2rag"/>
</p>

❗️Note:

- If you encounter difficulties when downloading the images directly, please download and use the pre-packaged image file `M2RAG_Images.zip` instead.

- To obtain `imgs.tsv`, follow the instructions in the [WebQA](https://github.com/WebQnA/WebQA?tab=readme-ov-file#download-data) project: first download all the data from the folder [WebQA_imgs_7z_chunks](https://drive.google.com/drive/folders/19ApkbD5w0I5sV1IeQ9EofJRyAjKnA7tb), and then run `7z x imgs.7z.001` to unzip and merge all chunks into `imgs.tsv`. A sketch of how to read the resulting file is shown below.

## 🎃 Sample Usage

### 🌵 Requirements
To use this dataset and reproduce our results, install the following packages with pip or conda:
```
Python==3.10
Pytorch
transformers==4.44.2 (4.46.1 for fine-tuning Qwen2-VL)
clip
faiss==1.9.0
tqdm
numpy
base64
diffusers
flash-attn
xformers
llamafactory
accelerate
nltk
rouge_score
sklearn
```
A `requirements.txt` file with the exact versions of all packages we used is provided in the GitHub repository for environment configuration.

You will also need the pretrained models: [MiniCPM-V 2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6), [Qwen2-VL](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct), and [VISTA](https://huggingface.co/BAAI/bge-visualized) (used for multi-modal document retrieval).

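As a quick sanity check that an MLLM backbone is set up correctly, here is a minimal sketch of loading MiniCPM-V 2.6 with `transformers` and running a single multi-modal query. The image path and question are placeholders, and the `chat` call follows the usage shown on the MiniCPM-V 2.6 model card; consult that card for the authoritative API.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_name = "openbmb/MiniCPM-V-2_6"
# trust_remote_code is required because the model ships custom modeling code.
model = AutoModel.from_pretrained(
    model_name, trust_remote_code=True, torch_dtype=torch.bfloat16
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Placeholder inputs: any RGB image plus a free-form question.
image = Image.open("example.jpg").convert("RGB")
msgs = [{"role": "user", "content": [image, "Describe this image in one sentence."]}]

answer = model.chat(image=None, msgs=msgs, tokenizer=tokenizer)
print(answer)
```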
### 🌵 Reproduce MM-RAIT

#### Download Code & Dataset
First, clone the project from GitHub:
```bash
git clone https://github.com/NEUIR/M2RAG
cd M2RAG
```
Second, either directly download and use [M2RAG](https://huggingface.co/datasets/whalezzz/M2RAG), or follow the instructions in `data/data_preprocess` to build it step by step. Place the downloaded dataset in the `data` folder so that it matches the structure below (a programmatic download sketch follows the layout).
(❗️Note: for `imgs.tsv`, you need to download the data from [this link](https://drive.google.com/drive/folders/1ApfD-RzvJ79b-sLeBx1OaiPNUYauZdAZ?usp=sharing) and run `7z x imgs.7z.001`.)

```
data/
└──m2rag/
    ├──fact_verify/
    ├──image_cap/
    ├──image_rerank/
    ├──mmqa/
    ├──imgs.lineidx.new
    └──imgs.tsv
```
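One way to fetch the dataset into that layout is `huggingface_hub`. This is only a sketch; the target directory is an assumption, so adjust `local_dir` (and move files if necessary) until your tree matches the structure above.

```python
from huggingface_hub import snapshot_download

# Download the dataset repository (assumed target directory: data/m2rag).
snapshot_download(
    repo_id="whalezzz/M2RAG",
    repo_type="dataset",
    local_dir="data/m2rag",
)
```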

#### Inference for the Zero-Shot Setting
Once the dataset and vanilla models are ready, follow the instructions below to reproduce our zero-shot results.

* Step 1: Encode the queries from the test set and the multi-modal corpus for each task.
```bash
cd script
bash get_embed_test.sh
```

* Step 2: Retrieve the top-N most relevant multi-modal documents for each query (see the retrieval sketch after these steps).
```bash
bash retrieval_test.sh
```
* Step 3: Use the retrieved documents for vanilla RAG inference.
```bash
bash inference_cpmv.sh # or bash inference_qwen.sh
```
For the Image Reranking task, please use:
```bash
bash compute_ppl_minicpmv.sh # or bash compute_ppl_qwen2vl.sh
```
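For intuition about what the retrieval scripts above do, here is a minimal stand-in for the dense-retrieval step using FAISS. It assumes query and document embeddings (e.g., produced by VISTA) have already been written to disk as L2-normalized NumPy arrays; the file names and `top_n` value are placeholders, not the repository's actual interface.

```python
import faiss
import numpy as np

# Placeholder files: L2-normalized embeddings of shape (num_items, dim).
query_emb = np.load("query_embeddings.npy").astype("float32")
doc_emb = np.load("doc_embeddings.npy").astype("float32")

# Inner product equals cosine similarity on normalized vectors.
index = faiss.IndexFlatIP(doc_emb.shape[1])
index.add(doc_emb)

top_n = 5
scores, doc_ids = index.search(query_emb, top_n)
print(doc_ids[0], scores[0])  # top-N document ids and scores for the first query
```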

#### Train MM-RAIT
**Using MiniCPM-V 2.6 as an example, the steps below show how to reproduce the results in the paper; the same procedure applies to Qwen2-VL. We also provide fine-tuned checkpoints, so you can skip training and proceed directly to inference.**

* Step 1: Prepare the training data.
```bash
bash get_embed_train.sh
bash retrieval_train.sh
cd ../data/
bash finetune/construct_finetune_data.sh
```

* Step 2: Fine-tune the MiniCPM-V model with LoRA.
```bash
cd ../script
bash finetune_cpmv.sh
```

* Step 3: Use the fine-tuned model for inference.
```bash
bash inference_cpmv.sh
```
For the Image Reranking task, please use:
```bash
bash compute_ppl_minicpmv.sh
```

### 🌵 Evaluate Generation Effectiveness
Go to the `src/evaluation` folder and evaluate model performance as follows (a brief metric sketch follows this list):
* For the Image Captioning and Multi-modal QA tasks, please use:
```bash
python generation.py --reference_file path_to_reference_data --candidate_file path_to_generation_data
```
* For the Multi-Modal Fact Verification task, please use:
```bash
python classification.py --true_file path_to_reference_data --pred_file path_to_generation_data
```
* For the Image Reranking task, please use:
```bash
python -m pytorch_fid path/to/reference_images path/to/rerank_images
```
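The exact metrics are defined by the repository's evaluation scripts; purely as an illustration of the kinds of scores involved (the requirements list `nltk`, `rouge_score`, and `sklearn`), here is a sketch computing ROUGE-L for a generation task and accuracy/macro-F1 for a classification task on made-up examples. The label strings are hypothetical.

```python
from rouge_score import rouge_scorer
from sklearn.metrics import accuracy_score, f1_score

# Generation tasks (image captioning, multi-modal QA): text overlap with a reference.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
reference = "a dog runs across the beach"
candidate = "a dog is running on the beach"
print(scorer.score(reference, candidate)["rougeL"].fmeasure)

# Fact verification: compare predicted labels against gold labels (hypothetical label names).
gold = ["supported", "refuted", "supported"]
pred = ["supported", "supported", "supported"]
print(accuracy_score(gold, pred), f1_score(gold, pred, average="macro"))
```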

## 🎃 Contact
If you have questions, suggestions, or bug reports, please email:
```

```