---
language:
- en
license: mit
task_categories:
- visual-question-answering
- image-text-to-text
tags:
- retrieval-augmented-generation
- multimodal
- benchmark
---

# M2RAG: Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts

Click the badges below to view our paper and GitHub project.
<a href='https://arxiv.org/abs/2502.17297'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a><a href='https://github.com/NEUIR/M2RAG'><img src="https://img.shields.io/badge/Github-M2RAG-blue?logo=Github"></a>

If you find this work useful, please cite our paper and give us a shining star 🌟 on GitHub.

```bibtex
@misc{liu2025benchmarkingretrievalaugmentedgenerationmultimodal,
      title={Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts}, 
      author={Zhenghao Liu and Xingsheng Zhu and Tianshuo Zhou and Xinyi Zhang and Xiaoyuan Yi and Yukun Yan and Yu Gu and Ge Yu and Maosong Sun},
      year={2025},
      eprint={2502.17297},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2502.17297}, 
}
```

## 🎃 Overview

The **M²RAG** benchmark evaluates Multi-modal Large Language Models (MLLMs) on answering questions with retrieved multi-modal documents. It covers four tasks: image captioning, multi-modal QA, multi-modal fact verification, and image reranking, assessing how well MLLMs leverage knowledge from multi-modal contexts.

The **Multi-Modal Retrieval Augmented Instruction Tuning (MM-RAIT)** method further adapts MLLMs to multi-modal in-context learning, enhancing their effectiveness in utilizing knowledge from the retrieved documents.

<p align="center">
  <img align="middle" src="https://raw.githubusercontent.com/NEUIR/M2RAG/main/assets/m2rag.png" style="width: 600px;" alt="m2rag"/>
</p>

## 🎃 Data Storage Structure
The data storage structure of M2RAG is as follows:
```
M2RAG/
    ├──fact_verify/
    ├──image_cap/
    ├──image_rerank/
    ├──mmqa/
    ├──imgs.lineidx.new
    └──imgs.tsv
```

❗️Note: 

- If you encounter difficulties when downloading the images directly, please download and use the pre-packaged image file `M2RAG_Images.zip` instead.

- To obtain `imgs.tsv`, follow the instructions in the [WebQA](https://github.com/WebQnA/WebQA?tab=readme-ov-file#download-data) project: first download all chunks from the folder [WebQA_imgs_7z_chunks](https://drive.google.com/drive/folders/19ApkbD5w0I5sV1IeQ9EofJRyAjKnA7tb), then run `7z x imgs.7z.001` to unzip and merge them into `imgs.tsv`. A sketch of reading images from this file is shown below.
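
For reference, here is a minimal sketch of reading an image out of `imgs.tsv`, assuming the WebQA-style layout (each row is `image_id\tbase64`, and `imgs.lineidx.new` stores one byte offset per row):

```python
import base64
from io import BytesIO
from PIL import Image

# One byte offset per row of imgs.tsv.
with open("imgs.lineidx.new") as f:
    lineidx = [int(line.strip()) for line in f]

def load_image(row: int) -> Image.Image:
    """Seek to the given row of imgs.tsv and decode its base64 payload."""
    with open("imgs.tsv") as fp:
        fp.seek(lineidx[row])
        image_id, img_base64 = fp.readline().strip().split("\t")
    return Image.open(BytesIO(base64.b64decode(img_base64)))

img = load_image(0)
img.save("example.jpg")
```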

## 🎃 Sample Usage

### 🌵 Requirements
To use this dataset and reproduce our results, install the following packages with pip or conda:
```
Python==3.10
Pytorch
transformers==4.44.2 (4.46.1 for fine-tuning Qwen2-VL)
clip
faiss==1.9.0
tqdm
numpy
base64
diffusers
flash-attn
xformers
llamafactory
accelerate
nltk
rouge_score
sklearn
```
We also provide a `requirements.txt` file with the exact versions of all packages we used in the GitHub repository for environment configuration.

You will also need pretrained models: [MiniCPM-V 2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6), [Qwen2-VL](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct), and [VISTA](https://huggingface.co/BAAI/bge-visualized) (used for multi-modal document retrieval).
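
As a quick sanity check that the environment and checkpoints are set up, the vanilla MiniCPM-V 2.6 model can be loaded following its model card's documented usage (a minimal sketch; the benchmark scripts handle model loading themselves):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the vanilla MiniCPM-V 2.6 checkpoint (requires trust_remote_code).
model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-V-2_6",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True)
```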

### 🌵 Reproduce MM-RAIT

#### Download Code & Dataset
First, clone the project from GitHub:
```bash
git clone https://github.com/NEUIR/M2RAG
cd M2RAG
```
Second, you can either directly download [M2RAG](https://huggingface.co/datasets/whalezzz/M2RAG) (a `huggingface_hub` download sketch follows the layout below), or build it step by step following the instructions in `data/data_preprocess`. Please place the downloaded dataset in the `data` folder, as shown below.
(❗️Note: for `imgs.tsv`, you need to download the chunks from [this link](https://drive.google.com/drive/folders/1ApfD-RzvJ79b-sLeBx1OaiPNUYauZdAZ?usp=sharing) and run `7z x imgs.7z.001`.)

```
data/
└──m2rag/
    ├──fact_verify/
    ├──image_cap/
    ├──image_rerank/
    ├──mmqa/
    ├──imgs.lineidx.new
    └──imgs.tsv
```
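
If you prefer downloading from Python instead of the web UI, here is a minimal sketch using the `huggingface_hub` client, assuming the layout above is the desired target directory:

```python
from huggingface_hub import snapshot_download

# Download the whole dataset repository into data/m2rag.
snapshot_download(
    repo_id="whalezzz/M2RAG",
    repo_type="dataset",
    local_dir="data/m2rag",
)
```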

#### Inference in the Zero-Shot Setting
Once the dataset and vanilla models are ready, you can follow the instructions below to reproduce our zero-shot results.

* Step 1: Encode the queries from the test set and the multi-modal corpus for each task.
```bash
cd script
bash get_embed_test.sh
```

* Step 2: Retrieve the top-N most relevant multi-modal documents for each query (see the retrieval sketch at the end of this subsection).
```bash
bash retrieval_test.sh
```
* Step 3: Use the retrieved documents for vanilla RAG inference.
```bash
bash inference_cpmv.sh # or bash inference_qwen.sh
```
For the Image Reranking task, please use:
```bash
bash compute_ppl_minicpmv.sh # or bash compute_ppl_qwen2vl.sh
```
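
For reference, here is a minimal sketch of what the retrieval in Step 2 boils down to, assuming query and document embeddings have already been produced by Step 1 (the file names `doc_embeds.npy` and `query_embeds.npy` are hypothetical; the actual pipeline is implemented in `retrieval_test.sh` and the scripts it calls):

```python
import numpy as np
import faiss

# Hypothetical precomputed embeddings: corpus (N_docs x d) and queries (N_q x d).
doc_emb = np.load("doc_embeds.npy").astype("float32")
query_emb = np.load("query_embeds.npy").astype("float32")

# Cosine similarity via inner product on L2-normalized vectors.
faiss.normalize_L2(doc_emb)
faiss.normalize_L2(query_emb)

index = faiss.IndexFlatIP(doc_emb.shape[1])
index.add(doc_emb)

top_n = 5
scores, doc_ids = index.search(query_emb, top_n)  # both of shape (N_q, top_n)
```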

#### Train MM-RAIT
**Using MiniCPM-V 2.6 as an example, we show how to reproduce the results in the paper; the same procedure applies to Qwen2-VL. We also provide fine-tuned checkpoints, so you can skip this step and proceed directly to inference.**

* Step 1: Prepare the training data.
```bash
bash get_embed_train.sh
bash retrieval_train.sh
cd ../data/
bash finetune/construct_finetune_data.sh
```

* Step 2: Fine-tune the MiniCPM-V model using LoRA.
```bash
cd ../script
bash finetune_cpmv.sh
```

* Step 3: Use the fine-tuned model for inference.
```bash
bash inference_cpmv.sh
```
For the Image Reranking task, please use:
```bash
bash compute_ppl_minicpmv.sh
```

### 🌵 Evaluate Generation Effectiveness
Go to the `src/evaluation` folder and evaluate model performance as follows (a minimal metric sketch follows the list):
* For the Image Captioning and Multi-modal QA tasks, please use:
```bash
python generation.py --reference_file path_to_reference_data --candidate_file path_to_generation_data
```
* For the Multi-modal Fact Verification task, please use:
```bash
python classification.py --true_file path_to_reference_data --pred_file path_to_generation_data
```
* For the Image Reranking task, please use:
```bash
python -m pytorch_fid path/to/reference_images path/to/rerank_images
```
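
For orientation, here is a minimal, hedged sketch of the kind of metrics these scripts report (illustrative only; the actual `generation.py` and `classification.py` may compute additional or different metrics):

```python
from rouge_score import rouge_scorer
from sklearn.metrics import accuracy_score, f1_score

# Generation tasks (Image Captioning, Multi-modal QA): ROUGE-L between reference and candidate.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(
    "a dog is running on the beach",     # reference
    "a dog runs along the sandy beach",  # candidate
)["rougeL"].fmeasure

# Fact verification: accuracy and macro F1 over predicted labels.
y_true = ["supports", "refutes", "supports"]
y_pred = ["supports", "supports", "refutes"]
acc = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")

print(f"ROUGE-L: {rouge_l:.3f}  Accuracy: {acc:.3f}  Macro-F1: {macro_f1:.3f}")
```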

## 🎃 Contact
If you have any questions, suggestions, or bug reports, please email us at:
```
[email protected]     [email protected] 
```