---
license: mit
task_categories:
- text-to-image
- visual-question-answering
language:
- en
---
|
# Data Statistics of M2RAG
|
|
|
Click the links below to view our paper and GitHub project.

<a href='https://arxiv.org/abs/2502.17297'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> <a href='https://github.com/NEUIR/M2RAG'><img src="https://img.shields.io/badge/Github-M2RAG-blue?logo=Github"></a>

If you find this work useful, please cite our paper and give us a shining star 🌟 on GitHub.
|
|
|
```
@misc{liu2025benchmarkingretrievalaugmentedgenerationmultimodal,
      title={Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts},
      author={Zhenghao Liu and Xingsheng Zhu and Tianshuo Zhou and Xinyi Zhang and Xiaoyuan Yi and Yukun Yan and Yu Gu and Ge Yu and Maosong Sun},
      year={2025},
      eprint={2502.17297},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2502.17297},
}
```
|
## 🎃 Overview |
|
|
|
The **M²RAG** benchmark evaluates how well Multi-modal Large Language Models (MLLMs) can use multi-modal retrieved documents to answer questions. It covers four tasks: image captioning, multi-modal QA, fact verification, and image reranking, assessing MLLMs’ ability to leverage knowledge from multi-modal contexts.
|
|
|
<p align="center"> |
|
<img align="middle" src="https://raw.githubusercontent.com/NEUIR/M2RAG/main/assets/m2rag.png" style="width: 600px;" alt="m2rag"/> |
|
</p> |
|
|
|
## 🎃 Data Storage Structure |
|
The data storage structure of M2RAG is as follows: |
|
```
M2RAG/
├── fact_verify/
├── image_cap/
├── image_rerank/
├── mmqa/
├── imgs.lineidx.new
└── imgs.tsv
```
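
As a quick orientation, here is a minimal sketch of downloading the dataset files with the `huggingface_hub` client and checking that the layout above is in place. The `repo_id` below is a placeholder assumption, not confirmed by this card; replace it with this dataset's actual Hub identifier.

```
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the dataset repository (repo_id is a placeholder; use the
# actual Hub identifier of this dataset).
local_dir = snapshot_download(repo_id="NEUIR/M2RAG", repo_type="dataset")

# Verify that the expected task folders and image files are present.
root = Path(local_dir)
for name in ["fact_verify", "image_cap", "image_rerank", "mmqa",
             "imgs.lineidx.new", "imgs.tsv"]:
    print(name, "->", "found" if (root / name).exists() else "missing")
```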
|
|
|
❗️Note: |
|
|
|
- If you encounter difficulties when downloading the images directly, please download and use the pre-packaged image file `M2RAG_Images.zip` instead.
|
|
|
- To obtain `imgs.tsv`, you can follow the instructions in the [WebQA](https://github.com/WebQnA/WebQA?tab=readme-ov-file#download-data) project. Specifically, first download all the data from the folder [WebQA_imgs_7z_chunks](https://drive.google.com/drive/folders/19ApkbD5w0I5sV1IeQ9EofJRyAjKnA7tb), and then run `7z x imgs.7z.001` to unzip and merge all chunks into `imgs.tsv`. A sketch for reading images from this file is shown below.
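
The following is a minimal sketch for reading an image out of `imgs.tsv` once it has been assembled. It assumes the WebQA convention that each line of `imgs.tsv` is `image_id<TAB>base64-encoded bytes` and that `imgs.lineidx.new` stores one byte offset per line; the file paths and the example line index are placeholders.

```
import base64
from io import BytesIO

from PIL import Image

# Load the per-line byte offsets (one integer per line of imgs.tsv),
# assuming the WebQA-style .lineidx format.
with open("M2RAG/imgs.lineidx.new") as f:
    offsets = [int(line.strip()) for line in f]

def read_image(tsv_path: str, line_no: int):
    """Seek to a line of imgs.tsv and decode its base64 image payload."""
    with open(tsv_path, "rb") as fp:
        fp.seek(offsets[line_no])
        image_id, b64 = fp.readline().decode("utf-8").rstrip("\n").split("\t")
    return image_id, Image.open(BytesIO(base64.b64decode(b64)))

# Example usage: read the first image in the file (placeholder index).
image_id, img = read_image("M2RAG/imgs.tsv", 0)
print(image_id, img.size)
```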