---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: multimodal_question
    dtype: string
  - name: answer
    dtype: string
  - name: rationale
    dtype: string
  - name: text_only_question
    dtype: string
  - name: image_source
    dtype: string
  - name: evidence
    dtype: string
  - name: resolution
    dtype: string
  - name: proportion_of_roi
    dtype: string
  - name: category
    dtype: string
  - name: text_in_image
    dtype: string
  - name: rationale_granularity
    dtype: string
  - name: image
    dtype: image
  - name: cropped_image
    dtype: image
  splits:
  - name: train
    num_bytes: 157153160.0
    num_examples: 129
  download_size: 157133331
  dataset_size: 157153160.0
configs:
- config_name: default
  data_files:
  - split: train
    path: hard_data/train-*
---
# VisualSimpleQA

## Introduction

VisualSimpleQA is a multimodal fact-seeking benchmark with two key features. First, it enables streamlined, decoupled evaluation of LVLMs in the visual and linguistic modalities. Second, it incorporates well-defined difficulty criteria to guide human annotation and to facilitate the extraction of a challenging subset, VisualSimpleQA-hard.

Experiments on 15 LVLMs show that even state-of-the-art models such as GPT-4o achieve only 60%+ correctness in multimodal fact-seeking QA on VisualSimpleQA and 30%+ on VisualSimpleQA-hard.

Furthermore, decoupled evaluation across different models highlights substantial room for improvement in both the visual and linguistic modules.

The dataset viewer above shows the 129 samples of VisualSimpleQA-hard.

**arXiv:** [https://arxiv.org/pdf/2503.06492](https://arxiv.org/pdf/2503.06492)

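
As a rough illustration of the decoupled evaluation idea described above, the sketch below compares a model's answer to the `multimodal_question` (with the image) against its answer to the `text_only_question` (without the image). This is not the official evaluation protocol: `ask_model` is a hypothetical user-supplied function standing in for whatever LVLM is being tested, and correctness is reduced to a naive substring match for brevity.

```python
# Minimal sketch of decoupled evaluation (not the official protocol).
# `ask_model(question, image=None)` is a hypothetical callable that queries
# an LVLM and returns its answer as a string.

def decoupled_eval(sample, ask_model):
    gold = sample["answer"].strip().lower()

    # Multimodal setting: the model must recognize the visual evidence
    # (the object named by the `rationale` field) and recall the linked fact.
    mm_pred = ask_model(sample["multimodal_question"], image=sample["image"])

    # Text-only setting: visual recognition is removed, isolating the
    # linguistic / knowledge component of the same question.
    txt_pred = ask_model(sample["text_only_question"], image=None)

    # Naive substring check for illustration only; the paper uses a more
    # careful judging procedure.
    return {
        "multimodal_correct": gold in mm_pred.strip().lower(),
        "text_only_correct": gold in txt_pred.strip().lower(),
    }
```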
**Data Example:**

```
{'id': 369,
 'multimodal_question': 'Which institution did the creator of this cartoon duck donate her natural science-related paintings to?',
 'answer': 'The Armitt Museum, Gallery, Library',
 'rationale': 'Jemima Puddle-Duck',
 'text_only_question': 'Which institution did the creator of Jemima Puddle-Duck donate her natural science-related paintings to?',
 'image_source': 'https://www.gutenberg.org/files/14814/14814-h/images/15-tb.jpg',
 'evidence': 'https://www.armitt.com/beatrix-potter-exhibition/\nhttps://en.wikipedia.org/wiki/Beatrix_Potter',
 'resolution': '400x360',
 'proportion_of_roi': '0.2232',
 'category': 'research and education',
 'text_in_image': 'absence',
 'rationale_granularity': 'fine-grained',
 'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=400x360 at 0x7FE82C270D70>,
 'cropped_image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=164x196 at 0x7FE82C329550>}
```
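
A record like the one above can be loaded with the `datasets` library. The repository id below is a placeholder; replace it with the dataset's actual path on the Hugging Face Hub. The default config resolves to the `hard_data/` parquet file, and the `image`/`cropped_image` columns are decoded to PIL images.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the dataset's real Hugging Face Hub path.
ds = load_dataset("<org>/VisualSimpleQA", split="train")

sample = ds[0]
print(sample["multimodal_question"])
print(sample["text_only_question"])
print(sample["answer"])

# Columns typed as `image` are decoded to PIL.Image objects on access.
sample["image"].save("example.png")
```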
## File Structure

`data/`

This directory contains all 500 samples of VisualSimpleQA, stored in parquet files.

`hard_data/`

This directory contains the 129 VisualSimpleQA-hard samples, stored in a parquet file. These samples are selected according to well-defined difficulty criteria to ensure they represent the more challenging cases in VisualSimpleQA.

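
If you want one subset specifically, the parquet files can be selected via `data_files` when loading. Again, the repository id is a placeholder, and the `data/*.parquet` pattern is an assumption about how the full 500-sample split is stored.

```python
from datasets import load_dataset

REPO = "<org>/VisualSimpleQA"  # placeholder Hugging Face Hub repo id

# Full benchmark: 500 samples under data/ (file pattern assumed).
full = load_dataset(REPO, data_files="data/*.parquet", split="train")

# Hard subset: 129 samples under hard_data/ (also the default config).
hard = load_dataset(REPO, data_files="hard_data/train-*", split="train")

print(len(full), len(hard))
```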
## Disclaimer

This dataset contains images collected from various sources. The authors do NOT claim ownership or copyright over the images. The images may be subject to third-party rights, and users are solely responsible for verifying the legal status of any content before use.

- Intended Use: The images are provided for non-commercial research purposes only.

- Redistribution Prohibition: You may NOT redistribute or modify the images without permission from the original rights holders.

- Reporting Violations: If you encounter any sample that potentially breaches copyright or licensing rules, contact us at [email protected]. Verified violations will be removed promptly.

The authors disclaim all liability for copyright infringement or misuse arising from the use of this dataset. Users assume full legal responsibility for their actions.
## License

- Text Data: Licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/).
- Images: Subject to custom terms (see the Disclaimer above).
## Citation

**BibTeX:**

```bibtex
@article{wang2025visualsimpleqa,
  title={VisualSimpleQA: A Benchmark for Decoupled Evaluation of Large Vision-Language Models in Fact-Seeking Question Answering},
  author={Yanling Wang and Yihan Zhao and Xiaodong Chen and Shasha Guo and Lixin Liu and Haoyang Li and Yong Xiao and Jing Zhang and Qi Li and Ke Xu},
  journal={arXiv preprint arXiv:2503.06492},
  year={2025}
}
```