---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- spatial
- multimodal
size_categories:
- 1K<n<10K
---

# Dataset Card for TOPVIEWRS

The TOPVIEWRS (Top-View Reasoning in Space) benchmark is a multimodal benchmark intended to evaluate the spatial reasoning abilities of current Vision-Language Models.
It consists of 11,384 multiple-choice questions with either a realistic or a semantic top-view map as visual input, across 4 perception and reasoning tasks with different levels of complexity.
For details, please refer to the [project page](https://topviewrs.github.io/) and the [paper](https://arxiv.org/pdf/2406.02537).

## Dataset Description

- **Homepage/Repository:** [https://topviewrs.github.io/](https://topviewrs.github.io/)
- **Paper:** [TOPVIEWRS: Vision-Language Models as Top-View Spatial Reasoners](https://arxiv.org/pdf/2406.02537)
- **Point of Contact:** [[email protected]](mailto:[email protected])

## Dataset Details

### Dataset Features

- **Multi-Scale Top-View Maps**: Multi-scale top-view maps of single rooms and full houses vary the granularity of the entities (objects or rooms) involved in spatial reasoning.
- **Realistic Environmental Scenarios with Rich Object Sets**: Real-world indoor environments with 80 objects per scene on average.
- **Structured Question Framework**: Four tasks comprising 9 sub-tasks in total, allowing for fine-grained evaluation and analysis of models’ capabilities from various perspectives and levels of granularity.

### Dataset Statistics

The TOPVIEWRS evaluation dataset comprises a total of 11,384 multiple-choice questions after human verification: 5,539 questions are associated with realistic top-view maps, and 5,845 with semantic top-view maps.
The answers are uniformly distributed over choices A (25.5%), B (24.6%), C (24.5%), and D (25.4%).

The maps are collected from the Matterport3D dataset, which includes 90 building-scale scenes with instance-level semantic and room-level region annotations in 3D meshes.
We filter these to exclude multi-floor and low-quality scenes, selecting 7 scenes with an average of 80 objects and 12 rooms each.

**Note**: *We only release part of the benchmark (2 different scenarios covering all the tasks of the benchmark) in this dataset card to avoid data contamination.
For full access to the benchmark, please get in touch with [Chengzu Li](https://chengzu-li.github.io) via email: [[email protected]](mailto:[email protected]).*

### Uses

```python
from datasets import load_dataset

data = load_dataset(
    "chengzu/topviewrs",
    trust_remote_code=True,
    map_type=MAP_TYPE,
    task_split=TASK_SPLIT,
    image_save_dir=IMAGE_SAVE_DIR
)
```

To use the dataset, you have to specify several arguments when calling `load_dataset`:

- `map_type`: should be one of `['realistic', 'semantic']`
- `task_split`: should be one of `['top_view_recognition', 'top_view_localization', 'static_spatial_reasoning', 'dynamic_spatial_reasoning']`
- `image_save_dir`: the directory where you would like the images to be saved
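
For example, a concrete call might look as follows (the `map_type` and `task_split` values are taken from the options above; the save directory is an arbitrary illustrative path):

```python
from datasets import load_dataset

# Load the realistic-map version of the top-view recognition task;
# map images are written to the given local directory.
data = load_dataset(
    "chengzu/topviewrs",
    trust_remote_code=True,
    map_type="realistic",
    task_split="top_view_recognition",
    image_save_dir="./topviewrs_images",
)
```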

### Data Instances

For example, an instance from the `top_view_recognition` task looks as follows:

```python
{
    'index': 0,
    'scene_id': '17DRP5sb8fy',
    'question': 'Which of the following objects are in the room?',
    'choices': ['shelving', 'bed', 'toilet', 'seating'],
    'labels': ['bed'],
    'choice_type': '<OBJECT>',
    'map_path': '<IMAGE_SAVE_DIR>/data/mp3d/17DRP5sb8fy/semantic/17DRP5sb8fy_0_0.png',
    'question_ability': 'object_recognition'
}
```
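
Once loaded, examples can be inspected directly. A minimal sketch (assuming the data loads as a single default split and that Pillow is installed; adjust the split name to whatever `load_dataset` actually returns):

```python
from PIL import Image

sample = data["train"][0]  # split name assumed for illustration
print(sample["question"], sample["choices"], sample["labels"])

# `map_path` points at the top-view map saved under `image_save_dir`.
map_image = Image.open(sample["map_path"])
map_image.show()
```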

### Data Fields

Every example has the following fields:

- `index`: an `int` feature
- `scene_id`: a `string` feature, the unique id of the scene from Matterport3D
- `question`: a `string` feature
- `choices`: a sequence of `string` features, the choices of the multiple-choice question
- `labels`: a sequence of `string` features, the answers of the multiple-choice question. A label's position in `choices` determines whether it corresponds to A, B, C, or D (see the sketch after this list).
- `choice_type`: a `string` feature
- `map_path`: a `string` feature, the path of the input image
- `question_ability`: a `string` feature, the sub-task for fine-grained evaluation and analysis
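
As a sketch of the letter mapping described above (using the example instance from the previous section):

```python
# Map each gold label to its answer letter (A-D) via its index in `choices`.
choices = ['shelving', 'bed', 'toilet', 'seating']
labels = ['bed']

letters = [chr(ord('A') + choices.index(label)) for label in labels]
print(letters)  # ['B']
```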

For the `dynamic_spatial_reasoning` task, there is one additional field:

- `reference_path`: a sequence of `list[int]` features, the coordinate sequence of the navigation path on the top-view map.

## Citation

```bibtex
@misc{li2024topviewrs,
    title={TopViewRS: Vision-Language Models as Top-View Spatial Reasoners},
    author={Chengzu Li and Caiqi Zhang and Han Zhou and Nigel Collier and Anna Korhonen and Ivan Vulić},
    year={2024},
    eprint={2406.02537},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```