---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- visual reasoning
- transformation
- benchmark
- computer vision
size_categories:
- 1K<n<10K
---
# VisualTrans: A Benchmark for Real-World Visual Transformation Reasoning

[arXiv:2508.04043](http://arxiv.org/abs/2508.04043)
## Dataset Description

VisualTrans is the first comprehensive benchmark specifically designed for Visual Transformation Reasoning (VTR) in real-world human-object interaction scenarios. The benchmark encompasses 12 semantically diverse manipulation tasks and systematically evaluates three essential reasoning dimensions through 6 well-defined subtask types.
## Dataset Statistics

- **Total samples**: 497
- **Number of manipulation scenarios**: 12
- **Task types**: 6

### Task Type Distribution

- **count**: 63 samples (12.7%)
- **procedural_causal**: 86 samples (17.3%)
- **procedural_interm**: 88 samples (17.7%)
- **procedural_plan**: 42 samples (8.5%)
- **spatial_fine_grained**: 168 samples (33.8%)
- **spatial_global**: 50 samples (10.1%)
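
The distribution above can be recomputed directly from the benchmark file. A minimal sketch, assuming `VisualTrans.json` is a flat JSON list of samples with a `task_type` field (as shown in the Data Format section below):

```python
import json
from collections import Counter

# Tally samples per task type directly from the benchmark file.
with open('VisualTrans.json', 'r') as f:
    benchmark_data = json.load(f)

task_counts = Counter(sample['task_type'] for sample in benchmark_data)
total = sum(task_counts.values())

for task_type, count in sorted(task_counts.items()):
    print(f"{task_type}: {count} samples ({count / total:.1%})")
```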
### Manipulation Scenarios

The benchmark covers 12 diverse manipulation scenarios:

- Add Remove Lid
- Assemble Disassemble Legos
- Build Unstack Lego
- Insert Remove Bookshelf
- Insert Remove Cups From Rack
- Make Sandwich
- Pick Place Food
- Play Reset Connect Four
- Screw Unscrew Fingers Fixture
- Setup Cleanup Table
- Sort Beads
- Stack Unstack Bowls
## Dataset Structure

### Files

- `VisualTrans.json`: Main benchmark file containing questions, answers, and image paths
- `images.zip`: Compressed archive containing all images used in the benchmark

### Data Format

Each sample in the benchmark contains:

```json
{
  "task_type": "what",
  "images": [
    "scene_name/image1.jpg",
    "scene_name/image2.jpg"
  ],
  "scene": "scene_name",
  "question": "Question about the transformation",
  "label": "Ground truth answer"
}
```
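
To open the images referenced by a sample, the relative paths can be joined with the extraction directory. A minimal sketch using Pillow, assuming `images.zip` has been extracted to `images/` (as in the Usage section below) so that each entry resolves as `images/<scene_name>/<file>`:

```python
import json
import os
from PIL import Image

# Load the benchmark and open the images of one sample.
with open('VisualTrans.json', 'r') as f:
    benchmark_data = json.load(f)

sample = benchmark_data[0]
# Assumption: images.zip was extracted into 'images/', so the relative paths
# in sample['images'] resolve as 'images/<scene_name>/<file>.jpg'.
frames = [Image.open(os.path.join('images', p)) for p in sample['images']]
print(sample['question'], [img.size for img in frames])
```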
## Reasoning Dimensions

The framework evaluates three essential reasoning dimensions:

1. **Quantitative Reasoning** - Counting and numerical reasoning tasks
2. **Procedural Reasoning**
   - **Intermediate State** - Understanding process states during transformation
   - **Causal Reasoning** - Analyzing cause-effect relationships
   - **Transformation Planning** - Multi-step planning and sequence reasoning
3. **Spatial Reasoning**
   - **Fine-grained** - Precise spatial relationships and object positioning
   - **Global** - Overall spatial configuration and scene understanding
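
The `task_type` labels in the data correspond to these dimensions by name. A minimal sketch of grouping samples by dimension; the mapping below is an assumption inferred from the task type names reported in the statistics above:

```python
# Assumed mapping from task_type values to the three reasoning dimensions,
# inferred from the names in the Task Type Distribution section.
DIMENSIONS = {
    'count': 'quantitative',
    'procedural_causal': 'procedural',
    'procedural_interm': 'procedural',
    'procedural_plan': 'procedural',
    'spatial_fine_grained': 'spatial',
    'spatial_global': 'spatial',
}

def by_dimension(benchmark_data, dimension):
    """Return all samples whose task_type falls under the given dimension."""
    return [s for s in benchmark_data
            if DIMENSIONS.get(s['task_type']) == dimension]

# Example: collect all spatial-reasoning samples.
# spatial_samples = by_dimension(benchmark_data, 'spatial')
```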
## Usage

```python
import json
import zipfile

# Load the benchmark data
with open('VisualTrans.json', 'r') as f:
    benchmark_data = json.load(f)

# Extract images
with zipfile.ZipFile('images.zip', 'r') as zip_ref:
    zip_ref.extractall('images/')

# Access a sample
sample = benchmark_data[0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample['label']}")
print(f"Images: {sample['images']}")
```
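
From here, an evaluation loop only needs to compare model outputs against the `label` field. A minimal sketch with a hypothetical `predict(question, image_paths)` callable and simple exact-match scoring (the paper's official evaluation protocol may differ):

```python
from collections import defaultdict

def evaluate(benchmark_data, predict):
    """Compute per-task-type exact-match accuracy.

    `predict` is a hypothetical callable: predict(question, image_paths) -> str.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for sample in benchmark_data:
        prediction = predict(sample['question'], sample['images'])
        task = sample['task_type']
        total[task] += 1
        if prediction.strip().lower() == sample['label'].strip().lower():
            correct[task] += 1
    return {task: correct[task] / total[task] for task in total}
```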
## Citation

If you use this benchmark, please cite our work:

```bibtex
@misc{ji2025visualtransbenchmarkrealworldvisual,
  title={VisualTrans: A Benchmark for Real-World Visual Transformation Reasoning},
  author={Yuheng Ji and Yipu Wang and Yuyang Liu and Xiaoshuai Hao and Yue Liu and Yuting Zhao and Huaihai Lyu and Xiaolong Zheng},
  year={2025},
  eprint={2508.04043},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.04043},
}
```
## License

This dataset is released under the MIT License.

## Contact

For questions or issues, please open an issue on our [GitHub repository](https://github.com/WangYipu2002/VisualTrans) or contact the authors.