---
task_categories:
- other
pretty_name: SURPRISE3D
tags:
- 3d
- spatial-reasoning
- segmentation
- vision-language
- embodied-ai
library_name: datasets
license: mit
size_categories:
- 100K<n<1M
language:
- en
---
# SURPRISE3D Dataset
📄 **Paper**: [SURPRISE3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes](https://huggingface.co/papers/2507.07781)

🔗 **arXiv**: [arXiv:2507.07781](https://arxiv.org/abs/2507.07781)

💻 **Code**: [GitHub Repository](https://github.com/liziwennba/SUPRISE)
## Dataset Description
SURPRISE3D is a dataset for evaluating language-guided spatial reasoning segmentation in complex 3D scenes. As detailed in our [paper](https://huggingface.co/papers/2507.07781), it addresses a critical gap in current 3D vision-language research: existing datasets often mix semantic cues with spatial context, which lets models rely on semantic shortcuts instead of genuine spatial reasoning.
### Key Features:
- **200k+ vision-language pairs** across 900+ detailed indoor scenes from ScanNet++ v2
- **2.8k+ unique object classes**
- **89k+ human-annotated spatial queries** crafted without object names to mitigate shortcut biases
- Comprehensive coverage of spatial reasoning skills including:
  - Relative position reasoning
  - Narrative perspective understanding
  - Parametric perspective analysis
  - Absolute distance reasoning
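## Usage
The card's metadata lists `library_name: datasets`, so the annotations should be loadable with the Hugging Face `datasets` library. Below is a minimal sketch; the repository ID and the printed field names are assumptions for illustration, not confirmed by this card, so adjust them to the released configuration.

```python
# Minimal loading sketch for SURPRISE3D via the Hugging Face `datasets` library.
from datasets import load_dataset

# NOTE: "liziwennba/SURPRISE3D" is a hypothetical repository ID used for illustration;
# replace it with the actual dataset repo path.
dataset = load_dataset("liziwennba/SURPRISE3D")

# Inspect the available splits and the schema of a single record.
print(dataset)
print(dataset["train"][0])  # actual field names depend on the released schema
```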
## Citation
If you use SURPRISE3D in your research, please cite our paper:
```bibtex
@article{huang2025surprise3d,
  title={SURPRISE3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes},
  author={Huang, Jiaxin and Li, Ziwen and Zhang, Hanlve and Chen, Runnan and He, Xiao and Guo, Yandong and Wang, Wenping and Liu, Tongliang and Gong, Mingming},
  journal={arXiv preprint arXiv:2507.07781},
  year={2025}
}
```