---
license: cc-by-nc-4.0
tags:
- image
annotations_creators:
- expert-generated
pretty_name: MVSCPS
size_categories:
- n<1K
task_categories:
- image-to-3d
---
# Neural Multi-View Self-Calibrated Photometric Stereo without Photometric Stereo Cues
This dataset contains multi-view One-Light-at-a-Time (OLAT) images captured for multi-view 3D reconstruction and inverse rendering tasks.
It was first introduced and used for qualitative evaluation in our ICCV 2025 paper.
## Dataset Structure
The dataset contains 6 scenes. Each scene directory is organized as follows:
```
{capture_date}_{material}_{name}/
├── ARW/
├── JPG/
├── mask/
├── CAM/
├── images_overview.jpeg
└── mesh_RC.obj
```
## File and Folder Descriptions
### 🏞️ ARW/ and JPG/

Contain raw images in ARW format and corresponding JPEGs captured by a SONY A7R5 camera. No post-processing (e.g., tone mapping or white balance) has been applied. File names follow the format `V{view_idx}L{light_idx}`, where:

- `view_idx`: index of the camera view
- `light_idx`: index of the light direction

Images with the same `light_idx` are captured under the same light source, which is fixed with respect to the camera.
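For reference, here is a minimal Python sketch for loading one OLAT image linearly (no tone mapping, no white balance) with `rawpy` and parsing its view/light indices from the file name. The example path, the zero-padding of the indices, and the extension casing are assumptions; check the actual file names in your download.

```python
import re
import rawpy  # pip install rawpy

path = "ARW/V01L01.ARW"  # hypothetical name; padding/extension may differ

# Recover view and light indices from the V{view_idx}L{light_idx} pattern.
m = re.match(r"V(\d+)L(\d+)", path.split("/")[-1])
view_idx, light_idx = int(m.group(1)), int(m.group(2))

# Demosaic without tone mapping or white balance, keeping the
# response linear (gamma = 1) at 16-bit depth.
with rawpy.imread(path) as raw:
    rgb_linear = raw.postprocess(
        gamma=(1, 1),
        no_auto_bright=True,
        output_bps=16,
        use_camera_wb=False,
    )
print(view_idx, light_idx, rgb_linear.shape, rgb_linear.dtype)
```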
### mask/

Contains per-view foreground masks, automatically segmented using SAM2.
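As a quick sanity check, a mask can be applied to the matching JPEG as sketched below; the mask file name and format (one single-channel image per view) are assumptions, not documented facts.

```python
import numpy as np
from PIL import Image

# Hypothetical file names; check the actual naming inside mask/ and JPG/.
img = np.asarray(Image.open("JPG/V01L01.JPG"), dtype=np.float32) / 255.0
mask = np.asarray(Image.open("mask/V01.png").convert("L"), dtype=np.float32) / 255.0

fg = img * mask[..., None]  # zero out background pixels
Image.fromarray((fg * 255).astype(np.uint8)).save("V01L01_masked.png")
```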
### 📸 CAM/

Includes camera intrinsic and extrinsic parameters as 3×4 projection matrices $P = K[R \mid t]$ for each view, calibrated using RealityCapture (now renamed RealityScan).
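Assuming each view's projection matrix is stored as whitespace-separated plain text (the file name and layout below are assumptions, not a documented format), a sketch for recovering $K$, $R$, and $t$ with OpenCV:

```python
import cv2
import numpy as np

# Hypothetical file name and plain-text layout; adapt to the actual CAM/ format.
P = np.loadtxt("CAM/V01.txt").reshape(3, 4)

# OpenCV decomposes P into intrinsics K, rotation R, and the camera
# center C in homogeneous coordinates (not the translation directly).
K, R, C_h = cv2.decomposeProjectionMatrix(P)[:3]
K /= K[2, 2]                    # normalize so that K[2, 2] == 1
C = (C_h[:3] / C_h[3]).ravel()  # camera center in world coordinates
t = -R @ C                      # translation such that P = K [R | t]

print(K, R, t, sep="\n")
```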
### 📱 images_overview.jpeg

Provides a visual overview of all images in a scene. Each sub-image is cropped from the corresponding JPEG and gamma-corrected for improved visibility.

### 🧱 mesh_RC.obj
A reference 3D mesh reconstructed from all JPEG images using RealityCapture.
⚠️ Note: This mesh is only intended for qualitative reference and should not be used for quantitative evaluation.
Because the photogrammetry pipeline was applied to images captured under varying directional lighting, it violates the photo-consistency assumption and introduces reconstruction artifacts.
## Scene Descriptions
### 20250205_ceramic_buddha
This is the first scene captured using the camera-light setup shown in Fig. 10 of the main paper; it is used in the paper's teaser image. 143 out of 144 images were successfully calibrated by RealityCapture, so only those 143 images were used in our method.
### 20250303 ~ 20250304
Captured right before the ICCV submission deadline for additional qualitative results. Each scene contains 144 successfully calibrated images.
### 20250515_ceramic_buddha
Captured during the rebuttal phase to demonstrate that a completely dark room is not strictly required for our method. All images were taken with a 1/200-second exposure under ambient lighting, in contrast to the previous scenes, which were captured at a 1/8-second exposure in darkness. Because the strobe flashlight emits a short burst of high-intensity light, reducing the exposure time from 1/8 sec to 1/200 sec does not affect the flashlight's contribution to image appearance. However, it attenuates the contribution of ambient light by a factor of 25, making it negligible.
## Data Collection & Attribution
All images were captured, calibrated, segmented, and curated by Xu Cao.
## How to Download
### Step 1: Install the Hugging Face CLI

```bash
pip install -U huggingface_hub
```
### Step 2: Download the dataset

```bash
hf download cyberagent/mvscps --repo-type dataset --local-dir ./mvscps_data
```
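Alternatively, a minimal sketch of the same download from Python via `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository into ./mvscps_data
snapshot_download(
    repo_id="cyberagent/mvscps",
    repo_type="dataset",
    local_dir="./mvscps_data",
)
```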
## License
This dataset is available under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License. You may use, share, and adapt the material for non-commercial purposes with appropriate credit.
## Citation
If you use this dataset in your work, please cite:
```bibtex
@inproceedings{mvscps2025cao,
  title     = {Neural Multi-View Self-Calibrated Photometric Stereo without Photometric Stereo Cues},
  author    = {Cao, Xu and Taketomi, Takafumi},
  year      = {2025},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
}
```