
HELVIPAD: A Real-World Dataset for Omnidirectional Stereo Depth Estimation

The Helvipad dataset is a real-world stereo dataset designed for omnidirectional depth estimation. It comprises 39,553 paired equirectangular images captured using a top-bottom 360° camera setup and corresponding pixel-wise depth and disparity labels derived from LiDAR point clouds. The dataset spans diverse indoor and outdoor scenes under varying lighting conditions, including night-time environments.

News

  • [16/02/2025] Helvipad has been accepted to CVPR 2025! 🎉🎉

  • [CVPR Update – 16/03/2025] If you already downloaded the dataset, we have applied a small but important update:

    • train/val split: the previous train/ folder is now split into train/ and val/ subsets.
    • bottom image fix (images_bottom/): a minor horizontal shift correction has been applied to bottom images in train/, val/, and test/.
    • disparity and depth maps adjustment (disparity_maps/, depth_maps/, disparity_maps_augmented/, depth_maps_augmented/): a small vertical shift was corrected in both standard and augmented depth and disparity maps in train/, val/, and test/.

    We have re-run all experiments, and the updated dataset produces similar results.

Dataset Structure

The dataset is organized into training, validation, and testing subsets with the following structure:

helvipad/
├── train/
│   ├── depth_maps                # Depth maps generated from LiDAR data
│   ├── depth_maps_augmented      # Augmented depth maps using depth completion
│   ├── disparity_maps            # Disparity maps computed from depth maps
│   ├── disparity_maps_augmented  # Augmented disparity maps using depth completion
│   ├── images_top                # Top-camera RGB images
│   ├── images_bottom             # Bottom-camera RGB images
│   ├── LiDAR_pcd                 # Original LiDAR point cloud data
├── val/
│   ├── depth_maps                # Depth maps generated from LiDAR data
│   ├── depth_maps_augmented      # Augmented depth maps using depth completion
│   ├── disparity_maps            # Disparity maps computed from depth maps
│   ├── disparity_maps_augmented  # Augmented disparity maps using depth completion
│   ├── images_top                # Top-camera RGB images
│   ├── images_bottom             # Bottom-camera RGB images
│   ├── LiDAR_pcd                 # Original LiDAR point cloud data
├── test/
│   ├── depth_maps                # Depth maps generated from LiDAR data
│   ├── depth_maps_augmented      # Augmented depth maps using depth completion (only for computing LRCE)
│   ├── disparity_maps            # Disparity maps computed from depth maps
│   ├── disparity_maps_augmented  # Augmented disparity maps using depth completion (only for computing LRCE)
│   ├── images_top                # Top-camera RGB images
│   ├── images_bottom             # Bottom-camera RGB images
│   ├── LiDAR_pcd                 # Original LiDAR point cloud data

The dataset repository also includes:

  • helvipad_utils.py: utility functions for reading depth and disparity maps, converting disparity to depth, and handling disparity values in pixels and degrees;
  • calibration.json: intrinsic and extrinsic calibration parameters for the stereo cameras and LiDAR sensor.
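As an illustration of the kind of helper `helvipad_utils.py` provides, a minimal sketch of converting disparities between pixels and degrees is shown below. The image height and angular coverage used here are assumptions for the example, not the dataset's confirmed conventions; refer to `helvipad_utils.py` and `calibration.json` for the actual values and function signatures.

```python
import numpy as np

# Assumed convention for this sketch: the equirectangular image's rows
# span 180 degrees of elevation. Check calibration.json for actual values.
EQUIRECT_HEIGHT = 512  # hypothetical image height in pixels
DEG_PER_PIXEL = 180.0 / EQUIRECT_HEIGHT


def disparity_px_to_deg(disp_px):
    """Convert disparity from pixels to degrees along the vertical
    (elevation) axis of a top-bottom equirectangular stereo pair."""
    return np.asarray(disp_px, dtype=np.float64) * DEG_PER_PIXEL


def disparity_deg_to_px(disp_deg):
    """Inverse conversion: disparity in degrees back to pixels."""
    return np.asarray(disp_deg, dtype=np.float64) / DEG_PER_PIXEL
```

Converting from pixels to degrees makes disparities comparable across resolutions, which is why the benchmark below reports Disp-MAE and Disp-RMSE in degrees.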

Benchmark

We evaluate the performance of multiple state-of-the-art and popular stereo matching methods, for both standard and 360° images. All models are trained on a single NVIDIA A100 GPU with the largest possible batch size to ensure comparable use of computational resources.

| Method          | Stereo Setting  | Disp-MAE (°) | Disp-RMSE (°) | Disp-MARE | Depth-MAE (m) | Depth-RMSE (m) | Depth-MARE | Depth-LRCE (m) |
|-----------------|-----------------|--------------|---------------|-----------|---------------|----------------|------------|----------------|
| PSMNet          | conventional    | 0.286        | 0.496         | 0.248     | 2.509         | 5.673          | 0.176      | 1.809          |
| 360SD-Net       | omnidirectional | 0.224        | 0.419         | 0.191     | 2.122         | 5.077          | 0.152      | 0.904          |
| IGEV-Stereo     | conventional    | 0.225        | 0.423         | 0.172     | 1.860         | 4.447          | 0.146      | 1.203          |
| 360-IGEV-Stereo | omnidirectional | 0.188        | 0.404         | 0.146     | 1.720         | 4.297          | 0.130      | 0.388          |
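For reference, the standard error metrics in the table (MAE, RMSE, and mean absolute relative error) could be computed as sketched below. This is a generic sketch, not the authors' evaluation code: the exact masking protocol and the LRCE metric are defined by the paper and its released code, and the assumption that zero-valued ground truth marks pixels without LiDAR coverage is hypothetical.

```python
import numpy as np


def depth_metrics(pred, gt, valid=None):
    """Compute MAE, RMSE, and mean absolute relative error (MARE)
    over valid pixels. Sketch only: the paper's exact protocol
    (masking, LRCE) is defined by the authors' evaluation code."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if valid is None:
        # Assumption: 0 marks pixels without LiDAR ground truth.
        valid = gt > 0
    err = pred[valid] - gt[valid]
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mare = (np.abs(err) / gt[valid]).mean()
    return mae, rmse, mare
```

The same functions apply to disparity maps after converting predictions and ground truth to a common unit (pixels or degrees).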

Project Page

For more information, visualizations, and updates, visit the project page.

License

This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.

Acknowledgments

This work was supported by the EPFL Center for Imaging through a Collaborative Imaging Grant. We thank the VITA lab members for their valuable feedback, which helped to enhance the quality of this manuscript. We also express our gratitude to Dr. Simone Schaub-Meyer and Oliver Hahn for their insightful advice during the project's final stages.

Citation

If you use the Helvipad dataset in your research, please cite our paper:

@inproceedings{zayene2025helvipad,
  author        = {Zayene, Mehdi and Endres, Jannik and Havolli, Albias and Corbière, Charles and Cherkaoui, Salim and Ben Ahmed Kontouli, Alexandre and Alahi, Alexandre},
  title         = {Helvipad: A Real-World Dataset for Omnidirectional Stereo Depth Estimation},
  booktitle     = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year          = {2025}
}