|
--- |
|
task_categories: |
|
- robotics |
|
language: |
|
- en |
|
pretty_name: SLABIM |
|
size_categories: |
|
- 100B<n<1T |
|
--- |
|
<div align = "center"><h4><img src="assets/logo.png" width="5%" height="5%" /> SLABIM: </h4></div> |
|
<div align = "center"><h4>A SLAM-BIM Coupled Dataset in HKUST Main Building</h4></div> |
|
|
|
> Haoming Huang, [Zhijian Qiao](https://qiaozhijian.github.io/), Zehuan Yu, Chuhao Liu, [Shaojie Shen](https://uav.hkust.edu.hk/group/), Fumin Zhang and [Huan Yin](https://huanyin94.github.io/) |
|
> |
|
> Accepted by the 2025 IEEE International Conference on Robotics & Automation (ICRA)
|
|
|
### News |
|
* **`17 Feb 2025`:** Download Links Updated. |
|
* **`28 Jan 2025`:** Accepted by [ICRA 2025](https://arxiv.org/abs/2502.16856). |
|
* **`15 Sep 2024`:** We submitted our paper to [IEEE ICRA](https://2025.ieee-icra.org/).
|
|
|
## Abstract |
|
<div align="center"><h4>SLABIM is an open-source SLAM dataset coupled with BIM (Building Information Modeling).</h4></div>
|
|
|
<div align = "center"><img src="assets/overview.png" width="95%" /> </div> |
|
|
|
**Features**: |
|
+ **Large-scale Building Information Modeling**: The BIM model in this dataset is part of a digital twin project at HKUST, featuring various types of offices, classrooms, lounges, and corridors.
|
+ **Multi-session & Multi-sensor Data**: We collected 12 sessions across different floors and regions, covering a variety of indoor scenarios.
|
+ **Dataset Validation**: To demonstrate the practicality of SLABIM, we validate three tasks: (1) LiDAR-to-BIM registration, (2) robot pose tracking on BIM, and (3) semantic mapping.
|
|
|
## Dataset Structure |
|
1. ```BIM/``` contains CAD files (.dxf) and mesh files (.ply) exported from the original BIM models, organized by storey and semantic tag. Users can sample the meshes at specific densities to obtain point clouds, offering flexibility for various robotic tasks (see the sampling sketch after the directory tree below).
|
|
|
2. ```calibration_files/``` provides the camera intrinsics and the camera-to-LiDAR extrinsic parameters.
|
|
|
3. In the ```sensor_data/``` directory, each session is named ```<X>F_Region<Y>```, with X = 1, 3, 4, 5 and Y = 1, 2, 3 indicating the storey and region of collection, e.g., ```3F_Region1```. Each session directory contains the **images** and **points** produced by the **camera** and **LiDAR**.
|
|
|
4. ```data_<x>.bag```, x = 0, 1, 2, ..., is the **rosbag** encoding the raw sensor streams; it can be parsed with standard ROS tools (see the rosbag sketch after the directory tree below).
|
|
|
5. ```sensor_data/``` also contains the maps generated by SLAM, including the **submaps** used for LiDAR-to-BIM registration and the **optimized map** produced by the offline mapping system.
|
|
|
6. ```pose_frame_to_bim.txt```, ```pose_map_to_bim.txt``` and ```pose_submap_to_bim.txt``` contain the **ground truth poses** that map LiDAR scans, maps and submaps into the BIM coordinate frame. These poses are finely tuned from a manually provided initial guess using local point cloud alignment (a pose-loading sketch follows the directory tree below).
|
|
|
```
SLABIM
├── BIM
│   └── <X>F
│       ├── CAD
│       │   └── <X>F.dxf
│       └── mesh
│           ├── columns.ply
│           ├── doors.ply
│           ├── floors.ply
│           └── walls.ply
├── calibration_files
│   ├── cam_intrinsics.txt
│   └── cam_to_lidar.txt
└── sensor_data
    └── <X>F_Region<Y>
        ├── images
        │   ├── data
        │   │   └── <frame_id>.png
        │   └── timestamps.txt
        ├── map
        │   ├── data
        │   │   ├── colorized.las
        │   │   └── uncolorized.ply
        │   └── pose_map_to_bim.txt
        ├── points
        │   ├── data
        │   │   └── <frame_id>.pcd
        │   ├── pose_frame_to_bim.txt
        │   └── timestamps.txt
        ├── rosbag
        │   └── data_<x>.bag
        └── submap
            ├── data
            │   └── <submap_id>.pcd
            └── pose_submap_to_bim.txt
```
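
For item 1, a minimal sketch of sampling a BIM mesh into a point cloud with [Open3D](http://www.open3d.org/); the file path and point count below are placeholders:

```python
import open3d as o3d

# Sample a storey-level wall mesh into a point cloud at a chosen density.
# Path and number_of_points are illustrative; pick them to suit your task.
mesh = o3d.io.read_triangle_mesh("SLABIM/BIM/3F/mesh/walls.ply")
pcd = mesh.sample_points_uniformly(number_of_points=200_000)
o3d.io.write_point_cloud("walls_3F.pcd", pcd)
```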
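
For item 4, the rosbags can be read with standard ROS 1 tooling; topic names are not listed here, so run `rosbag info` on a bag first:

```python
import rosbag  # ROS 1 Python API

# Iterate over all messages in one session bag; check topic names and
# message types with `rosbag info` beforehand.
with rosbag.Bag("SLABIM/sensor_data/3F_Region1/rosbag/data_0.bag") as bag:
    for topic, msg, t in bag.read_messages():
        print(topic, t.to_sec())
```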
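
For item 6, the pose files can be used to express the SLAM products in BIM coordinates. The sketch below assumes the map pose file stores one flattened row-major 3x4 or 4x4 transform; inspect the actual files before relying on this layout:

```python
import numpy as np
import open3d as o3d

# Assumed layout: a single flattened row-major 3x4 or 4x4 matrix.
# Verify against the real txt files; this parser is only illustrative.
vals = np.loadtxt("SLABIM/sensor_data/3F_Region1/map/pose_map_to_bim.txt")
T = np.eye(4)
T[:3, :] = vals.reshape(-1, 4)[:3]

cloud = o3d.io.read_point_cloud(
    "SLABIM/sensor_data/3F_Region1/map/data/uncolorized.ply")
cloud.transform(T)  # the SLAM map, now in BIM coordinates
```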
|
|
|
|
|
<!-- ## Multi-session SLAM Dataset |
|
<div align="left"> |
|
<img src="assets/1F.png" width=28.6% /> |
|
<img src="assets/3Fto5F.png" width=30.6% /> |
|
<img src="assets/colormap.gif" width = 39.3% > |
|
</div> --> |
|
|
|
## Data Acquisition Platform |
|
The handheld sensor suite is illustrated in Figure 1. A more detailed summary of its characteristics can be found in Table 1.
|
<div align="center"> |
|
<img src="assets/sensor_suite.png" width="38.3%" />

<img src="assets/collection.gif" width="60.6%" />
|
</div> |
|
|
|
## Qualitative Results on SLABIM |
|
### Global LiDAR-to-BIM Registration |
|
Global LiDAR-to-BIM registration aims to estimate, from scratch, the transformation between a LiDAR submap and the BIM coordinate system. A robot can localize itself globally by aligning its online-built submap to the BIM.
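
As a rough baseline sketch (not the method evaluated in the paper), one can refine a coarse initial guess with point-to-plane ICP in Open3D against a point cloud sampled from the BIM mesh; the paths, densities and thresholds below are placeholders:

```python
import numpy as np
import open3d as o3d

# Align one LiDAR submap to a point cloud sampled from the BIM mesh.
target = o3d.io.read_triangle_mesh(
    "SLABIM/BIM/3F/mesh/walls.ply").sample_points_uniformly(500_000)
target.estimate_normals()

submap = o3d.io.read_point_cloud(
    "SLABIM/sensor_data/3F_Region1/submap/data/0.pcd")  # placeholder id

T_init = np.eye(4)  # in practice: the output of a global registration method
result = o3d.pipelines.registration.registration_icp(
    submap, target, max_correspondence_distance=0.5, init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(result.transformation, result.fitness)
```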
|
|
|
<div align = "center"><img src="assets/registration.gif" width="35%" /> </div> |
|
|
|
### Robot Pose Tracking on BIM |
|
Different from global LiDAR-to-BIM registration, pose tracking estimates poses given an initial state and sequential measurements.
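
A minimal sequential tracker in the same spirit (a sketch, not the system behind the reported results) seeds frame-to-BIM ICP with the previous estimate:

```python
import glob
import numpy as np
import open3d as o3d

# Track a LiDAR sequence against a point cloud sampled from the BIM mesh.
target = o3d.io.read_triangle_mesh(
    "SLABIM/BIM/3F/mesh/walls.ply").sample_points_uniformly(500_000)
target.estimate_normals()

T = np.eye(4)  # the known initial state, e.g. from pose_frame_to_bim.txt
for path in sorted(glob.glob("SLABIM/sensor_data/3F_Region1/points/data/*.pcd")):
    scan = o3d.io.read_point_cloud(path).voxel_down_sample(voxel_size=0.1)
    result = o3d.pipelines.registration.registration_icp(
        scan, target, max_correspondence_distance=1.0, init=T,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    T = result.transformation  # becomes the seed for the next frame
```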
|
|
|
<div align = "center"><img src="assets/pose_tracking.gif" width="35%" /> </div> |
|
|
|
### Semantic Mapping |
|
We deploy [FM-Fusion](https://arxiv.org/abs/2402.04555) [1] on SLABIM. For the ground truth, we convert the HKUST BIM into semantic point cloud maps using the semantic tags in BIM. Both maps contain four semantic categories: floor, wall, door, and column, the common elements in indoor environments.
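
A sketch of how such a semantic ground-truth map can be assembled from the per-category meshes (label IDs and sampling density are arbitrary choices here):

```python
import numpy as np
import open3d as o3d

# Sample each per-category BIM mesh and attach an integer label per point.
CATEGORIES = {"floors": 0, "walls": 1, "doors": 2, "columns": 3}
points, labels = [], []
for name, label_id in CATEGORIES.items():
    mesh = o3d.io.read_triangle_mesh(f"SLABIM/BIM/3F/mesh/{name}.ply")
    pcd = mesh.sample_points_uniformly(number_of_points=100_000)
    points.append(np.asarray(pcd.points))
    labels.append(np.full(len(pcd.points), label_id))

xyz = np.vstack(points)       # N x 3 ground-truth points
sem = np.concatenate(labels)  # per-point semantic category IDs
```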
|
<div align = "center"><img src="assets/semantic_mapping.gif" width="35%" /> </div> |
|
|
|
[1] C. Liu, K. Wang, J. Shi, Z. Qiao, and S. Shen, “FM-Fusion: Instance-aware semantic mapping boosted by vision-language foundation models,” IEEE Robotics and Automation Letters, 2024.
|
## Acknowledgements |
|
We sincerely thank Prof. Jack C. P. Cheng for generously providing the original HKUST BIM files.
|
|
|
<!-- ## Citation |
|
If you find SLABIM is useful in your research or applications, please consider giving us a star π and citing it by the following BibTeX entry. --> |
|