---
license: bsd-3-clause
language:
- en
tags:
- scene-flow
- point-cloud
- codebase
- 3d-vision
---
If you find [*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) useful for your research, please cite [**our works**](#cite-us) and give [a star ⭐](https://github.com/KTH-RPL/OpenSceneFlow) as encouragement.
[*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) is a codebase for point cloud scene flow estimation.
Please check the usage on [KTH-RPL/OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow).
Here we upload our demo data and checkpoints for the community.
## One repository, All methods!
You can try the following methods in [our OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow) without any effort to build your own benchmark.
Officially supported:
- [x] [HiMo (SeFlow++)](https://arxiv.org/abs/2503.00803): T-RO 2025
- [x] [VoteFlow](https://arxiv.org/abs/2503.22328): CVPR 2025
- [x] [SSF](https://arxiv.org/abs/2501.17821) (Ours): ICRA 2025
- [x] [Flow4D](https://ieeexplore.ieee.org/document/10887254): RA-L 2025
- [x] [SeFlow](https://arxiv.org/abs/2407.01702) (Ours): ECCV 2024
- [x] [DeFlow](https://arxiv.org/abs/2401.16122) (Ours): ICRA 2024
Reimplemented in our codebase:
- [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021
- [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weights can easily be converted into our format through [the script](https://github.com/KTH-RPL/OpenSceneFlow/tools/zerof2ours.py) (a hypothetical conversion sketch follows this list).
- [x] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version thanks to [our CUDA speed-up](https://github.com/KTH-RPL/OpenSceneFlow/assets/cuda/README.md), with the same (slightly better) performance. Coding is done; it will be made public after review.
- [x] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Coding is done; it will be made public after review.
- [ ] ... more on the way
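The ZeroFlow weight conversion mentioned above is handled by `tools/zerof2ours.py` in the repository. As a rough illustration of the idea only (not the actual script), a minimal sketch could look like the following; the nesting under `state_dict` and the `model.` key prefix are assumptions for illustration:

```python
# Hypothetical sketch of converting a ZeroFlow checkpoint into an
# OpenSceneFlow-style checkpoint. The real logic lives in
# tools/zerof2ours.py; key names here are illustrative assumptions.
import torch

def convert_checkpoint(src_path: str, dst_path: str, prefix: str = "model.") -> None:
    ckpt = torch.load(src_path, map_location="cpu")
    # Some training frameworks nest the weights under "state_dict".
    state_dict = ckpt.get("state_dict", ckpt)
    # Re-prefix keys so the target codebase's module names match (assumed).
    remapped = {k if k.startswith(prefix) else prefix + k: v
                for k, v in state_dict.items()}
    torch.save({"state_dict": remapped}, dst_path)

if __name__ == "__main__":
    convert_checkpoint("zeroflow_weights.ckpt", "zeroflow_ours.ckpt")
```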
## Notes
The tree of uploaded files:
* [ModelName]_best.ckpt: the checkpoint evaluated on the public leaderboard page, either provided by the original authors or retrained by us with the best parameters.
* [demo-data-v2.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): 1.2 GB, a mini-dataset for users to quickly run the train/val code. Check usage in [this section](https://github.com/KTH-RPL/SeFlow?tab=readme-ov-file#1-run--train), or see the download sketch after this list.
* [waymo_map.tar.gz](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/waymo_map.tar.gz): needed to process Waymo data, with ground segmentation included, into the unified h5 file. Check usage in [this README](https://github.com/KTH-RPL/SeFlow/blob/main/dataprocess/README.md#waymo-dataset).
* [demo_data.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): first version (will be deprecated later), 613 MB, a mini-dataset for users to quickly run the train/val code. Check usage in [this section](https://github.com/KTH-RPL/OpenSceneFlow?tab=readme-ov-file#1-run--train).
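As a quick way to fetch these files, a minimal download sketch using `huggingface_hub` is shown below. The checkpoint filename `seflow_best.ckpt` is an assumed example; replace it with the actual `[ModelName]_best.ckpt` you need from this repository.

```python
# Minimal sketch: download the demo data and a checkpoint from this
# Hugging Face repository. The checkpoint filename is an assumed example.
import zipfile
from huggingface_hub import hf_hub_download

demo_zip = hf_hub_download(repo_id="kin-zhang/OpenSceneFlow", filename="demo_data.zip")
ckpt_path = hf_hub_download(repo_id="kin-zhang/OpenSceneFlow", filename="seflow_best.ckpt")

# Unpack the mini-dataset into a local folder.
with zipfile.ZipFile(demo_zip) as zf:
    zf.extractall("demo_data")

print("demo data extracted to ./demo_data, checkpoint at", ckpt_path)
```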
All test result reports can be found in the [v2 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/6)
and the [v1 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/2).
## Cite Us
*OpenSceneFlow* is designed by [Qingwen Zhang](https://kin-zhang.github.io/), building on the DeFlow and SeFlow projects. If you find it useful, please cite our works:
```bibtex
@inproceedings{zhang2024seflow,
author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric},
title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving},
booktitle={European Conference on Computer Vision (ECCV)},
year={2024},
pages={353-369},
organization={Springer},
doi={10.1007/978-3-031-73232-4_20},
}
@inproceedings{zhang2024deflow,
author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
year={2024},
pages={2105-2111},
doi={10.1109/ICRA57147.2024.10610278}
}
@article{zhang2025himo,
title={HiMo: High-Speed Objects Motion Compensation in Point Clouds},
author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Mansouri, Sina Sharif and Andersson, Olov and Jensfelt, Patric},
year={2025},
journal={arXiv preprint arXiv:2503.00803},
}
```
And the excellent works of our collaborators:
```bibtex
@inproceedings{lin2025voteflow,
title={VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow},
author={Lin, Yancong and Wang, Shiming and Nan, Liangliang and Kooij, Julian and Caesar, Holger},
booktitle={CVPR},
year={2025},
}
@article{kim2025flow4d,
author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
journal={IEEE Robotics and Automation Letters},
title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
year={2025},
volume={10},
number={4},
pages={3462-3469},
doi={10.1109/LRA.2025.3542327}
}
@article{khoche2025ssf,
title={SSF: Sparse Long-Range Scene Flow for Autonomous Driving},
author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric},
journal={arXiv preprint arXiv:2501.17821},
year={2025}
}
```
Feel free to contribute your method and add your BibTeX entry here via a pull request!