If you find OpenSceneFlow useful to your research, please cite our works and give a star as encouragement.
OpenSceneFlow is a codebase for point cloud scene flow estimation. Please check the usage at KTH-RPL/OpenSceneFlow. Here we upload our demo data and checkpoints for the community.
One repository, all methods!
You can try the following methods in OpenSceneFlow without any extra effort to build your own benchmark.
Officially supported:
- HiMo (SeFlow++): T-RO 2025
- VoteFlow: CVPR 2025
- SSF (Ours): ICRA 2025
- Flow4D: RA-L 2025
- SeFlow (Ours): ECCV 2024
- DeFlow (Ours): ICRA 2024
Reimplemented in our codebase:
- FastFlow3d: RA-L 2021
- ZeroFlow: ICLR 2024; their pre-trained weights can easily be converted into our format through the script (see the conversion sketch after this list).
- NSFP: NeurIPS 2021, 3x faster than the original version thanks to our CUDA speed-up, with the same (slightly better) performance. Coding is done; it will be made public after review.
- FastNSF: ICCV 2023. Coding is done; it will be made public after review.
- ... more on the way
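For the ZeroFlow weights mentioned above, here is a minimal conversion sketch, assuming a plain PyTorch checkpoint and a Lightning-style `model.` key prefix. The file names, the prefix, and the helper function are illustrative placeholders, not the official conversion script.

```python
# Hypothetical sketch: wrap a raw PyTorch state_dict so its keys follow a
# Lightning-style "model." prefix, as .ckpt files produced by training
# frameworks often do. File names and the prefix are assumptions.
import torch


def convert_checkpoint(src_path: str, dst_path: str, prefix: str = "model.") -> None:
    ckpt = torch.load(src_path, map_location="cpu")
    # Some checkpoints nest the weights under "state_dict"; others are a bare dict.
    state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    remapped = {f"{prefix}{k}": v for k, v in state_dict.items()}
    torch.save({"state_dict": remapped}, dst_path)


if __name__ == "__main__":
    convert_checkpoint("zeroflow_original.pth", "zeroflow_converted.ckpt")
```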
Notes
The tree of uploaded files:
- [ModelName_best].ckpt: the model evaluated on the public leaderboard page, either provided by the authors or retrained by us with the best parameters.
- demo-data-v2.zip: 1.2 GB, a mini-dataset for users to quickly run the train/val code. Check its usage in this section, and see the inspection sketch after this list.
- waymo_map.tar.gz: needed to process Waymo data, with ground segmentation included, into the unified h5 file. Check its usage in this README.
- demo_data.zip: first version (will be deprecated later), 613 MB, a mini-dataset for users to quickly run the train/val code. Check its usage in this section.
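To sanity-check the demo data after unzipping, the short sketch below walks one of the unified .h5 scene files and prints its groups and dataset shapes. The file path is a placeholder and no particular key layout is assumed.

```python
# Minimal sketch (not part of the official toolkit): list the groups and
# dataset shapes inside one unified .h5 scene file from demo-data-v2.zip.
# The path below is a placeholder; point it at a real file after unzipping.
import h5py


def print_h5_tree(path: str) -> None:
    with h5py.File(path, "r") as f:
        # visititems walks every group/dataset; datasets expose a shape attribute.
        f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))


if __name__ == "__main__":
    print_h5_tree("demo-data-v2/train/some_scene.h5")  # placeholder path
```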
All test result reports can be found on the v2 leaderboard and the v1 leaderboard.
Cite Us
OpenSceneFlow is designed by Qingwen Zhang from the DeFlow and SeFlow projects. If you find it useful, please cite our works:
@inproceedings{zhang2024seflow,
author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric},
title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving},
booktitle={European Conference on Computer Vision (ECCV)},
year={2024},
pages={353--369},
organization={Springer},
doi={10.1007/978-3-031-73232-4_20},
}
@inproceedings{zhang2024deflow,
author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
year={2024},
pages={2105-2111},
doi={10.1109/ICRA57147.2024.10610278}
}
@article{zhang2025himo,
title={HiMo: High-Speed Objects Motion Compensation in Point Clouds},
author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Mansouri, Sina Sharif and Andersson, Olov and Jensfelt, Patric},
year={2025},
journal={arXiv preprint arXiv:2503.00803},
}
And our excellent collaborators' works as follows:
@inproceedings{lin2025voteflow,
title={VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow},
author={Lin, Yancong and Wang, Shiming and Nan, Liangliang and Kooij, Julian and Caesar, Holger},
booktitle={CVPR},
year={2025},
}
@article{kim2025flow4d,
author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
journal={IEEE Robotics and Automation Letters},
title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
year={2025},
volume={10},
number={4},
pages={3462-3469},
doi={10.1109/LRA.2025.3542327}
}
@article{khoche2025ssf,
title={SSF: Sparse Long-Range Scene Flow for Autonomous Driving},
author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric},
journal={arXiv preprint arXiv:2501.17821},
year={2025}
}
Feel free to contribute your method and add your BibTeX here via a pull request!