kin-zhang committed (verified)
Commit 1712e47 · 1 parent: f8843d8

Update README.md

Files changed (1):
  1. README.md +38 -21

README.md CHANGED
@@ -2,8 +2,12 @@
  license: bsd-3-clause
  language:
  - en
+ tags:
+ - scene-flow
+ - point-cloud
+ - codebase
+ - 3d-vision
  ---
- # Scene Flow Models for Autonomous Driving Dataset

  <p align="center">
  <a href="https://github.com/KTH-RPL/OpenSceneFlow">
@@ -15,36 +19,43 @@ language:

  💞 If you find [*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) useful to your research, please cite [**our works** 📖](#cite-us) and give [a star 🌟](https://github.com/KTH-RPL/OpenSceneFlow) as encouragement. (੭ˊ꒳​ˋ)੭✧

- OpenSceneFlow is a codebase for point cloud scene flow estimation.
- Please check the usage on [KTH-RPL/OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow).
+ [*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) is a codebase for point cloud scene flow estimation.
+ Please check the usage on [KTH-RPL/OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow).
+ Here we upload our demo data and checkpoints for the community.

- <!-- - [DeFlow](https://arxiv.org/abs/2401.16122): Supervised learning scene flow, model included is trained on Argoverse 2.
- - [SeFlow](https://arxiv.org/abs/2407.01702): **Self-Supervised** learning scene flow, model included is trained on Argoverse 2. Paper also reported a Waymo result; that weight cannot be shared according to the [Waymo Terms](https://waymo.com/open/terms/). More detailed discussion in [issue 8](https://github.com/KTH-RPL/SeFlow/issues/8#issuecomment-2464224813).
- - [SSF](https://arxiv.org/abs/2501.17821): Supervised learning long-range scene flow, model included is trained on Argoverse 2.
- - [Flow4D](https://ieeexplore.ieee.org/document/10887254): Supervised learning 4D network scene flow, model included is trained on Argoverse 2. -->
-
- The files we included and all test result reports can be found in the [v2 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/6) and the [v1 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/2).
- * [ModelName_best].ckpt: the model evaluated on the public leaderboard page, provided by the authors or retrained by us with the best parameters.
- * [demo_data.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): 613Mb, a mini-dataset for users to quickly run the train/val code. Check usage in [this section](https://github.com/KTH-RPL/SeFlow?tab=readme-ov-file#1-run--train).
- * [waymo_map.tar.gz](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/waymo_map.tar.gz): needed to process Waymo data, with ground segmentation included, into the unified h5 file. Check usage in [this README](https://github.com/KTH-RPL/SeFlow/blob/main/dataprocess/README.md#waymo-dataset).
+ ## 🎁 One repository, All methods!

+ You can try the following methods in [our OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow) without any effort to make your own benchmark.

- <details> <summary>🎁 <b>One repository, All methods!</b> </summary>
- <!-- <br> -->
- You can try the following methods in our code without any effort to make your own benchmark.
-
+ Officially:
+ - [x] [HiMo (SeFlow++)](https://arxiv.org/abs/2503.00803): T-RO 2025
+ - [x] [VoteFlow](https://arxiv.org/abs/2503.22328): CVPR 2025
  - [x] [SSF](https://arxiv.org/abs/2501.17821) (Ours 🚀): ICRA 2025
  - [x] [Flow4D](https://ieeexplore.ieee.org/document/10887254): RA-L 2025
  - [x] [SeFlow](https://arxiv.org/abs/2407.01702) (Ours 🚀): ECCV 2024
  - [x] [DeFlow](https://arxiv.org/abs/2401.16122) (Ours 🚀): ICRA 2024
+
+ <details> <summary>Reorganized into our codebase:</summary>
+
  - [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021
- - [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weights can be converted into our format easily through [the script](https://github.com/KTH-RPL/SeFlow/tools/zerof2ours.py).
- - [ ] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version because of [our CUDA speed up](https://github.com/KTH-RPL/SeFlow/assets/cuda/README.md), with the same (slightly better) performance. Done coding, public after review.
- - [ ] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Done coding, public after review.
+ - [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weights can be converted into our format easily through [the script](https://github.com/KTH-RPL/OpenSceneFlow/tools/zerof2ours.py).
+ - [x] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version because of [our CUDA speed up](https://github.com/KTH-RPL/OpenSceneFlow/assets/cuda/README.md), with the same (slightly better) performance.
+ - [x] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023.
  - [ ] ... more on the way

  </details>

+ ## Notes
+
+ The tree of uploaded files:
+ * [ModelName_best].ckpt: the model evaluated on the public leaderboard page, provided by the authors or retrained by us with the best parameters.
+ * [demo-data-v2.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): 1.2GB, a mini-dataset for users to quickly run the train/val code. Check usage in [this section](https://github.com/KTH-RPL/SeFlow?tab=readme-ov-file#1-run--train).
+ * [waymo_map.tar.gz](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/waymo_map.tar.gz): needed to process Waymo data, with ground segmentation included, into the unified h5 file. Check usage in [this README](https://github.com/KTH-RPL/SeFlow/blob/main/dataprocess/README.md#waymo-dataset).
+ * [demo_data.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): 1st version (will be deprecated later), 613Mb, a mini-dataset for users to quickly run the train/val code. Check usage in [this section](https://github.com/KTH-RPL/OpenSceneFlow?tab=readme-ov-file#1-run--train).
+
+ All test result reports can be found in the [v2 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/6)
+ and the [v1 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/2).
+
  ## Cite Us

  *OpenSceneFlow* is designed by [Qingwen Zhang](https://kin-zhang.github.io/) from the DeFlow and SeFlow projects. If you find it useful, please cite our works:
@@ -67,8 +78,8 @@ You can try following methods in our code without any effort to make your own be
  pages={2105-2111},
  doi={10.1109/ICRA57147.2024.10610278}
  }
- @article{zhang2025himu,
- title={HiMo: High-Speed Objects Motion Compensation in Point Cloud},
+ @article{zhang2025himo,
+ title={HiMo: High-Speed Objects Motion Compensation in Point Clouds},
  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Sina, Sharif Mansouri and Andersson, Olov and Jensfelt, Patric},
  year={2025},
  journal={arXiv preprint arXiv:2503.00803},
@@ -78,6 +89,12 @@ You can try following methods in our code without any effort to make your own be
  And our excellent collaborators' works as follows:

  ```bibtex
+ @inproceedings{lin2025voteflow,
+ title={VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow},
+ author={Lin, Yancong and Wang, Shiming and Nan, Liangliang and Kooij, Julian and Caesar, Holger},
+ booktitle={CVPR},
+ year={2025},
+ }
  @article{kim2025flow4d,
  author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
  journal={IEEE Robotics and Automation Letters},
 
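
Below is a minimal sketch of how the files listed in the Notes section above can be fetched programmatically. It assumes the `huggingface_hub` Python package is installed; the checkpoint filename is a placeholder, since the concrete [ModelName_best].ckpt names appear only in the repository's file listing, not in this diff.

```python
# Minimal sketch: fetch the demo data and a checkpoint from this model repo.
# Assumptions: `pip install huggingface_hub`; "seflow_best.ckpt" is a placeholder
# name -- substitute one of the actual [ModelName_best].ckpt files from the repo.
import zipfile
from huggingface_hub import hf_hub_download

REPO_ID = "kin-zhang/OpenSceneFlow"

# Mini-dataset for a quick train/val run (demo_data.zip in the file list).
demo_zip = hf_hub_download(repo_id=REPO_ID, filename="demo_data.zip")
with zipfile.ZipFile(demo_zip) as zf:
    zf.extractall("demo_data")

# A leaderboard checkpoint (placeholder filename, see note above).
ckpt_path = hf_hub_download(repo_id=REPO_ID, filename="seflow_best.ckpt")
print("checkpoint downloaded to:", ckpt_path)
```

The same files can also be downloaded directly with a plain HTTP client by replacing `blob/main` with `resolve/main` in the URLs above.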