---
pipeline_tag: robotics
library_name: transformers
license: cc-by-nc-sa-4.0
tags:
- vision-language-model
- video-language-model
- navigation
---

<div id="top" align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64e6d9d229a548f66aff6e5b/4ZRvK6ySWCFj9mlpND791.gif" width=60% >

</div>

# InternVLA-N1: An Open Dual-System Navigation Foundation Model with Learned Latent Plans

[InternNav](https://github.com/InternRobotics/InternNav)

The technical report will be made public during the upcoming open-source week. Please stay tuned!

## 🔔 Important Notice

* This repository hosts the **official release** of **InternVLA-N1**.
* The previously released **InternVLA-N1** model has been renamed to **InternVLA-N1-Preview**. If you are looking for the **earlier preview version**, please check [InternVLA-N1-Preview](https://huggingface.co/InternRobotics/InternVLA-N1-Preview).
* We recommend using this official release for research and deployment, as it contains the most stable and up-to-date improvements.

### Key Differences: Preview vs. Official

| Feature | InternVLA-N1-Preview | InternVLA-N1 (official) |
| ------------- | ----------------------------------------- | ------------------------------------------------------------------------ |
| System Design | Dual-system (synchronous) | Dual-system (asynchronous) |
| Training | System 1 trained only at System 2 inference steps | System 1 trained at denser steps (~25 cm), using the latest System 2 hidden state |
| Inference | Systems 1 and 2 run at the same frequency (~2 Hz) | Systems 1 and 2 run asynchronously, enabling dynamic obstacle avoidance |
| Performance | Solid baseline in simulation and benchmarks | Improved smoothness, efficiency, and real-world zero-shot generalization |
| Status | Historical preview | Stable official release (recommended) |
## Highlights

- Dual-System Framework

  The first navigation foundation model to achieve joint tuning and asynchronous inference of System-2 reasoning and System-1 action, resulting in smooth and efficient execution during instruction-following navigation.

- State-of-the-Art Performance

  Both the full model and each individual system achieve state-of-the-art performance on mainstream benchmarks as well as our newly established challenging ones, including VLN-CE R2R & RxR, GRScenes-100, and VLN-PE.

- Sim2Real Zero-Shot Generalization

  Training uses only the simulated InternData-N1 data, with diverse scenes, embodiments, and other randomization, yet the model achieves strong zero-shot generalization in the real world.
## Usage

Please refer to [InternNav](https://github.com/InternRobotics/InternNav) for inference, evaluation, and the Gradio demo.
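
For orientation, here is a minimal, hypothetical sketch of how the System-2 checkpoint might be queried through `transformers` (the card's declared library). The repo id, the `trust_remote_code` loading path, and the chat format are assumptions inferred from the Qwen2.5-VL base, not confirmed by this card; use the InternNav pipeline for actual inference and evaluation.

```python
# Hypothetical sketch only: the supported inference pipeline lives in InternNav.
# Assumptions (not confirmed by this card): the checkpoint loads via the standard
# transformers auto classes with trust_remote_code=True, and prompts follow the
# Qwen2.5-VL chat format of the base model.

def build_navigation_prompt(instruction, frame_paths):
    """Assemble a Qwen2.5-VL-style chat message from egocentric frames and an instruction."""
    content = [{"type": "image", "image": path} for path in frame_paths]
    content.append({"type": "text", "text": instruction})
    return [{"role": "user", "content": content}]

def load_internvla_n1(repo_id="InternRobotics/InternVLA-N1"):
    """Load the checkpoint (requires network access and the model's custom code)."""
    from transformers import AutoModel, AutoProcessor
    processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
    return model, processor

messages = build_navigation_prompt(
    "Walk past the sofa and stop at the kitchen door.",
    ["frame_0.jpg", "frame_1.jpg"],
)
print(len(messages[0]["content"]))  # → 3 (two image entries plus the text instruction)
```

The repo id above is the presumed Hugging Face path of this checkpoint; the InternNav repository remains the authoritative reference for how System 1 and System 2 are actually composed at inference time.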

## Citation

If you find our work helpful, please consider starring this repo 🌟 and citing:

```bibtex
@misc{internvla-n1,
    title = {{InternVLA-N1: An} Open Dual-System Navigation Foundation Model with Learned Latent Plans},
    author = {{InternVLA-N1 Team}},
    year = {2025},
    howpublished = {arXiv},
}
```

## License

This work is released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).

## Acknowledgements

This repository is based on [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL).