# OpenVLA v0.1 7B

*Note: OpenVLA v0.1 was an early model we trained for development purposes; for our best model, see [openvla/openvla-7b](https://huggingface.co/openvla/openvla-7b).*

OpenVLA v0.1 7B (`openvla-v01-7b`) is an open vision-language-action model trained on 800K robot manipulation episodes from the [Open X-Embodiment](https://robotics-transformer-x.github.io/) dataset (the same mixture used by [Octo](https://octo-models.github.io/)).
The model takes language instructions and camera images as input and generates robot actions. It supports controlling multiple robots out of the box, and can be quickly adapted to new robot domains via (parameter-efficient) fine-tuning.
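
As a minimal sketch of what inference looks like (assuming the checkpoint loads through `transformers` with `trust_remote_code=True` and exposes the `predict_action` helper from the OpenVLA codebase; the prompt wording, image source, and `unnorm_key` below are placeholders to adapt to your setup):

```python
# Minimal inference sketch. `predict_action` comes from the model's remote code
# (OpenVLA codebase); the prompt wording, image source, and `unnorm_key` below
# are assumptions -- adjust them to your robot setup.
from PIL import Image
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

processor = AutoProcessor.from_pretrained("openvla/openvla-v01-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-v01-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda:0")

image = Image.open("frame.png")  # hypothetical: one frame from the robot's camera
prompt = "What action should the robot take to pick up the remote?"  # assumed phrasing; check the expected prompt template

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
# `action` is a continuous end-effector command; `unnorm_key` names the dataset
# whose statistics un-normalize it, so it should match the target robot.
```

The model generates discretized action tokens that `predict_action` maps back to continuous values using per-dataset normalization statistics, which is why `unnorm_key` must name the mixture component matching the robot you are controlling.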
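
The parameter-efficient fine-tuning route could look like the following LoRA sketch using the `peft` library; this is an illustration, not an official recipe, and the rank and target-module choices are assumptions:

```python
# Hypothetical LoRA setup via `peft` -- a sketch of parameter-efficient
# fine-tuning, not the official OpenVLA recipe. Hyperparameters are assumptions.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForVision2Seq

vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-v01-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=32,                         # adapter rank (assumed)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules="all-linear",  # wrap every linear layer with an adapter
    init_lora_weights="gaussian",
)
vla = get_peft_model(vla, lora_config)
vla.print_trainable_parameters()  # only adapter weights train; the 7B base stays frozen
```

Because only the low-rank adapters receive gradients, the trainable parameter count stays at a small fraction of the full 7B model, which is what makes quick adaptation to a new robot domain practical on modest hardware.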