Image-Text-to-Text
Transformers
PyTorch
English
llava
image-to-text
1-bit
VLA
VLM
conversational

Improve model card: Add robotics pipeline tag and library name

#2 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +8 -5
README.md CHANGED
@@ -1,21 +1,24 @@
 ---
-license: mit
+base_model:
+- microsoft/bitnet-b1.58-2B-4T
 datasets:
 - MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
 - liuhaotian/LLaVA-Pretrain
 language:
 - en
+license: mit
 metrics:
 - accuracy
-base_model:
-- microsoft/bitnet-b1.58-2B-4T
-pipeline_tag: image-text-to-text
+pipeline_tag: robotics
 tags:
 - 1-bit
 - VLA
 - VLM
+library_name: transformers
 ---
+
 # BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation
+
 [[paper]](https://arxiv.org/abs/2506.07530) [[model]](https://huggingface.co/collections/hongyuw/bitvla-68468fb1e3aae15dd8a4e36e) [[code]](https://github.com/ustcwhy/BitVLA)
 
 - June 2025: [BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation](https://arxiv.org/abs/2506.07530)
@@ -82,4 +85,4 @@ If you find this repository useful, please consider citing our work:
 
 ### Contact Information
 
-For help or issues using models, please submit a GitHub issue.
+For help or issues using models, please submit a GitHub issue.
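
These front-matter fields are what the Hub indexes: `pipeline_tag: robotics` lists the model under the Robotics task filter, and `library_name: transformers` tells the Hub which library (and which code snippet) to surface for loading. As a minimal sketch, the same change could also be applied programmatically with `huggingface_hub.metadata_update` instead of editing README.md by hand; the repo id below is a placeholder, not the actual BitVLA repository:

```python
# Minimal sketch: applying this PR's metadata change via the Hub API.
# The repo id is a placeholder, not the actual BitVLA repository.
from huggingface_hub import metadata_update

metadata_update(
    "your-org/your-bitvla-model",  # placeholder repo id
    {
        "pipeline_tag": "robotics",      # list under the Robotics task filter
        "library_name": "transformers",  # which library loads the model
    },
    overwrite=True,  # the card already carries pipeline_tag: image-text-to-text
)
```

`overwrite=True` matters here: the card already has a conflicting `pipeline_tag` (`image-text-to-text`), and `metadata_update` refuses to silently replace an existing value without it.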