Add link to paper #10
opened by nielsr (HF Staff)

README.md CHANGED
````diff
@@ -1,15 +1,15 @@
 ---
-library_name: transformers
-license: apache-2.0
+base_model:
+- HuggingFaceTB/SmolLM2-360M-Instruct
+- google/siglip-base-patch16-512
 datasets:
 - HuggingFaceM4/the_cauldron
 - HuggingFaceM4/Docmatix
-pipeline_tag: image-text-to-text
 language:
 - en
-base_model:
-- HuggingFaceTB/SmolLM2-360M-Instruct
-- google/siglip-base-patch16-512
+library_name: transformers
+license: apache-2.0
+pipeline_tag: image-text-to-text
 ---
 
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM_256_banner.png" width="800" height="auto" alt="Image description">
````
````diff
@@ -30,6 +30,7 @@ SmolVLM-500M is a tiny multimodal model, member of the SmolVLM family. It accept
 
 - **Demo:** [SmolVLM-256 Demo](https://huggingface.co/spaces/HuggingFaceTB/SmolVLM-256M-Demo)
 - **Blog:** [Blog post](https://huggingface.co/blog/smolvlm)
+- **Paper:** [SmolVLM: Redefining small and efficient multimodal models](https://huggingface.co/papers/2504.05299)
 
 ## Uses
 
````
````diff
@@ -39,10 +40,8 @@ To fine-tune SmolVLM on a specific task, you can follow [the fine-tuning tutoria
 
 ## Evaluation
 
-
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smoller_vlm_benchmarks.png" alt="Benchmarks" style="width:90%;" />
 
-
 ### Technical Summary
 
 SmolVLM leverages the lightweight SmolLM2 language model to provide a compact yet powerful multimodal experience. It introduces several changes compared to the larger SmolVLM 2.2B model:
````
````diff
@@ -112,7 +111,6 @@ In summary, the image showcases the Statue of Liberty, a symbol of freedom and d
 """
 ```
 
-
 ### Model optimizations
 
 **Precision**: For better performance, load and run the model in half-precision (`torch.bfloat16`) if your hardware supports it.
````
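A minimal sketch of the half-precision loading that the **Precision** note above recommends, using the `transformers` classes the card already references; the `HuggingFaceTB/SmolVLM-500M-Instruct` checkpoint id is an assumption, since this hunk does not name it:

```python
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

# Assumed checkpoint id for the 500M model this card describes.
MODEL_ID = "HuggingFaceTB/SmolVLM-500M-Instruct"

processor = AutoProcessor.from_pretrained(MODEL_ID)

# Load the weights in bfloat16 to roughly halve memory use, as the
# "Precision" note suggests when the hardware supports it.
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
).to("cuda" if torch.cuda.is_available() else "cpu")
```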
````diff
@@ -143,7 +141,6 @@ model = AutoModelForVision2Seq.from_pretrained(
 **Vision Encoder Efficiency**: Adjust the image resolution by setting `size={"longest_edge": N*512}` when initializing the processor, where N is your desired value. The default `N=4` works well, which results in input images of
 size 2048×2048. Decreasing N can save GPU memory and is appropriate for lower-resolution images. This is also useful if you want to fine-tune on videos.
 
-
 ## Misuse and Out-of-scope Use
 
 SmolVLM is not intended for high-stakes scenarios or critical decision-making processes that affect an individual's well-being or livelihood. The model may produce content that appears factual but may not be accurate. Misuse includes, but is not limited to:
````
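Likewise, a minimal sketch of the **Vision Encoder Efficiency** adjustment described above, with `N=2` as an illustrative value and the same assumed checkpoint id:

```python
from transformers import AutoProcessor

# Assumed checkpoint id; N=2 caps the longest image edge at
# 2 * 512 = 1024 px instead of the default 4 * 512 = 2048 px,
# trading input resolution for GPU memory.
processor = AutoProcessor.from_pretrained(
    "HuggingFaceTB/SmolVLM-500M-Instruct",
    size={"longest_edge": 2 * 512},
)
```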
````diff
@@ -180,5 +177,4 @@ You can cite us in the following way:
 journal={arXiv preprint arXiv:2504.05299},
 year={2025}
 }
-```
-
+```
````
|