---
license: apache-2.0
base_model:
- TIGER-Lab/VL-Rethinker-7B
base_model_relation: quantized
pipeline_tag: visual-question-answering
tags:
- chat
- mlx
- apple
- 8bit
- multimodal
language:
- en
library_name: mlx
---

# VL-Rethinker-7B 8-bit MLX

This model was converted to MLX format from [`TIGER-Lab/VL-Rethinker-7B`](https://huggingface.co/TIGER-Lab/VL-Rethinker-7B) using mlx-vlm version **0.1.23**.
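
If you want to reproduce the conversion, mlx-vlm provides a `convert` entry point. The sketch below is illustrative rather than the exact command used for this repo: the flag names (`--hf-path`, `--mlx-path`, `-q`, `--q-bits`) are assumed from common mlx-vlm/mlx-lm conventions and may differ between releases, so check `python -m mlx_vlm.convert --help` first.

```bash
# Illustrative 8-bit conversion sketch (flag names assumed; verify against your mlx-vlm version)
pip install "mlx-vlm==0.1.23"
python -m mlx_vlm.convert \
  --hf-path TIGER-Lab/VL-Rethinker-7B \
  --mlx-path VL-Rethinker-7B-mlx-8bit \
  -q --q-bits 8
```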

Refer to the [original model card](https://huggingface.co/TIGER-Lab/VL-Rethinker-7B) for more details on the model.

### Important!

If you use LM Studio, do not update the MLX runtime to the latest version. The latest MLX runtime (0.13.1) has a bug that makes the model crash when you send it images.
Use the previous MLX runtime, **0.12.1**, instead.

## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model TheCluster/VL-Rethinker-7B-mlx-8bit --max-tokens 512 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
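
For interactive testing outside LM Studio, recent mlx-vlm releases also include a small Gradio chat UI. Whether the `mlx_vlm.chat_ui` entry point exists (and which flags it accepts) depends on your installed mlx-vlm version, so treat this as a sketch:

```bash
# Optional: local Gradio chat UI (mlx_vlm.chat_ui availability depends on the installed mlx-vlm version)
python -m mlx_vlm.chat_ui --model TheCluster/VL-Rethinker-7B-mlx-8bit
```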