Update README.md
README.md CHANGED

@@ -1,16 +1,18 @@
 ---
 license: apache-2.0
 language:
 - en
 pipeline_tag: image-text-to-text
 tags:
 - multimodal
 - gui
 - llama-cpp
 - gguf-my-repo
 library_name: transformers
 base_model: bytedance-research/UI-TARS-72B-SFT
 ---
 
+note: most qwen2 weights aren't divisible by 256, so this is really a q8/q5 quant.
+
 # main-horse/UI-TARS-72B-SFT-Q4_K_M-GGUF
 This model was converted to GGUF format from [`bytedance-research/UI-TARS-72B-SFT`](https://huggingface.co/bytedance-research/UI-TARS-72B-SFT) using llama.cpp.