simuarc committed · b768b4e · Parent: f166468

Update README.md

README.md CHANGED
@@ -1,5 +1,10 @@
 ---
 pipeline_tag: text-generation
+language:
+- en
+- zh
+tags:
+- llama
 ---
 
 **This model repository contains files in GGUF format for the Yi 34B LLaMA, compatible with LLaMA modeling, based on the work from the [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) repository.**
@@ -32,5 +37,4 @@ The following tables list the available Yi-34B-Llamafied model files with their
 | Q6 | Yi-34B-Llama_Q6_K | Very Large | Extremely Low | |
 | Q8 | Yi-34B-Llama_Q8_0 | Very Large | Extremely Low *(not recommended)* | |
 
-Please choose the model that best suits your needs based on the size and quality loss trade-offs.
-
+Please choose the model that best suits your needs based on the size and quality loss trade-offs.
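The model files in this repository are GGUF containers, which start with a fixed 4-byte magic (`GGUF`) followed by a little-endian `uint32` version field. A minimal sketch for sanity-checking a downloaded file's header before loading it; the sample bytes here are synthetic for illustration, not read from a real model file:

```python
import struct

GGUF_MAGIC = b"GGUF"  # every GGUF file begins with this 4-byte magic


def looks_like_gguf(header: bytes) -> bool:
    """Return True if the first four bytes match the GGUF magic."""
    return header[:4] == GGUF_MAGIC


def gguf_version(header: bytes) -> int:
    """Read the little-endian uint32 version that follows the magic."""
    if not looks_like_gguf(header):
        raise ValueError("not a GGUF file")
    return struct.unpack_from("<I", header, 4)[0]


# Synthetic 8-byte header: magic + version 3.
sample = GGUF_MAGIC + struct.pack("<I", 3)
print(gguf_version(sample))  # → 3
```

In practice you would read the first 8 bytes of e.g. `Yi-34B-Llama_Q6_K.gguf` (filename per the table above) and pass them to these helpers to catch truncated or mislabeled downloads early.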