Starcannon-Unleashed-12B-v1.0-GGUF
==================================

Static quantization of [**VongolaChouko/Starcannon-Unleashed-12B-v1.0**](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0).

This model was converted to GGUF format from [VongolaChouko/Starcannon-Unleashed-12B-v1.0](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0) for more details on the model.
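
Since these are standard GGUF files, any llama.cpp-based runtime can load them. Below is a minimal sketch using the `huggingface_hub` and `llama-cpp-python` packages; the prompt, context size, and token limit are illustrative placeholders, not settings from the original card.

```python
# Minimal sketch: download one quant from this repo and run it locally.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q6_K is the recommended static quant in the table below.
model_path = hf_hub_download(
    repo_id="VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF",
    filename="Starcannon-Unleashed-12B-v1.0-Q6_K.gguf",
)

llm = Llama(model_path=model_path, n_ctx=8192)  # context size chosen arbitrarily
out = llm("Write an opening scene for a space-opera story.", max_tokens=200)
print(out["choices"][0]["text"])
```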

Recommended settings are here: [**Settings**](https://huggingface.co/VongolaChou…)

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Starcannon-Unleashed-12B-v1.0-FP16.gguf](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/Starcannon-Unleashed-12B-v1.0-FP16.gguf) | f16 | 24.50GB | false | Full F16 weights. |
| [Mistral-Nemo-Instruct-2407-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q8_0.gguf) | Q8_0 | 13.02GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Starcannon-Unleashed-12B-v1.0-Q6_K.gguf](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF/blob/main/Starcannon-Unleashed-12B-v1.0-Q6_K.gguf) | Q6_K | 10.06GB | false | Very high quality, near perfect, *recommended*. |
| [Mistral-Nemo-Instruct-2407-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_M.gguf) | Q5_K_M | 8.73GB | false | High quality, *recommended*. |
| [Mistral-Nemo-Instruct-2407-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_S.gguf) | Q5_K_S | 8.52GB | false | High quality, *recommended*. |
| [Mistral-Nemo-Instruct-2407-Q4_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_L.gguf) | Q4_K_L | 7.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Mistral-Nemo-Instruct-2407-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_M.gguf) | Q4_K_M | 7.48GB | false | Good quality, default size for most use cases, *recommended*. |
| [Mistral-Nemo-Instruct-2407-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_S.gguf) | Q4_K_S | 7.12GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Mistral-Nemo-Instruct-2407-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_0.gguf) | Q4_0 | 7.09GB | false | Legacy format, generally not worth using over similarly sized formats. |
| [Mistral-Nemo-Instruct-2407-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_L.gguf) | Q3_K_L | 6.56GB | false | Lower quality but usable, good for low RAM availability. |
| [Mistral-Nemo-Instruct-2407-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_M.gguf) | Q3_K_M | 6.08GB | false | Low quality. |
| [Mistral-Nemo-Instruct-2407-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_S.gguf) | Q3_K_S | 5.53GB | false | Low quality, not recommended. |
| [Mistral-Nemo-Instruct-2407-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q2_K.gguf) | Q2_K | 4.79GB | false | Very low quality but surprisingly usable. |
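
A common way to choose from this table is to take the largest quant that fits your RAM or VRAM budget. The sketch below automates that rule of thumb with the `huggingface_hub` API; the helper name and the budget heuristic are my own, not part of this repo.

```python
# Hypothetical helper: pick the largest .gguf quant that fits a size budget.
# Assumes: pip install huggingface_hub
from huggingface_hub import HfApi

def pick_quant(repo_id: str, budget_gb: float) -> str:
    info = HfApi().model_info(repo_id, files_metadata=True)
    ggufs = [s for s in info.siblings if s.rfilename.endswith(".gguf") and s.size]
    fitting = [s for s in ggufs if s.size / 1e9 <= budget_gb]
    if not fitting:
        raise ValueError("No quant fits the given budget")
    # Bigger file -> less aggressive quantization -> generally higher quality.
    return max(fitting, key=lambda s: s.size).rfilename

# e.g. a 12 GB budget would select one of the ~10 GB quants above
print(pick_quant("VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF", 12.0))
```
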
## Instruct