Kquant03 committed on
Commit d432b2e · verified · 1 Parent(s): cad5879

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -26,8 +26,8 @@ A frankenMoE using only DPO models. To be used with Chat-instruct mode enabled.
  | [Q4_K_M](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 13.32 GB| 15.32 GB | medium, balanced quality - recommended |
  | [Q5_0](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 16.24 GB| 18.24 GB | legacy; large, balanced quality |
  | [Q5_K_M](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~16.24 GB| ~18.24 GB | large, balanced quality - recommended |
- | [Q6 XL](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 19.35 GB| 21.35 GB | very large, extremely low quality loss |
- | [Q8 XXL](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 25.1 GB| 27.1 GB | very large, extremely low quality loss - not recommended |
+ | [Q6 XL](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 19.35 GB| 21.35 GB | very large, extremely minor degradation |
+ | [Q8 XXL](https://huggingface.co/Kquant03/FrankenDPO-4x7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 25.1 GB| 27.1 GB | very large, extremely minor degradation - not recommended |

  - [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) - router
  - [udkai/Turdus](https://huggingface.co/udkai/Turdus) - expert #1
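
For reference, the GGUF files listed in the table above can be fetched and loaded with llama.cpp bindings. Below is a minimal sketch, not part of this commit, assuming the `huggingface_hub` and `llama-cpp-python` packages; it uses the Q4_K_M file because the table marks it as recommended, and the context size and GPU-offload settings are illustrative assumptions.

```python
# Sketch only: download and run the recommended Q4_K_M quant from this repo.
# Assumes `pip install huggingface_hub llama-cpp-python`; tune n_ctx and
# n_gpu_layers to your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download (or reuse the cached copy of) the quantized model file.
model_path = hf_hub_download(
    repo_id="Kquant03/FrankenDPO-4x7B-GGUF",
    filename="ggml-model-q4_k_m.gguf",
)

# Load the model; n_gpu_layers=-1 offloads all layers to GPU if available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Chat-style request, in line with the README's chat-instruct recommendation.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}]
)
print(out["choices"][0]["message"]["content"])
```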