---
duplicated_from: localmodels/LLM
---

# LLaMA 13B ggml

From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai

---

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

Quantized using an older version of llama.cpp; compatible with llama.cpp as of May 19 (commit 2d5db48).

### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

Quantized using the newer k-quant methods; compatible with llama.cpp as of June 6 (commit 2d43387). A minimal loading example follows the table below.

---

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| llama-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| llama-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| llama-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
| llama-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0; quicker inference than the q5 models. |
| llama-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| llama-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| llama-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
| llama-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| llama-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| llama-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| llama-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors (6-bit quantization). |
| llama-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
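
The Max RAM figures above track the file size plus a flat ~2.50 GB working overhead, and presumably assume no GPU offloading. That constant is inferred from the table rows themselves, not taken from llama.cpp documentation; a quick sanity check of the relationship:

```python
# Sanity check: each "Max RAM required" entry equals the file size plus a
# flat 2.50 GB overhead. The overhead value is inferred from the table rows
# themselves, not from any official llama.cpp figure.
OVERHEAD_GB = 2.50

files = {  # name: (size_gb, max_ram_gb) as listed in the table
    "llama-13b.ggmlv3.q2_K.bin": (5.43, 7.93),
    "llama-13b.ggmlv3.q4_K_M.bin": (7.82, 10.32),
    "llama-13b.ggmlv3.q8_0.bin": (13.83, 16.33),
}

for name, (size_gb, max_ram_gb) in files.items():
    assert abs(size_gb + OVERHEAD_GB - max_ram_gb) < 1e-9, name
    print(f"{name}: {size_gb} GB + {OVERHEAD_GB} GB = {max_ram_gb} GB")
```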
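
## Example: running one of these files

Because these are GGML v3 files, they need a llama.cpp build (or binding) from the era noted above; later builds switched to the GGUF format. Below is a minimal sketch using the llama-cpp-python bindings, which this card does not itself mention. It assumes a GGML-era release of that package is installed and that the q4_K_M file from the table has been downloaded into the working directory; the exact version pin and generation settings are illustrative, not prescriptive.

```python
# Minimal sketch: loading a provided file with llama-cpp-python.
# Assumes a GGML-era release of the bindings (releases after the GGUF
# format change will not read .ggmlv3.bin files).
from llama_cpp import Llama

# Any file from the table works; q4_K_M is a common size/quality balance.
llm = Llama(
    model_path="./llama-13b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,   # LLaMA context window
    n_threads=8,  # adjust to your CPU
)

# LLaMA 13B is a base model, so use a plain completion prompt.
output = llm(
    "Building a website can be done in 10 simple steps:",
    max_tokens=128,
    echo=True,
)
print(output["choices"][0]["text"])
```

The same files can be run directly with a llama.cpp binary built from the commits noted above; the RAM column indicates which quantizations fit a given machine.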