Update README.md
README.md CHANGED

@@ -9,6 +9,8 @@ tags:
 quantized_by: bartowski
 ---
 
+#### Special thanks to <a href="https://huggingface.co/chargoddard">Charles Goddard</a> for the conversion script to create llama models from internlm
+
 ## Exllama v2 Quantizations of internlm2-math-7b-llama
 
 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
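For context on the quantization step mentioned in the README, below is a minimal sketch of how an ExLlamaV2 conversion is typically invoked. The directory paths and the 6.5 bits-per-weight target are hypothetical examples, not values from this commit, and the `convert.py` flags should be checked against the ExLlamaV2 v0.0.12 documentation before use.

```python
# Sketch: running ExLlamaV2's convert.py to produce an exl2 quant.
# Assumes the exllamav2 repo (v0.0.12) is checked out locally and the
# llama-format internlm2-math-7b weights are already on disk.
# All paths and the bpw target below are hypothetical.
import subprocess

source_dir = "models/internlm2-math-7b-llama"          # fp16 llama-format weights
work_dir = "work/internlm2-math-7b-llama-exl2"         # scratch dir for measurement state
output_dir = "quants/internlm2-math-7b-llama-6.5bpw"   # finished exl2 quant lands here
bits_per_weight = "6.5"                                # target average bits per weight

subprocess.run(
    [
        "python", "exllamav2/convert.py",
        "-i", source_dir,      # input model directory
        "-o", work_dir,        # working directory for intermediate files
        "-cf", output_dir,     # compile the finished quant into this directory
        "-b", bits_per_weight, # target average bits per weight
    ],
    check=True,
)
```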