Upload README.md with huggingface_hub
README.md CHANGED
@@ -20,14 +20,11 @@ tags:
 
 # Quant Infos
 
-
-
-
-
-
-<!-- - gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt) -->
-<!-- - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S -->
-- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) WIP [branch](https://github.com/ggerganov/llama.cpp/pull/7225)
+- Requires the latest llama.cpp master (must include the phi3 128k [branch](https://github.com/ggerganov/llama.cpp/pull/7225))
+- Quants generated with an importance matrix to reduce quantization loss
+- gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt)
+- [WIP] Wide coverage of different gguf quant types from Q8\_0 down to IQ1\_S
+- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [201cc11afa0a1950e1f632390b2ac6c937a0d8f0](https://github.com/ggerganov/llama.cpp/commit/201cc11afa0a1950e1f632390b2ac6c937a0d8f0)
 - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset.
 ```
 ./imatrix -c 512 -m $model_name-bf16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-bf16-gmerged.dat
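
For context, below is a minimal sketch of the full pipeline the bullets above describe: convert the original weights to a bf16 GGUF, build the importance matrix with the command shown, then pass that imatrix to the quantizer. The `$model_dir`/`$model_name` variables and the `IQ2_XS` target type are illustrative assumptions, not specifics from this README; the binary names `imatrix` and `quantize` match llama.cpp as of the referenced commit (later releases renamed them to `llama-imatrix` and `llama-quantize`).

```
# Hypothetical end-to-end sketch; $model_dir, $model_name, and the IQ2_XS
# target type are assumptions for illustration.

# 1. Convert the HF checkpoint to a bf16 GGUF ("gguf & imatrix generated from bf16").
python convert-hf-to-gguf.py $model_dir --outtype bf16 --outfile $model_name-bf16.gguf

# 2. Build the importance matrix from the calibration dataset (same command as above).
./imatrix -c 512 -m $model_name-bf16.gguf -f groups_merged.txt -o imat-bf16-gmerged.dat

# 3. Quantize, passing the imatrix so low-bit quant types can weight salient tensors.
./quantize --imatrix imat-bf16-gmerged.dat $model_name-bf16.gguf $model_name-IQ2_XS.gguf IQ2_XS
```

The imatrix step matters most for the very low-bit types (IQ2 and below), which is why it is generated once from the bf16 model and reused across all quant targets.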