Commit · 5dbf27b0
Parent(s):
Duplicate from localmodels/LLM
- .gitattributes +36 -0
- README.md +36 -0
- llama-65B.ggmlv3.q6_K.z01 +3 -0
- llama-65B.ggmlv3.q6_K.zip +3 -0
- llama-65B.ggmlv3.q8_0.z01 +3 -0
- llama-65B.ggmlv3.q8_0.zip +3 -0
- llama-65b.ggmlv3.q2_K.bin +3 -0
- llama-65b.ggmlv3.q3_K_L.bin +3 -0
- llama-65b.ggmlv3.q3_K_M.bin +3 -0
- llama-65b.ggmlv3.q3_K_S.bin +3 -0
- llama-65b.ggmlv3.q4_0.bin +3 -0
- llama-65b.ggmlv3.q4_1.bin +3 -0
- llama-65b.ggmlv3.q4_K_M.bin +3 -0
- llama-65b.ggmlv3.q4_K_S.bin +3 -0
- llama-65b.ggmlv3.q5_0.bin +3 -0
- llama-65b.ggmlv3.q5_1.bin +3 -0
- llama-65b.ggmlv3.q5_K_M.bin +3 -0
- llama-65b.ggmlv3.q5_K_S.bin +3 -0
.gitattributes
ADDED
@@ -0,0 +1,36 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+llama-65B.ggmlv3.q6_K.z01 filter=lfs diff=lfs merge=lfs -text
+llama-65B.ggmlv3.q8_0.z01 filter=lfs diff=lfs merge=lfs -text
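Note that the wildcard rules above already route `*.bin` and `*.zip` files through LFS, while the `.z01` halves of the split archives match no wildcard, which is why the last two lines name them explicitly. A small illustrative check of that distinction (not part of the commit; it uses Python's `fnmatch` as a stand-in for gitattributes glob matching):

```python
# Illustrative sketch (not part of the commit): show which repository files
# are already covered by the wildcard LFS rules and which need explicit lines.
from fnmatch import fnmatch

lfs_patterns = ["*.bin", "*.zip"]  # subset of the wildcard rules above
repo_files = [
    "llama-65b.ggmlv3.q4_K_M.bin",
    "llama-65B.ggmlv3.q6_K.zip",
    "llama-65B.ggmlv3.q6_K.z01",
]

for name in repo_files:
    covered = any(fnmatch(name, pattern) for pattern in lfs_patterns)
    print(f"{name}: {'LFS via wildcard' if covered else 'needs an explicit rule'}")
```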
README.md
ADDED
@@ -0,0 +1,36 @@
+---
+duplicated_from: localmodels/LLM
+---
+# LLaMA 65B ggml
+
+From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai
+
+---
+
+### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+Quantized using an older version of llama.cpp; compatible with llama.cpp as of May 19, commit 2d5db48.
+
+### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
+
+Quantized using the new k-quant methods; compatible with llama.cpp as of June 6, commit 2d43387.
+
+---
+
+## Provided files
+| Name | Quant method | Bits | Size | Max RAM required | Use case |
+| ---- | ---- | ---- | ---- | ---- | ----- |
+| llama-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.33 GB | 29.83 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+| llama-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.55 GB | 37.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+| llama-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.40 GB | 33.90 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+| llama-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.06 GB | 30.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
+| llama-65b.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB | 39.23 GB | Original quant method, 4-bit. |
+| llama-65b.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0; faster inference than the q5 models. |
+| llama-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.28 GB | 41.78 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
+| llama-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.73 GB | 39.23 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
+| llama-65b.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
+| llama-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+| llama-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.20 GB | 48.70 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
+| llama-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.89 GB | 47.39 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
+| llama-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K (6-bit quantization) for all tensors. |
+| llama-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
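As a minimal usage sketch (not part of the original model card): the GGMLv3 files listed above can be loaded with llama.cpp or its Python bindings. The snippet below assumes an older llama-cpp-python release that still accepts GGMLv3 files (current releases load GGUF only); the chosen quant file and generation settings are illustrative.

```python
# Minimal sketch: load one of the provided GGMLv3 files with llama-cpp-python.
# Assumes an older llama-cpp-python release that still supports GGMLv3 and
# enough free RAM for the chosen quant (see "Max RAM required" above).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-65b.ggmlv3.q4_K_M.bin",  # any of the provided .bin files
    n_ctx=2048,   # LLaMA context length
    n_threads=8,  # tune to your CPU
)

output = llm(
    "Building a website can be done in 10 simple steps:",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```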
llama-65B.ggmlv3.q6_K.z01
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40a9e3e3724947d9a5a445776b96aa07a5c27222d71ce51f8faf58e833211284
+size 41943040000
llama-65B.ggmlv3.q6_K.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:455129fbee1084c165e6bf174ddb90d0da81663f430d7f964a5d98750b02844b
+size 11616276416
llama-65B.ggmlv3.q8_0.z01
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc7a45df9f58bc6efcbb04cb30bbcc4d2cee1261d602ba1d304ff676bb388e66
+size 41943040000
llama-65B.ggmlv3.q8_0.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c83975dbddf0a567ee1dab831b0e57bdc5cd1a9524715d32638807b837746eb9
+size 27427327936
llama-65b.ggmlv3.q2_K.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43b58561af47f3a0c98a22f11cc7c6cad3c818cd72135afdea4da7cae090c4ea
+size 27325419136
llama-65b.ggmlv3.q3_K_L.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a609fa3c64510a65bf95152e392d47aa8b052167b32dee21554b8e5d254e28b
+size 34545684096
llama-65b.ggmlv3.q3_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3d1ff40a2ce213ba374591192a923c8fb4f1b32a3cd6afff18b437395632a90
+size 31399956096
llama-65b.ggmlv3.q3_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26ddf57e17a716c56d881bf916b8694ffb836042fb62d285a48ab0e0a0b0066b
+size 28057620096
llama-65b.ggmlv3.q4_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cde053439fa4910ae454407e2717cc46cc2c2b4995c00c93297a2b52e790fa92
+size 36728196736
llama-65b.ggmlv3.q4_1.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ac9681fd98c6baa1bff23b6b816c2edf6d10395cb1eb1e55ff9676b456010f0
+size 40808468096
llama-65b.ggmlv3.q4_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3291af38093349355eb62e8ffb7c72c2b8c41abd7dbbf99698df7ab076d6112c
+size 39280168576
llama-65b.ggmlv3.q4_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51f28956a8c3f39cf6c832aa4031615d3bbb8b3b66c3baa51c38be766deb7f65
+size 36728196736
llama-65b.ggmlv3.q5_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1c4b887decdce79457a781cb4850d1a95531d49e92e3044ebbc4f8081c69e48
+size 44888739456
llama-65b.ggmlv3.q5_1.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:430e30e407997cfadfc011b862b64cf9a916e06fa2d928243e83c26378880e52
+size 48969010816
llama-65b.ggmlv3.q5_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1504aefd75452349fe04271434cdf4f8007312f6484e1bdfbdadfde2683db7f4
+size 46203391616
llama-65b.ggmlv3.q5_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e95373f11af1f6e6d5c490dba98946a55be8baffed8e87b28c1eb9e83aacbc1f
+size 44888739456
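Each model file in this commit is stored through Git LFS, so the diff only adds pointer files of the form shown above: a `version` line, an `oid sha256:...`, and a `size` in bytes. As an illustrative sketch (not part of the commit), the snippet below parses such a pointer and verifies a downloaded blob against it; the paths are placeholders.

```python
# Illustrative sketch (not part of this commit): parse a Git LFS pointer file
# like the ones added above and verify a downloaded blob against its sha256
# oid and byte size. Paths are placeholders.
import hashlib
from pathlib import Path


def parse_lfs_pointer(pointer_path: str) -> dict:
    """Read the key/value lines of a Git LFS pointer file."""
    fields = {}
    for line in Path(pointer_path).read_text().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


def verify_blob(pointer_path: str, blob_path: str, chunk_size: int = 1 << 20) -> bool:
    """Check that blob_path matches the oid and size recorded in the pointer."""
    fields = parse_lfs_pointer(pointer_path)
    expected_oid = fields["oid"].removeprefix("sha256:")
    expected_size = int(fields["size"])

    blob = Path(blob_path)
    if blob.stat().st_size != expected_size:
        return False

    digest = hashlib.sha256()
    with blob.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid


# Example with placeholder paths:
# verify_blob("llama-65b.ggmlv3.q4_K_M.pointer", "llama-65b.ggmlv3.q4_K_M.bin")
```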