morriszms committed · Commit 5fb420e · verified · 1 Parent(s): f8e2481

Upload folder using huggingface_hub

Llama-3-8B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:444ba5282b30f0e94d89677440275e6640ddb1c25e393440552e388e322ff666
+size 3179131424
Llama-3-8B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6842a0b771af0c711ed646bd403dacbdcfce632e2a258ebdaf6af6d0790501d
+size 4321956384
Llama-3-8B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42eb619563f8d66033b7b8ae283f83b9829269dd4b25490c6b9b6f1e4daae6b2
+size 4018917920
Llama-3-8B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eef3e810b04b7a8e3c9ce18b9164eee5df3352a4badfd6386e8f72b9e1dc7ab1
+size 3664499232
Llama-3-8B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d69bf2cadd6b5c17ae8f542afe5f80505d5d040f5bd4fd4c20206b44af7e4c1
+size 4661211680
Llama-3-8B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4671fb525e802307805170d604a19991055be8f5b0348403b43b311c25b0423
+size 4920734240
Llama-3-8B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62b91cf4cfbd463ff217809dadc7f785287c6437ef71474f0b8a7d083a4339e7
+size 4692668960
Llama-3-8B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:116c7f987337193fa8f60d6891d591d2649403e166dbdcac37fd69cc17e95d5d
+size 5599293984
Llama-3-8B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:101ae028b15ee0658bdbe4b699be8544080f87cb85e3d01ae0737bcbdf6d611d
+size 5732987424
Llama-3-8B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c934d82a897de73bd99ab7fe099823303842cef4de2d08be1058a56e2e7eb2b4
+size 5599293984
Llama-3-8B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:409b50862efefd851637fcf38fcb8b0c0235ea8454b6b237741f2a45db60278f
+size 6596006432
Llama-3-8B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6dde64fd17ebd7ca57fe776feb4e82818e1998016160b278390f4b9d0cc13050
+size 8540770848
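
Each `ADDED` entry above is a Git LFS pointer file: the repository itself stores only the `version`, `oid sha256`, and `size` fields, while the actual GGUF weights live in LFS storage. The recorded digest can double as an integrity check after downloading. A minimal sketch, assuming GNU coreutils and using the digest listed above for Llama-3-8B-Q2_K.gguf:

```shell
# Check a downloaded quant against the sha256 recorded in its LFS pointer
# (note the two spaces between digest and filename expected by sha256sum -c).
echo "444ba5282b30f0e94d89677440275e6640ddb1c25e393440552e388e322ff666  Llama-3-8B-Q2_K.gguf" | sha256sum -c -
```

The `size` field offers a quicker sanity check: the downloaded Llama-3-8B-Q2_K.gguf should be exactly 3179131424 bytes.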
README.md CHANGED
@@ -1,16 +1,21 @@
 ---
 language:
-- en
-library_name: transformers
+- sv
+- da
+- 'no'
 license: llama3
 tags:
-- unsloth
-- transformers
+- pytorch
 - llama
 - llama-3
+- ai-sweden
 - TensorBlock
 - GGUF
-base_model: unsloth/llama-3-8b
+base_model: AI-Sweden-Models/Llama-3-8B
+pipeline_tag: text-generation
+inference:
+  parameters:
+    temperature: 0.6
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -24,12 +29,11 @@ base_model: unsloth/llama-3-8b
   </div>
 </div>
 
-## unsloth/llama-3-8b - GGUF
+## AI-Sweden-Models/Llama-3-8B - GGUF
 
-This repo contains GGUF format model files for [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b).
+This repo contains GGUF format model files for [AI-Sweden-Models/Llama-3-8B](https://huggingface.co/AI-Sweden-Models/Llama-3-8B).
 
-The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
-
+The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
@@ -39,7 +43,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 ## Prompt template
 
-
 ```
 
 ```
@@ -48,18 +51,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [llama-3-8b-Q2_K.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
-| [llama-3-8b-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
-| [llama-3-8b-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
-| [llama-3-8b-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
-| [llama-3-8b-Q4_0.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [llama-3-8b-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
-| [llama-3-8b-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
-| [llama-3-8b-Q5_0.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [llama-3-8b-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
-| [llama-3-8b-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
-| [llama-3-8b-Q6_K.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
-| [llama-3-8b-Q8_0.gguf](https://huggingface.co/tensorblock/llama-3-8b-GGUF/blob/main/llama-3-8b-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |
+| [Llama-3-8B-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
+| [Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
+| [Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
+| [Llama-3-8B-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
+| [Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
+| [Llama-3-8B-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
+| [Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
+| [Llama-3-8B-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
+| [Llama-3-8B-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-3-8B-GGUF/blob/main/Llama-3-8B-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
@@ -75,11 +78,11 @@ pip install -U "huggingface_hub[cli]"
 Then, download the individual model file to a local directory:
 
 ```shell
-huggingface-cli download tensorblock/llama-3-8b-GGUF --include "llama-3-8b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+huggingface-cli download tensorblock/Llama-3-8B-GGUF --include "Llama-3-8B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
 ```
 
 If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
 
 ```shell
-huggingface-cli download tensorblock/llama-3-8b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+huggingface-cli download tensorblock/Llama-3-8B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
 ```
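
Beyond downloading, a quick way to confirm a file works is to load it with a llama.cpp build at or after the pinned commit b4242. This is a minimal sketch, not part of the repo's own instructions; it assumes the `llama-cli` binary is on `PATH` and that `MY_LOCAL_DIR` matches the download commands above:

```shell
# Smoke-test the recommended Q4_K_M quant with a short completion
# (the prompt is arbitrary; -n caps the number of generated tokens).
llama-cli -m MY_LOCAL_DIR/Llama-3-8B-Q4_K_M.gguf -p "Stockholm är" -n 64
```

Since the prompt template section of the README is empty, plain text completion like this, rather than a chat format, is the appropriate way to exercise the base model.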