morriszms committed (verified)
Commit 98ac327 · 1 Parent(s): 5b1eadf

Upload folder using huggingface_hub

Qwen2.5-3B-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b41783a9d64374acffe1b650294d5c88308faaca18f1a4a4398c9bddc8736286
-size 1274755616
+oid sha256:7514076ac1ea0d62a2cc8b0985d223056137ecb90bf897df936d5cb2488fd182
+size 1274753280
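Each pointer diff in this commit follows the git-lfs pointer format: a `version` line, an `oid sha256:...` line, and a `size` line in bytes. A minimal sketch (not part of this repo; the function names are illustrative) for checking a downloaded file against such a pointer:

```python
# Sketch: parse a git-lfs pointer and verify that a downloaded file
# matches the sha256 and size it records.
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a pointer file into a dict entry."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_file(path: str, pointer_text: str) -> bool:
    """Stream the file, then compare its size and sha256 to the pointer."""
    fields = parse_lfs_pointer(pointer_text)
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    expected_oid = fields["oid"].split(":", 1)[1]  # drop the "sha256:" prefix
    return size == int(fields["size"]) and digest.hexdigest() == expected_oid
```

For example, after this commit the Q2_K pointer records oid `7514076a…` and size 1274753280 bytes, so `verify_file` should return `True` for an intact download of that file.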
Qwen2.5-3B-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:27b08956e13d07ee9fe26e654df34adf12ec40d2a5bca07e966ab799d790f8fc
-size 1707391520
+oid sha256:3ee4ce06df941d1e48c33d452f488af49ba1a5da36a9f218970a71f8af0899d9
+size 1707389184
Qwen2.5-3B-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:455c198d0368634f0f945a5576c650489f92526bdb06cf46c51eebd2712f9013
-size 1590475296
+oid sha256:2b6c08f5c12075cb7705d1a1f2620b6b03aeb3d479265531ecc2e47bc9861246
+size 1590472960
Qwen2.5-3B-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:41959884a322242a44b5a4b8df47cb4cd93025e2ad78601400b919bd71ad4aa0
-size 1454357024
+oid sha256:952f6770250b6f6aff8a5a8c09216e8c50c7d1fc03d07bfe2232be3fccc16913
+size 1454354688
Qwen2.5-3B-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b019bfcf1b658a97e6d1302672cc83fff03e8546ef0b2639f2aa338e05dfed0a
-size 1822849568
+oid sha256:71252fe8f31368e45f4d5e586c53dba9d57e3f036e2f5451d243b8fa69864902
+size 1822847232
Qwen2.5-3B-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:222f536efb65fa78d47ccc402dc41dabb982a20f88f97feba57675ef1ff72b77
-size 1929902624
+oid sha256:2851fd1cca4b0bdbfc17608deedaef98d6fdee8bbc60f348455e0070ce4d3a56
+size 1929900288
Qwen2.5-3B-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:14df0fc053248beecdc2d0e3770e39bf8b5c80bd20d59157a85913225759d5fc
-size 1834383904
+oid sha256:282d95058b77661e79cd084ea6eadd9eef0e549afb97776f00fed9689061df11
+size 1834381568
Qwen2.5-3B-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3e0953436c190e805b942fc7ba6731f18368bd884e9d90524b2bffbd5e3c2391
-size 2169666080
+oid sha256:24ee1065e37544d28b55dd7fe40db22de60a14a6ca2e0341f3eeebf4fa1b9754
+size 2169663744
Qwen2.5-3B-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:51b36ef476cd96df27ab09cf50fef9476567b96e077784095d32fe35d6ad3e80
-size 2224814624
+oid sha256:2ff612358ef4f432f705d55914242d32fe3c36e4bb2366fcb1979cbaae8f8a54
+size 2224812288
Qwen2.5-3B-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:926716da8372d021d7bb8dee44da59920d3ffc21cead588624de13b51910dcd4
-size 2169666080
+oid sha256:ece7ecdcacac14b413068857ead8d7178533df38b0a3393b93eb5c15077c3d1d
+size 2169663744
Qwen2.5-3B-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9930ed5e4acd4f874892e37c9a40aee452b6975818305b723a15ddcebea2b902
-size 2538158624
+oid sha256:bc3dde5b4ed32f5a33d1ed127d3615c0fec5ff692007c722b8969e1ff0fb4b1a
+size 2538156288
Qwen2.5-3B-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9da169571026bdf99cd6eadf09769c80c45f0d27ecfb56e3056db13bd0a96c82
-size 3285475872
+oid sha256:166504c09d1f2b9d0393b858817bf13982ccc50d1e97810752e14d403fb32685
+size 3285473536
README.md CHANGED
@@ -1,12 +1,12 @@
 ---
-license: other
-license_name: qwen-research
-license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
+base_model: unsloth/Qwen2.5-3B
 language:
 - en
-pipeline_tag: text-generation
-base_model: Qwen/Qwen2.5-3B
+library_name: transformers
+license: other
 tags:
+- unsloth
+- transformers
 - TensorBlock
 - GGUF
 ---
@@ -22,13 +22,12 @@ tags:
 </div>
 </div>
 
-## Qwen/Qwen2.5-3B - GGUF
+## unsloth/Qwen2.5-3B - GGUF
 
-This repo contains GGUF format model files for [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
+This repo contains GGUF format model files for [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
-
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 Run them on the TensorBlock client using your local machine ↗
@@ -37,31 +36,26 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 ## Prompt template
 
-
 ```
-<|im_start|>system
-{system_prompt}<|im_end|>
-<|im_start|>user
-{prompt}<|im_end|>
-<|im_start|>assistant
+
 ```
 
 ## Model file specification
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Qwen2.5-3B-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q2_K.gguf) | Q2_K | 1.187 GB | smallest, significant quality loss - not recommended for most purposes |
-| [Qwen2.5-3B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q3_K_S.gguf) | Q3_K_S | 1.354 GB | very small, high quality loss |
-| [Qwen2.5-3B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q3_K_M.gguf) | Q3_K_M | 1.481 GB | very small, high quality loss |
-| [Qwen2.5-3B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q3_K_L.gguf) | Q3_K_L | 1.590 GB | small, substantial quality loss |
-| [Qwen2.5-3B-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q4_0.gguf) | Q4_0 | 1.698 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [Qwen2.5-3B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q4_K_S.gguf) | Q4_K_S | 1.708 GB | small, greater quality loss |
-| [Qwen2.5-3B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q4_K_M.gguf) | Q4_K_M | 1.797 GB | medium, balanced quality - recommended |
-| [Qwen2.5-3B-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q5_0.gguf) | Q5_0 | 2.021 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [Qwen2.5-3B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q5_K_S.gguf) | Q5_K_S | 2.021 GB | large, low quality loss - recommended |
-| [Qwen2.5-3B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q5_K_M.gguf) | Q5_K_M | 2.072 GB | large, very low quality loss - recommended |
-| [Qwen2.5-3B-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q6_K.gguf) | Q6_K | 2.364 GB | very large, extremely low quality loss |
-| [Qwen2.5-3B-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q8_0.gguf) | Q8_0 | 3.060 GB | very large, extremely low quality loss - not recommended |
+| [Qwen2.5-3B-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q2_K.gguf) | Q2_K | 1.275 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Qwen2.5-3B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q3_K_S.gguf) | Q3_K_S | 1.454 GB | very small, high quality loss |
+| [Qwen2.5-3B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q3_K_M.gguf) | Q3_K_M | 1.590 GB | very small, high quality loss |
+| [Qwen2.5-3B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q3_K_L.gguf) | Q3_K_L | 1.707 GB | small, substantial quality loss |
+| [Qwen2.5-3B-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q4_0.gguf) | Q4_0 | 1.823 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Qwen2.5-3B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q4_K_S.gguf) | Q4_K_S | 1.834 GB | small, greater quality loss |
+| [Qwen2.5-3B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q4_K_M.gguf) | Q4_K_M | 1.930 GB | medium, balanced quality - recommended |
+| [Qwen2.5-3B-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q5_0.gguf) | Q5_0 | 2.170 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Qwen2.5-3B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q5_K_S.gguf) | Q5_K_S | 2.170 GB | large, low quality loss - recommended |
+| [Qwen2.5-3B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q5_K_M.gguf) | Q5_K_M | 2.225 GB | large, very low quality loss - recommended |
+| [Qwen2.5-3B-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q6_K.gguf) | Q6_K | 2.538 GB | very large, extremely low quality loss |
+| [Qwen2.5-3B-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-GGUF/blob/main/Qwen2.5-3B-Q8_0.gguf) | Q8_0 | 3.285 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
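The README's "Downloading instruction" section is truncated in this diff. As a hedged sketch only: one plausible way to fetch a single quantized file is `huggingface_hub`'s `hf_hub_download`, using the repo id and filenames from the table above (the `gguf_filename` helper is illustrative, not part of the repo):

```python
# Sketch: download one quantized file from the repo listed in the table above.
# REPO_ID and the filename pattern come from the table; gguf_filename is an
# illustrative helper, not part of the repo.
REPO_ID = "tensorblock/Qwen2.5-3B-GGUF"

def gguf_filename(quant: str) -> str:
    """Build the filename used in the model file table for a given quant type."""
    return f"Qwen2.5-3B-{quant}.gguf"

if __name__ == "__main__":
    # Requires: pip install huggingface_hub (and network access).
    from huggingface_hub import hf_hub_download
    local_path = hf_hub_download(repo_id=REPO_ID, filename=gguf_filename("Q4_K_M"))
    print(local_path)
```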