Update README.md

README.md CHANGED

@@ -4,4 +4,19 @@ base_model:
 - Qwen/QwQ-32B
 base_model_relation: quantized
 pipeline_tag: text-generation
----
+---
+Disclaimer: I don't know what I'm doing. I am not an expert at quantizing.
+
+Original Model: https://huggingface.co/Qwen/QwQ-32B
+| QwQ 32B EXL2 | Size |
+| --- | --- |
+| <a href="https://huggingface.co/cshared/Qwen-QwQ-32B-8.0bpw-exl2">**8.0bpw**</a> | 33.5 GB |
+| ~7.0bpw~ | WIP |
+| ~6.5bpw~ | WIP |
+| ~6.0bpw~ | WIP |
+| ~5.5bpw~ | WIP |
+| ~5.0bpw~ | WIP |
+| ~4.5bpw~ | WIP |
+| ~4.0bpw~ | WIP |
+| ~3.75bpw~ | WIP |
+| ~3.5bpw~ | WIP |
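
For reference, here is a minimal sketch of how one might download and run the 8.0bpw quant linked in the table, assuming the `huggingface_hub` and `exllamav2` Python packages. The class names follow the exllamav2 example scripts and may differ slightly between versions; the sampler settings and prompt are purely illustrative, and note the 8.0bpw weights alone are 33.5 GB, so autosplit across multiple GPUs may be required.

```python
# Sketch only: download the 8.0bpw EXL2 quant and run a short generation.
# Assumes huggingface_hub and exllamav2 are installed; API names follow the
# exllamav2 example scripts and may differ between versions.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Download the repo linked in the table above into the local HF cache.
model_dir = snapshot_download(repo_id="cshared/Qwen-QwQ-32B-8.0bpw-exl2")

# Load the EXL2 weights, splitting them automatically across available GPUs.
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

# Simple generation with illustrative sampler settings; for real chat use,
# apply the model's chat template to the prompt first.
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple("Explain EXL2 quantization in one sentence.", settings, 200))
```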