Upload README.md with huggingface_hub
README.md CHANGED

@@ -1,13 +1,6 @@
 ---
 quantized_by: bartowski
 pipeline_tag: text-generation
-license: apache-2.0
-license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
-base_model: Qwen/QwQ-32B
-tags:
-- chat
-language:
-- en
 ---
 ## 💫 Community Model> QwQ 32B by Qwen

@@ -23,7 +16,6 @@ Supports a context length of 128k tokens.

 Qwen's full release of their QwQ reasoning model.

-
 ## Special thanks

 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
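Per the commit message, this README was pushed with `huggingface_hub`. A minimal sketch of such an upload using the library's `HfApi.upload_file` method (the helper name and example repo id below are illustrative, and the call requires a valid login token):

```python
def upload_readme(repo_id: str, readme_path: str = "README.md") -> None:
    """Upload a model-card README to a Hugging Face repo.

    Assumes `huggingface_hub` is installed and you are authenticated
    (e.g. via `huggingface-cli login`).
    """
    # Imported lazily so the module itself has no third-party dependency.
    from huggingface_hub import HfApi

    HfApi().upload_file(
        path_or_fileobj=readme_path,     # local file to push
        path_in_repo="README.md",        # destination path in the repo
        repo_id=repo_id,                 # e.g. "your-username/QwQ-32B-GGUF" (hypothetical)
        commit_message="Upload README.md with huggingface_hub",
    )
```

Each such call creates one commit on the repo, which is how a metadata-only README change like the diff above appears in the history.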