---
license: apache-2.0
pipeline_tag: text-generation
tags:
- cortex.cpp
- featured
---

## Overview  

**QwQ** is the reasoning model of the **Qwen** series. Unlike conventional instruction-tuned models, **QwQ** is designed to think and reason, achieving significantly enhanced performance in downstream tasks, especially challenging problem-solving scenarios.  

**QwQ-32B** is the **medium-sized** reasoning model in the QwQ family, achieving **competitive performance** against state-of-the-art reasoning models such as **DeepSeek-R1** and **o1-mini**. It is optimized for tasks requiring logical deduction, multi-step reasoning, and advanced comprehension.  

The model is well-suited for **AI research, automated theorem proving, advanced dialogue systems, and high-level decision-making applications**.  

## Variants  

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [QwQ-32B](https://huggingface.co/cortexso/qwen-qwq/tree/main) | `cortex run qwen-qwq:32b` |  

## Use it with Jan (UI)  

1. Install **Jan** using [Quickstart](https://jan.ai/docs/quickstart)  
2. In the Jan Model Hub, search for:  
    ```bash
    cortexso/qwen-qwq
    ```  

## Use it with Cortex (CLI)  

1. Install **Cortex** using [Quickstart](https://cortex.jan.ai/docs/quickstart)  
2. Run the model with the command:  
    ```bash
    cortex run qwen-qwq
    ```  
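
Once the model is running, Cortex exposes an OpenAI-compatible HTTP API that other tools can call. The sketch below is a minimal example, assuming Cortex's default local address and port (`localhost:39281`) and the `qwen-qwq:32b` model identifier from the variants table; adjust these to match your installation.

```python
# Sketch: query a locally running Cortex server through its
# OpenAI-compatible chat-completions endpoint.
# Assumptions: default Cortex address/port and the qwen-qwq:32b model id.
import json
import urllib.request

BASE_URL = "http://localhost:39281/v1"  # assumed default Cortex API address


def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request for QwQ-32B."""
    return {
        "model": "qwen-qwq:32b",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def ask(prompt: str) -> str:
    """Send the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Explain step by step why 0.1 + 0.2 != 0.3 in floating point."))
```

Because QwQ is a reasoning model, responses typically include an extended chain of thought before the final answer, so expect longer completions than from a standard instruction-tuned model.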

## Credits  

- **Author:** Qwen Team  
- **Converter:** [Homebrew](https://www.homebrew.ltd/)  
- **Original License:** [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/)  
- **Paper:** [Introducing QwQ-32B: The Medium-Sized Reasoning Model](https://qwenlm.github.io/blog/qwq-32b/)