QwQ 32B GGUF

Original model: QwQ 32B

Model creator: Qwen

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ is capable of thinking and reasoning, achieving significantly enhanced performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model in the series, achieving performance competitive with state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.

This repo contains GGUF format model files for Qwen’s QwQ 32B. Learn more on Qwen’s QwQ 32B blog post.

What is GGUF?

GGUF is a binary file format for packaging model weights and metadata for inference. It was introduced by the llama.cpp team on August 21st, 2023, as the successor to the GGML format.

Converted with llama.cpp build b4831 (revision 5e43f10), using autogguf-rs.
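
To verify a download, you can inspect a GGUF file's embedded metadata (architecture, context length, chat template, quantization type) before loading it. Here is a minimal sketch using the gguf-dump script that ships with llama.cpp's gguf Python package; the quant filename below is just an example, so substitute whichever file you downloaded:

# install the gguf utilities published from llama.cpp's gguf-py:
$ pip install gguf
# print the file's key-value metadata & tensor listing (example filename):
$ gguf-dump ~/Downloads/qwq-32b.Q4_K_M.gguf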

Prompt template: ChatML (with <think> tokens)

<|im_start|>system
{{system_message}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
<think>
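
For example, with a generic system message and a concrete question substituted in (both placeholders of our choosing), the rendered prompt sent to the model looks like:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
How many prime numbers are there below 100?<|im_end|>
<|im_start|>assistant
<think>

The trailing <think> is prefilled so the model begins by emitting its reasoning. If you run the model with llama.cpp's llama-cli in conversation mode (-cnv), the ChatML template embedded in the GGUF is applied for you, so you don't need to construct this string by hand.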

Note: merging the split f16 model files

To merge the split model files for the f16-precision GGUFs, use the llama-gguf-split tool, which is built alongside llama.cpp and its examples.

It accepts the path to the first of the downloaded splits, assuming the rest are alongside it, and an output path. For example:

# from your llama.cpp directory:
$ cmake -B build
$ cmake --build build --config Release
$ ./build/bin/llama-gguf-split --merge \
    ~/Downloads/qwq-32b.f16.split-00001-of-00002.gguf \
    ~/Downloads/qwq-32b.f16.gguf
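
As an optional sanity check that the merge succeeded, load the merged file for a short generation; -m, -p, and -n are standard llama-cli flags, and the prompt & token count here are arbitrary:

# confirm the merged f16 file loads & generates:
$ ./build/bin/llama-cli -m ~/Downloads/qwq-32b.f16.gguf \
    -p "Hello" -n 16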

Download & run with cnvrs on iPhone, iPad, and Mac!

cnvrs.ai

cnvrs is the best app for private, local AI on your device:

  • create & save Characters with custom system prompts & temperature settings
  • download and experiment with any GGUF model you can find on HuggingFace!
    • or, use an API key with the chat-completions-compatible model provider of your choice: ChatGPT, Claude, Gemini, DeepSeek, & more!
  • make it your own with custom Theme colors
  • powered by Metal ⚡️ & llama.cpp, with haptics during response streaming!
  • try it out yourself today on TestFlight!
  • follow cnvrs on Twitter to stay up to date

QwQ 32B in cnvrs on macOS

Original Model Evaluation

(benchmark results chart from Qwen's QwQ 32B blog post)
