---
base_model: Qwen/QwQ-32B
pipeline_tag: text-generation
inference: true
language:
- en
license: apache-2.0
model_creator: Qwen
model_name: QwQ-32B
model_type: qwen2
quantized_by: brittlewis12
tags:
- reasoning
- qwen2
---
# QwQ 32B GGUF

**Original model**: [QwQ 32B](https://huggingface.co/Qwen/QwQ-32B)

**Model creator**: [Qwen](https://huggingface.co/Qwen)
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ is capable of thinking and reasoning, and achieves significantly better performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, achieving competitive performance against state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.

This repo contains GGUF format model files for Qwen's QwQ 32B. Learn more on Qwen's QwQ 32B blog post.
## What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023.
Converted with llama.cpp build b4831 (revision 5e43f10), using autogguf-rs.
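Every GGUF file starts with a small fixed header: the 4-byte magic `GGUF`, followed by the format version as a little-endian `uint32`. A minimal sketch of checking that header (the `read_gguf_header` helper is illustrative, not part of llama.cpp):

```python
import struct

def read_gguf_header(data: bytes) -> int:
    """Validate the GGUF magic and return the format version."""
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    # version is a little-endian uint32 immediately after the magic
    (version,) = struct.unpack_from("<I", data, 4)
    return version

# Example: a synthetic header for a version-3 GGUF file
header = b"GGUF" + struct.pack("<I", 3)
print(read_gguf_header(header))  # → 3
```

In practice you would pass the first 8 bytes of a real `.gguf` file; tools like llama.cpp perform this check before reading the metadata that follows.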
## Prompt template: ChatML (with `<think>` tokens)
```
<|im_start|>system
{{system_message}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
<think>
```
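For clients that don't apply the chat template automatically, the template can be filled in by hand. A small Python sketch (the function name is an illustrative choice, not an official API) that assembles the prompt string, ending with the assistant's opening `<think>` tag:

```python
def build_qwq_prompt(system_message: str, prompt: str) -> str:
    """Assemble a ChatML prompt that ends with the assistant's <think> tag."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n<think>\n"
    )

text = build_qwq_prompt("You are a helpful assistant.", "What is 2 + 2?")
print(text)
```

Because the prompt already opens the `<think>` block, the model begins generating its reasoning trace immediately; the final answer follows the closing `</think>` tag.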
## Download & run with cnvrs on iPhone, iPad, and Mac!

cnvrs is the best app for private, local AI on your device:
- create & save Characters with custom system prompts & temperature settings
- download and experiment with any GGUF model you can find on HuggingFace!
- or, use an API key with the chat completions-compatible model provider of your choice -- ChatGPT, Claude, Gemini, DeepSeek, & more!
- make it your own with custom Theme colors
- powered by Metal ⚡️ & Llama.cpp, with haptics during response streaming!
- try it out yourself today, on TestFlight!
- if you already have the app, download QwQ 32B now!
  - `cnvrsai:///models/search/hf?id=brittlewis12/QwQ-32B-GGUF`
- follow cnvrs on twitter to stay up to date