---
base_model: Qwen/QwQ-32B
pipeline_tag: text-generation
inference: true
language:
  - en
license: apache-2.0
model_creator: Qwen
model_name: QwQ-32B
model_type: qwen2
quantized_by: brittlewis12
tags:
  - reasoning
  - qwen2
---

# QwQ 32B GGUF

**Original model:** [QwQ 32B](https://huggingface.co/Qwen/QwQ-32B)

**Model creator:** [Qwen](https://huggingface.co/Qwen)

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.

This repo contains GGUF format model files for Qwen’s QwQ 32B. Learn more on Qwen’s QwQ 32B blog post.
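Individual quant files in this repo can be fetched directly over HTTP. As a minimal sketch, Hugging Face serves raw repo files at `/<repo>/resolve/<revision>/<filename>`, so a download URL can be built like this (the helper name is ours, not part of any library):

```python
# Sketch: building a direct download URL for one file in this repo.
# Hugging Face serves raw files at /<repo_id>/resolve/<revision>/<filename>.
from urllib.parse import quote


def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Return the direct download URL for a single file in a HF repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{quote(filename)}"
```

For example, `hf_file_url("brittlewis12/QwQ-32B-GGUF", "qwq-32b.f16.gguf")` yields the resolve URL for the merged f16 file; check the repo's file list for the exact quant filenames available.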

## What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023.
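Every GGUF file begins with a small fixed-size header. As a sketch of the layout described in the GGUF spec (magic `GGUF`, then a uint32 version and two uint64 counts, all little-endian), you can inspect a downloaded file with only the standard library:

```python
# Sketch: reading a GGUF file's fixed-size header with only the stdlib.
# Layout per the GGUF spec: magic (4 bytes, b"GGUF"), version (uint32),
# tensor_count (uint64), metadata_kv_count (uint64), all little-endian.
import struct

GGUF_MAGIC = b"GGUF"


def read_gguf_header(path: str) -> dict:
    """Return the format version and counts from a GGUF file's header."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != GGUF_MAGIC:
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        version, tensor_count, kv_count = struct.unpack("<IQQ", f.read(20))
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }
```

A current-generation file should report version 3, matching the format version mentioned above.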

Converted with llama.cpp build b4831 (revision 5e43f10), using autogguf-rs.

## Prompt template: ChatML (with `<think>` tokens)

```
<|im_start|>system
{{system_message}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
<think>
```
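The template above can be sketched as a small formatting helper (the function name is ours; real inference stacks typically apply this via the model's bundled chat template):

```python
# Sketch: rendering the ChatML prompt template above. The trailing <think>
# opener is appended so the model begins with its reasoning trace.
def format_chatml(system_message: str, prompt: str) -> str:
    """Fill the ChatML template with a system message and a user prompt."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n<think>\n"
    )
```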

### Note re: split f16 model files

To merge the split model files for the f16-precision GGUFs, run the `llama-gguf-split` tool, which is built alongside llama.cpp and its examples.

It accepts the path to the first of the downloaded splits, assuming the rest are alongside it, and an output path. For example:

```sh
# from your llama.cpp directory:
$ cmake -B build
$ cmake --build build --config Release
$ ./build/bin/llama-gguf-split --merge \
    ~/Downloads/qwq-32b.f16.split-00001-of-00002.gguf \
    ~/Downloads/qwq-32b.f16.gguf
```

## Download & run with [cnvrs](https://cnvrs.ai) on iPhone, iPad, and Mac!

cnvrs is the best app for private, local AI on your device:

  • create & save Characters with custom system prompts & temperature settings
  • download and experiment with any GGUF model you can find on HuggingFace!
    • or, use an API key with the chat completions-compatible model provider of your choice -- ChatGPT, Claude, Gemini, DeepSeek, & more!
  • make it your own with custom Theme colors
  • powered by Metal ⚡️ & Llama.cpp, with haptics during response streaming!
  • try it out yourself today, on Testflight!
  • follow cnvrs on twitter to stay up to date

### QwQ 32B in cnvrs on macOS

*(screenshot: qwq-32b running in cnvrs)*


## Original Model Evaluation

*(benchmark chart from the original QwQ-32B model card)*