
huihui-ai/QwQ-32B-Coder-Fusion-7030

Overview

QwQ-32B-Coder-Fusion-7030 is a merged model that combines the strengths of two powerful Qwen-based models: huihui-ai/QwQ-32B-Preview-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated.
The weights are blended in a 7:3 ratio: 70% from QwQ-32B-Preview-abliterated and 30% from Qwen2.5-Coder-32B-Instruct-abliterated. Although it is a simple linear mix, the model is usable and produces no gibberish. This is an experiment: I tested the 9:1, 8:2, and 7:3 ratios separately to see how much impact each ratio has on the model.

Please refer to the source code used for the mix.
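
For reference, here is a minimal sketch of such a 7:3 linear blend in plain PyTorch, assuming both source models share identical parameter names and shapes; the output path is illustrative and this may differ from the author's actual mixing script:

```python
# Minimal sketch of a 7:3 linear weight blend, assuming both source models
# share identical parameter names and shapes. Loading two 32B models needs
# substantial memory; this is illustrative, not the author's exact script.
import torch
from transformers import AutoModelForCausalLM

ALPHA = 0.7  # 70% QwQ-32B-Preview-abliterated, 30% Qwen2.5-Coder-32B-Instruct-abliterated

base = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/QwQ-32B-Preview-abliterated", torch_dtype=torch.bfloat16
)
coder = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated", torch_dtype=torch.bfloat16
)

# Element-wise linear interpolation of every matching parameter tensor.
coder_state = coder.state_dict()
merged = {
    name: ALPHA * tensor + (1.0 - ALPHA) * coder_state[name]
    for name, tensor in base.state_dict().items()
}

base.load_state_dict(merged)
base.save_pretrained("QwQ-32B-Coder-Fusion-7030")  # hypothetical output path
```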

Model Details

ollama

You can run huihui_ai/qwq-fusion:32b-7030 directly:

```
ollama run huihui_ai/qwq-fusion:32b-7030
```

Other blend ratios are available under huihui_ai/qwq-fusion.
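
The model can also be queried programmatically over Ollama's local HTTP API. A minimal sketch, assuming an Ollama server is running on the default port 11434 and the tag above has already been pulled:

```python
# Minimal sketch: query the model through Ollama's local HTTP API.
# Assumes `ollama serve` is running on the default port 11434.
import json
import urllib.request

payload = json.dumps({
    "model": "huihui_ai/qwq-fusion:32b-7030",
    "prompt": "Write a Python function that reverses a linked list.",
    "stream": False,  # return one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```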

Model tree for matatonic/QwQ-32B-Coder-Fusion-7030-4.25bpw-exl2

Base model: Qwen/Qwen2.5-32B (this repository is one of its 23 quantized derivatives)