Model Details

This is an AWQ GEMV quant of magnum-v3-34b: https://huggingface.co/anthracite-org/magnum-v3-34b
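
As a usage sketch (not an official recipe), the quant can be loaded for inference with the AutoAWQ library; the repo id below matches this page, while the prompt and generation settings are illustrative:

```python
# Sketch: load this AWQ quant for inference with AutoAWQ (assumes autoawq is installed).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

repo_id = "andriadze/anthracite-magnum-v3-34b-awq-gemv"

# Load the quantized weights and tokenizer from the Hub.
model = AutoAWQForCausalLM.from_quantized(repo_id, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)

# Illustrative generation call; adjust the prompt and sampling settings as needed.
prompt = "Write a short greeting."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```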

Model Description

The model was quantized on 6x RTX 4090 GPUs with the following quantization parameters:

"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMV"

Quantized model size: 5.4B params (safetensors; I32 and FP16 tensors).
