Quant Infos

  • Requires llama.cpp b4875 or newer.
  • LLM only (no vision support).
  • Quants were made with an importance matrix (imatrix) to reduce quantization loss.
  • Quantized GGUFs and the imatrix were derived from the HF bf16 weights through a bf16 intermediate (safetensors bf16 -> GGUF bf16 -> quant) for optimal quantization loss.
  • Wide coverage of GGUF quant types, from Q8_0 down to IQ1_S (WIP).
  • The imatrix was generated with bartowski's multi-purpose calibration dataset (calibration_datav3.txt):
    ./imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
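The full pipeline described above can be sketched as a script, assuming a local llama.cpp checkout built at b4875 or newer; the model/path names and the `RUN=echo` dry-run guard are assumptions, not from this card, and in builds of this vintage the binaries are named `llama-imatrix` and `llama-quantize`:

```shell
#!/usr/bin/env bash
set -euo pipefail

MODEL=gemma-3-27b-it   # hypothetical local name for the HF bf16 checkout
LLAMA=./llama.cpp      # hypothetical path to a built llama.cpp tree
RUN=echo               # dry run: print the commands; set RUN= to execute them

# 1. safetensors bf16 -> GGUF bf16 (lossless intermediate)
$RUN python "$LLAMA/convert_hf_to_gguf.py" "./$MODEL" \
    --outtype bf16 --outfile "$MODEL-bf16.gguf"

# 2. importance matrix from the bf16 GGUF and the calibration set
$RUN "$LLAMA/build/bin/llama-imatrix" -m "$MODEL-bf16.gguf" \
    -f calibration_datav3.txt -o "$MODEL.imatrix"

# 3. imatrix-guided quantization, e.g. down to IQ4_XS
$RUN "$LLAMA/build/bin/llama-quantize" --imatrix "$MODEL.imatrix" \
    "$MODEL-bf16.gguf" "$MODEL-IQ4_XS.gguf" IQ4_XS
```

Step 3 is repeated once per target quant type (Q8_0 down to IQ1_S); the imatrix is generated once and reused for all of them.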
    
Format: GGUF
Model size: 27B params
Architecture: gemma3
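As background for the importance-matrix bullet above: the imatrix records per-weight activation statistics so the quantizer can minimize a weighted, rather than uniform, squared error. A toy illustration (pure Python, not llama.cpp's actual algorithm) using a single symmetric int4 scale chosen by grid search:

```python
import random

def quantize(ws, scale):
    """Symmetric int4 round-to-nearest: clamp to [-8, 7], then dequantize."""
    return [max(-8, min(7, round(w / scale))) * scale for w in ws]

def weighted_err(ws, imp, scale):
    """Importance-weighted squared quantization error."""
    return sum(i * (w - d) ** 2 for w, i, d in zip(ws, imp, quantize(ws, scale)))

def best_scale(ws, imp):
    """Grid-search the scale that minimizes the weighted error."""
    wmax = max(abs(w) for w in ws)
    grid = [wmax / 7 * (0.30 + 0.01 * k) for k in range(150)]
    return min(grid, key=lambda s: weighted_err(ws, imp, s))

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(256)]
importance = [random.uniform(0.1, 10.0) for _ in range(256)]  # stand-in for activation stats
uniform = [1.0] * 256

s_plain = best_scale(weights, uniform)     # scale chosen ignoring importance
s_imat = best_scale(weights, importance)   # importance-aware scale

# Both scales come from the same grid, so the importance-aware choice
# can never do worse on the importance-weighted error.
assert weighted_err(weights, importance, s_imat) <= weighted_err(weights, importance, s_plain)
```

The same idea, applied per tensor block with real activation statistics, is what the imatrix provides to the quantizer.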


Model tree for qwp4w3hyb/gemma-3-27b-it-iMat-GGUF
