Quant Infos
- Requires llama.cpp b4875
- LLM ONLY (No vision support)
- Quants done with an importance matrix (imatrix) to reduce quantization loss
- Quantized GGUFs & imatrix generated from the HF bf16 weights, keeping bf16 throughout (safetensors bf16 -> gguf bf16 -> quant) to minimize quantization loss; see the sketch after this list
- Wide coverage of different GGUF quant types, from Q8_0 down to IQ1_S (WIP)
- Imatrix generated with this multi-purpose dataset by bartowski:
  ./imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
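
For reference, a minimal sketch of the bf16 conversion and imatrix quantization flow using llama.cpp tooling. The binary/script names and the IQ4_XS target below are illustrative assumptions (tool names vary between llama.cpp builds), not the exact commands used to produce these quants:

```sh
# Sketch of the safetensors bf16 -> gguf bf16 -> quant pipeline, assuming a
# llama.cpp checkout; older builds ship ./quantize and ./imatrix instead of
# llama-quantize / llama-imatrix. $model_name is a placeholder.

# 1. Convert the HF safetensors checkpoint to a bf16 GGUF.
python convert_hf_to_gguf.py ./$model_name --outtype bf16 --outfile $model_name-bf16.gguf

# 2. Generate the importance matrix from the bf16 GGUF (as in the command above).
./llama-imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix

# 3. Quantize the bf16 GGUF, applying the imatrix (example target: IQ4_XS).
./llama-quantize --imatrix $model_name.imatrix $model_name-bf16.gguf $model_name-IQ4_XS.gguf IQ4_XS
```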