---
license: gemma
base_model:
  - google/gemma-3-27b-it
tags:
  - gemma
  - google
  - instruct
  - llm
  - gguf
  - GGUF
---

# Quant Infos

- Requires llama.cpp b4875 or newer
- LLM only (no vision support)
- Quants are generated with an importance matrix (imatrix) to reduce quantization loss
- Quantized GGUFs and the imatrix are produced from the HF bf16 weights, staying in bf16 throughout: safetensors bf16 -> GGUF bf16 -> quant, for minimal quantization loss (see the pipeline sketch below)
- Wide coverage of GGUF quant types, from Q8_0 down to IQ1_S (WIP)
- Imatrix generated with this multi-purpose dataset by bartowski:
  ```sh
  ./imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
  ```
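
For context, below is a minimal sketch of the conversion and quantization pipeline described above, using llama.cpp's `convert_hf_to_gguf.py`, `llama-imatrix`, and `llama-quantize`. The paths, output names, and the IQ4_XS target are illustrative assumptions, not the exact invocations used to build the quants in this repo (older llama.cpp builds name the imatrix binary `imatrix`, as in the command above).

```sh
# Illustrative sketch of the bf16 -> GGUF bf16 -> quant pipeline.
# Paths, names, and the quant target are placeholders.
model_name=gemma-3-27b-it

# 1. Convert the HF safetensors (bf16) to a bf16 GGUF, run from the llama.cpp repo root.
python convert_hf_to_gguf.py ./google/gemma-3-27b-it \
    --outtype bf16 \
    --outfile $model_name-bf16.gguf

# 2. Generate the importance matrix from the bf16 GGUF
#    (same step as the imatrix command shown above).
./llama-imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix

# 3. Quantize the bf16 GGUF using the imatrix; IQ4_XS is just an example target.
./llama-quantize --imatrix $model_name.imatrix \
    $model_name-bf16.gguf $model_name-IQ4_XS.gguf IQ4_XS
```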