BitNet b1.58 is a quantization method that improves the efficiency of LLMs by using ternary weights (-1, 0, 1), which corresponds to roughly 1.58 bits per weight (log2(3) ≈ 1.58). It requires no extra libraries. However, BitNet models can’t be quantized on the fly: they need to be pre-trained or fine-tuned with the quantization applied (it is a quantization-aware training technique). Once trained, these models are already quantized and available as packed versions on the Hub.
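
As a rough illustration of where the 1.58 bits come from, the sketch below implements the absmean ternary quantization described in the BitNet b1.58 paper: weights are scaled by their mean absolute value, rounded, and clipped to {-1, 0, 1}. The function name and tensor shapes are illustrative and not part of the Transformers API.

```py
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Illustrative sketch of absmean ternary quantization (BitNet b1.58 paper).

    Scales the weight tensor by its mean absolute value, rounds,
    then clips to [-1, 1], yielding ternary weights {-1, 0, 1}.
    """
    scale = w.abs().mean().clamp(min=eps)   # per-tensor absmean scale
    w_q = (w / scale).round().clamp(-1, 1)  # ternary weights
    return w_q, scale                       # dequantize as w_q * scale

w = torch.randn(4, 4)
w_q, scale = absmean_ternary_quantize(w)
print(w_q)  # entries are -1.0, 0.0, or 1.0
# Three possible states per weight -> log2(3) ≈ 1.58 bits per weight
```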
A quantized model can be loaded as follows:
```py
from transformers import AutoModelForCausalLM

# Load a pre-quantized BitNet checkpoint; the weights are already packed
path = "/path/to/model"
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto")
```
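
Once loaded, the model behaves like any other causal LM. A minimal generation example, reusing the placeholder path from above for the tokenizer:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(path)  # same placeholder path as above
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```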