Try VPTQ on Hugging Face! Try VPTQ on Google Colab! Learn more about VPTQ on arXiv!
Vector Post-Training Quantization (VPTQ) is a novel post-training quantization method that leverages vector quantization to achieve high accuracy on LLMs at extremely low bit-widths (<2-bit). VPTQ can compress 70B and even 405B models to 1-2 bits without retraining while maintaining high accuracy.
Inference support for VPTQ is released in the vptq
library. Make sure to install it to run the models:
pip install vptq
The library provides efficient kernels for NVIDIA/AMD GPU inference.
To run VPTQ models, simply load a model that has been quantized with VPTQ:
Run Llama 3.1 70B on an RTX 4090 (24 GB) at ~2 bits in real time
from transformers import AutoTokenizer, AutoModelForCausalLM
quantized_model = AutoModelForCausalLM.from_pretrained(
    "VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("VPTQ-community/Meta-Llama-3.1-70B-Instruct-v16-k65536-65536-woft")
input_ids = tokenizer("hello, it's me", return_tensors="pt").to("cuda")
out = quantized_model.generate(**input_ids, max_new_tokens=32, do_sample=False)
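To inspect the generated text, decode the output tokens. This is a minimal continuation of the snippet above, reusing the tokenizer and out variables defined there:

# Decode the first generated sequence, dropping special tokens
print(tokenizer.decode(out[0], skip_special_tokens=True))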
The VPTQ algorithm is available as an early release in the VPTQ repository; check out the tutorial there.
VPTQ achieves better accuracy and higher throughput with lower quantization overhead across models of different sizes. The following experimental results are for reference only; with reasonable parameters, VPTQ can achieve even better results, especially in terms of model accuracy and inference speed.
| Model | Bitwidth | W2 (ppl)↓ | C4 (ppl)↓ | AvgQA↑ | tok/s↑ | Mem (GB) | Cost/h↓ |
|---|---|---|---|---|---|---|---|
| LLaMA-2 7B | 2.02 | 6.13 | 8.07 | 58.2 | 39.9 | 2.28 | 2 |
| | 2.26 | 5.95 | 7.87 | 59.4 | 35.7 | 2.48 | 3.1 |
| LLaMA-2 13B | 2.02 | 5.32 | 7.15 | 62.4 | 26.9 | 4.03 | 3.2 |
| | 2.18 | 5.28 | 7.04 | 63.1 | 18.5 | 4.31 | 3.6 |
| LLaMA-2 70B | 2.07 | 3.93 | 5.72 | 68.6 | 9.7 | 19.54 | 19 |
| | 2.11 | 3.92 | 5.71 | 68.7 | 9.7 | 20.01 | 19 |
⚠️ The repository only provides the model quantization algorithm.
⚠️ The open-source community VPTQ-community provides models based on the technical report and quantization algorithm.
Quick Estimation of Model Bitwidth (Excluding Codebook Overhead):
Model Naming Convention: The model's name includes the vector length $v$, codebook (lookup table) size, and residual codebook size. For example, "Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft" is "Meta-Llama-3.1-70B-Instruct" quantized with:
- vector length $v = 8$
- codebook (lookup table) size $65536$ ($2^{16}$ centroids)
- residual codebook size $256$ ($2^{8}$ residual centroids)
Equivalent Bitwidth Calculation:
- index: $\log_2(65536) = 16$ bits per vector of length 8, i.e. $16 / 8 = 2$ bits per weight
- residual index: $\log_2(256) = 8$ bits per vector, i.e. $8 / 8 = 1$ bit per weight
- total: $2 + 1 = 3$ bits per weight
Model Size Estimation: 70B * 3 bits / 8 bits per Byte = 26.25 GB
Note: This estimate does not include the size of the codebook (lookup table), other parameter overheads, or the padding overhead for storing indices. For the detailed calculation method, please refer to Tech Report Appendix C.2.
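As a quick sanity check, the rule of thumb above can be reproduced in a few lines of Python. This is only a rough sketch of the estimate described in this section (the helper name is illustrative, and codebook, parameter, and padding overheads are ignored as noted):

import math

# Rough per-weight bitwidth and model size estimate, excluding codebook and padding overhead.
def estimate_vptq_size(vector_length, num_centroids, num_residual_centroids, num_params):
    index_bits = math.log2(num_centroids) / vector_length              # e.g. log2(65536) / 8 = 2 bits per weight
    residual_bits = math.log2(num_residual_centroids) / vector_length  # e.g. log2(256) / 8 = 1 bit per weight
    bitwidth = index_bits + residual_bits                              # e.g. 3 bits per weight
    size_gb = num_params * bitwidth / 8 / 1e9                          # bits -> bytes -> GB
    return bitwidth, size_gb

# "Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft": v=8, 65536 centroids, 256 residual centroids, 70B parameters
print(estimate_vptq_size(8, 65536, 256, 70e9))  # -> (3.0, 26.25)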