Model Details

This is meta-llama/Meta-Llama-3.1-8B quantized to 4-bit with AutoRound using symmetric quantization, released as kaitchup/Meta-Llama-3.1-8B-AutoRound-GPTQ-sym-4bit. The model was created, tested, and evaluated by The Kaitchup. It is compatible with the main inference frameworks, e.g., TGI and vLLM.
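For example, here is a minimal sketch of serving this checkpoint with vLLM; the prompt and sampling parameters are illustrative, not recommendations:

```python
from vllm import LLM, SamplingParams

# Load the quantized checkpoint; vLLM detects the GPTQ format
# from the quantization config stored in the repository.
llm = LLM(model="kaitchup/Meta-Llama-3.1-8B-AutoRound-GPTQ-sym-4bit")
sampling = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(
    ["Explain symmetric quantization in one sentence."], sampling
)
print(outputs[0].outputs[0].text)
```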

Details on the quantization process and evaluation: Mistral-NeMo: 4.1x Smaller with Quantized Minitron
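For reference, a minimal sketch of how such a checkpoint can be produced with the auto-round library; bits=4 and sym=True match this card, while the group size and export format below are assumed defaults, not necessarily the exact settings used for this model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "meta-llama/Meta-Llama-3.1-8B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# bits=4 and sym=True match the card; group_size=128 is an assumed default.
autoround = AutoRound(model, tokenizer, bits=4, sym=True, group_size=128)
autoround.quantize()

# Export in GPTQ format so the checkpoint loads in TGI and vLLM
# (assumed here; auto-round also supports other export formats).
autoround.save_quantized(
    "Meta-Llama-3.1-8B-AutoRound-GPTQ-sym-4bit", format="auto_gptq"
)
```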

Safetensors model size: 1.99B params · tensor types: FP16, I32
