---
base_model: AtlaAI/Selene-1-Mini-Llama-3.1-8B
library_name: transformers
language:
- en
- de
- fr
- it
- pt
- es
pipeline_tag: text-generation
tags:
- llama
- atla
- evaluation
- llm-as-a-judge
- meta
- conversational
- lm-judge
- llama-cpp
- gptq
license: llama3.1
---
Playground | Technical report | GitHub | Sign up for the API
# AtlaAI/Selene-1-Mini-Llama-3.1-8B-GPTQ-W4A16
This model was quantised to 4-bit weights with 16-bit activations (W4A16) using GPTQ, starting from AtlaAI/Selene-1-Mini-Llama-3.1-8B.
Quantisation was performed with vLLM's llm-compressor library (https://docs.vllm.ai/en/latest/features/quantization/int4.html).
Refer to the original model card for more details on the model.
This quantisation was calibrated using a sample of 512 datapoints from the data used to train Selene-1-Mini. As a result, our quantised models show minimal performance degradation, losing <0.5% overall across benchmarks!
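For reference, the one-shot GPTQ flow with llm-compressor looks roughly like the sketch below. This is an illustrative sketch, not the exact script used for this release: the calibration dataset shown (`open_platypus`) and the sequence length are placeholders, since the actual calibration used 512 samples drawn from Selene-1-Mini's training data, and exact import paths and arguments may differ between llm-compressor versions.

```python
# Illustrative sketch only, not the exact script used for this release.
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot  # newer releases: `from llmcompressor import oneshot`

# GPTQ recipe: 4-bit weights, 16-bit activations (W4A16) on all Linear layers,
# keeping the lm_head in full precision.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="AtlaAI/Selene-1-Mini-Llama-3.1-8B",
    dataset="open_platypus",      # placeholder; the real calibration data came from Selene-1-Mini's training set
    recipe=recipe,
    max_seq_length=2048,          # assumption; not stated in this card
    num_calibration_samples=512,  # matches the 512 calibration datapoints described above
    output_dir="Selene-1-Mini-Llama-3.1-8B-GPTQ-W4A16",
)
```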
For reference, a GPTQ-quantised 8-bit Llama-3.1-8B shows ~1.5% degradation across benchmarks.
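The quantised weights load like any other vLLM model. A minimal usage sketch is shown below; the prompt is a stand-in, and in practice you should format evaluation prompts using the templates from the original Selene-1-Mini model card.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="AtlaAI/Selene-1-Mini-Llama-3.1-8B-GPTQ-W4A16")
sampling = SamplingParams(temperature=0.0, max_tokens=512)

# Stand-in prompt: use the judge prompt templates from the original
# Selene-1-Mini model card for real evaluations.
prompt = "Evaluate the following response for factual accuracy: ..."
outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```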