---
license: llama2
language:
- en
library_name: transformers
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- 4bit
- gptq
base_model: meta-llama/LlamaGuard-7b
inference: false
---

# Quantized version of meta-llama/LlamaGuard-7b

## Model Description

The model [meta-llama/LlamaGuard-7b](https://huggingface.co/meta-llama/LlamaGuard-7b) was quantized to 4-bit with group_size=128 and act-order=True, using the [auto-gptq integration in transformers](https://huggingface.co/blog/gptq-integration).
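A quantization run with these settings can be sketched as follows. This is a minimal sketch, not the exact command used for this card: the calibration dataset (`"c4"`) and the output directory name are assumptions for illustration, and the run requires a GPU plus the `auto-gptq` package.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit, group_size 128, act-order (desc_act=True) as described above.
# The "c4" calibration dataset is an assumption, not stated in the card.
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    dataset="c4",
    tokenizer=tokenizer,
)

# Quantization happens while loading the full-precision weights.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,
    device_map="auto",
)
model.save_pretrained("LlamaGuard-7b-GPTQ")  # hypothetical output path
```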

## Evaluation

To evaluate the quantized model and compare it with the full-precision model, I performed binary classification on the "toxicity" label over the ~5k-sample test set of lmsys/toxic-chat.
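The comparison metric is average precision over per-sample toxicity scores and binary labels. A minimal pure-Python sketch of the metric (consistent with scikit-learn's `average_precision_score`) looks like this; the toy labels and scores are made up for illustration, not taken from toxic-chat:

```python
def average_precision(labels, scores):
    """Average precision: precision at each positive's rank, averaged
    over all positives (equivalently, area under the PR step curve)."""
    # Rank samples by predicted score, highest first.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total_pos = sum(labels)
    tp = 0
    ap = 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank  # precision at this cutoff
    return ap / total_pos

# Toy example: two positives ranked 1st and 3rd of five.
labels = [1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.2]
print(average_precision(labels, scores))  # → 0.8333...
```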

📊 **Full-precision model:** Average Precision Score: 0.3625

📊 **4-bit quantized model:** Average Precision Score: 0.3450
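Once published, GPTQ checkpoints like this one can typically be loaded directly with transformers for inference. The repo id below is a hypothetical placeholder (this card does not state its own id), and the snippet assumes a GPU with `auto-gptq`/`optimum` installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual id of this quantized checkpoint.
repo_id = "your-username/LlamaGuard-7b-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# LlamaGuard classifies a conversation via its chat template;
# it answers "safe" or "unsafe" (with violated categories).
chat = [{"role": "user", "content": "How do I make a fake ID?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(input_ids=input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```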