---
library_name: transformers
tags:
- aqlm
base_model:
- codellama/CodeLlama-7b-hf
base_model_relation: quantized
---

# Quantizing Large Language Models for Code Generation: A Differentiated Replication

## Table of Contents
1. [Introduction](#1-introduction)
2. [Model details](#2-model-details)
3. [Experiments](#3-experiments)
4. [Replication](#4-replication)

## 1. Introduction
HuggingFace repository containing the quantized models from the paper _"Quantizing Large Language Models for Code Generation: A Differentiated Replication"_.

In this study, we evaluate the performance of compressed Deep Learning models on the code generation task. Specifically, we quantize code models such as CodeLlama and DeepSeek Coder at different precision levels, namely 8, 4, 3, and 2 bits per model parameter, using [AQLM](https://github.com/Vahe1994/AQLM) (Additive Quantization of Language Models), a state-of-the-art quantization technique for extreme model compression.
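The additive-quantization idea behind AQLM can be sketched in a few lines: each group of weights is approximated by the sum of one code vector chosen from each codebook, so only the small integer indices need to be stored. The snippet below is our own toy illustration with made-up codebooks; the real AQLM method also learns per-group scales and optimizes codes and codebooks jointly, which is omitted here.

```python
# Toy sketch of additive quantization: a group of `group_size` weights is
# reconstructed as the SUM of one code vector taken from each codebook.
# Codebooks and codes here are hypothetical; this is not the AQLM codebase.

def decode_group(codebooks, codes):
    """Reconstruct one weight group from its per-codebook code indices.

    codebooks: list of M codebooks, each a list of 2^bits vectors (lists of floats)
    codes:     list of M indices, one per codebook
    """
    group_size = len(codebooks[0][0])
    group = [0.0] * group_size
    for codebook, index in zip(codebooks, codes):
        vector = codebook[index]
        group = [w + v for w, v in zip(group, vector)]
    return group

# Two codebooks with 4 entries each (2-bit indices) and group size 2.
codebooks = [
    [[0.0, 0.0], [0.5, -0.5], [1.0, 0.0], [0.0, 1.0]],
    [[0.0, 0.0], [0.25, 0.25], [-0.25, 0.0], [0.0, -0.25]],
]
print(decode_group(codebooks, [1, 1]))  # [0.75, -0.25]
```

Storing two 2-bit indices per pair of weights in this toy setup costs 2 bits per weight, which is how such extreme compression ratios become possible.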

## 2. Model details
The complete list of models used in this study is available in our [model collection](https://huggingface.co/collections/Devy1/quantization-for-code-generation-67c9b83b34ed9a5a84fb714d), organized by order of appearance in the paper.

We name the models as follows:

**\<base-model\>**-AQLM-**\<precision\>**-**\<calibration\>**-**\<finetuned?\>**-**\<hyperparameters\>**

- **\<base-model\>**: the base model that was quantized.
- **\<precision\>**: the average number of bits per model weight.
- **\<calibration\>**: the type of calibration performed: 'rnd' (random), 'code' (code-specific), or 'mixed' (both code and technical language).
- **\<finetuned?\>**: if the model was fine-tuned after quantization, this tag appears as "-finetuned"; otherwise, it is omitted.
- **\<hyperparameters\>**: the number of codebooks and the codebook size used for quantization, expressed as **\<codebooks\>**x**\<bits\>**.

For example, the model **Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15** has the following features:

1. It is a compressed version of [CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf).
2. On average, each parameter is represented by **2 bits**.
3. We used a **random** sample of the RedPajama dataset for the calibration process.
4. The model was **not fine-tuned** after quantization (the "-finetuned" tag does not appear after the calibration dataset type).
5. We used **1** codebook of **15** bits to quantize the model. The default group size for each model is 8.
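As a rough sanity check on these names, the index bits spent per weight can be estimated as codebooks × codebook bits ÷ group size. The sketch below is an illustrative approximation that ignores the storage of scales and of the codebooks themselves, which is why the estimates fall slightly below the nominal precision in the model names.

```python
def approx_bits_per_weight(num_codebooks, codebook_bits, group_size=8):
    """Index bits per weight: each group of `group_size` weights stores one
    `codebook_bits`-bit index per codebook. Overhead from scales and the
    codebooks themselves is ignored, so real averages are slightly higher."""
    return num_codebooks * codebook_bits / group_size

print(approx_bits_per_weight(1, 15))  # 1.875 -> named "2bit" (1x15)
print(approx_bits_per_weight(2, 12))  # 3.0   -> named "3bit" (2x12)
print(approx_bits_per_weight(2, 15))  # 3.75  -> named "4bit" (2x15)
print(approx_bits_per_weight(4, 15))  # 7.5   -> named "8bit" (4x15)
```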

More information about the quantization process and hyperparameters can be found in our paper and in the `config.json` file of this repository.
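Because the scheme is regular, repo names can be parsed mechanically. The following parser is a hypothetical convenience sketch of our naming convention, not part of the released scripts:

```python
import re

# Splits a repo name following the scheme
# <base-model>-AQLM-<precision>-<calibration>[-finetuned]-<codebooks>x<bits>.
NAME_RE = re.compile(
    r"^(?P<base>.+)-AQLM-(?P<precision>\d+bit)-(?P<calibration>rnd|code|mixed)"
    r"(?P<finetuned>-finetuned)?-(?P<codebooks>\d+)x(?P<bits>\d+)$"
)

def parse_model_name(repo_id):
    name = repo_id.split("/")[-1]  # drop the "Devy1/" namespace
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not an AQLM model name: {name}")
    return {
        "base_model": m["base"],
        "precision": m["precision"],
        "calibration": m["calibration"],
        "finetuned": m["finetuned"] is not None,
        "codebooks": int(m["codebooks"]),
        "codebook_bits": int(m["bits"]),
    }

info = parse_model_name("Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15")
print(info["base_model"], info["precision"], info["finetuned"])
# CodeLlama-7b-hf 2bit False
```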

## 3. Experiments
Below, we present the code generation performance of each quantized model across the different experiments. Performance is computed on Python and Java using the [MultiPL-E](https://github.com/nuprl/MultiPL-E) and [McEval](https://mceval.github.io/) benchmarks. More details on the research approach can be found in our paper.

Results are listed by research question and benchmark. Clicking on a "Precision" value will take you to the corresponding model.
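The pass@1 scores in the tables follow the standard unbiased pass@k estimator of Chen et al. (2021), which HumanEval-style benchmarks such as MultiPL-E commonly build on. A minimal sketch, assuming n generations per problem of which c pass the tests:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn from n generations (c of them correct) passes the tests.
    pass@k = 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 20 generations per problem, 6 of which are correct:
print(round(pass_at_k(20, 6, 1), 4))   # 0.3  (equals c/n for k=1)
print(round(pass_at_k(20, 6, 10), 4))  # 0.9946
```

Per-benchmark pass@1 is then the mean of this quantity over all problems.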

### RQ1. How does low-bit quantization affect the model’s code generation ability?

#### MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------:|-------------------------|-----------:|---------------:|-------------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 29.8 | 32.2 |
| | | [8-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-rnd-4x15) | 7.47 GB | 29.7 | 31.6 |
| | | [4-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-rnd-2x15) | 4.00 GB | 29.1 | 30.7 |
| | | [3-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 24.3 | 26.5 |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 16.4 | 14.1 |
| DeepSeek-Coder - Base| 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 45.8 | 41.4 |
| | | [8-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-rnd-4x15) | 7.48 GB | 46.2 | 41.9 |
| | | [4-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-rnd-2x15) | 4.00 GB | 45.2 | 41.4 |
| | | [3-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 41.1 | 37.7 |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 27.6 | 23.2 |

#### McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------:|-------------------------|-----------:|---------------:|-------------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 12.9 | 29.3 |
| | | [8-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-rnd-4x15) | 7.47 GB | 12.9 | 29.2 |
| | | [4-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-rnd-2x15) | 4.00 GB | 15.2 | 25.3 |
| | | [3-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 10.0 | 21.3 |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 5.6 | 11.4 |
| DeepSeek-Coder - Base| 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 41.8 | 42.6 |
| | | [8-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-rnd-4x15) | 7.48 GB | 42.5 | 42.8 |
| | | [4-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-rnd-2x15) | 4.00 GB | 40.7 | 45.9 |
| | | [3-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 36.2 | 34.5 |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 13.7 | 23.6 |


### RQ1. Impact of end-to-end fine-tuning after quantization

#### MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------:|-------------------------|-----------:|---------------:|-------------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 29.8 | 32.2 |
| | | [3-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 24.3 | 26.5 |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 16.4 | 14.1 |
| | | [3-bit + Fine-tuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-finetuned-2x12) | 3.80 GB | <span style="color:red;">▼</span> **24.0** | <span style="color:green;">▲</span> **27.8** |
| | | [2-bit + Fine-tuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-finetuned-1x15) | 2.26 GB | <span style="color:green;">▲</span> **19.9** | <span style="color:green;">▲</span> **19.0** |
| DeepSeek-Coder - Base| 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 45.8 | 41.4 |
| | | [3-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 41.1 | 37.7 |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 27.6 | 23.2 |
| | | [3-bit + Fine-tuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-finetuned-2x12) | 3.80 GB | <span style="color:green;">▲</span> **41.8** | <span style="color:red;">▼</span> **37.7** |
| | | [2-bit + Fine-tuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-finetuned-1x15) | 2.27 GB | <span style="color:green;">▲</span> **33.0** | <span style="color:green;">▲</span> **26.8** |

#### McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------:|-------------------------|-----------:|---------------:|-------------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 12.9 | 29.3 |
| | | [3-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 10.0 | 21.3 |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 5.6 | 11.4 |
| | | [3-bit + Fine-tuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-finetuned-2x12) | 3.80 GB | <span style="color:green;">▲</span> **10.8** | <span style="color:green;">▲</span> **22.0** |
| | | [2-bit + Fine-tuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-finetuned-1x15) | 2.26 GB | <span style="color:green;">▲</span> **7.6** | <span style="color:green;">▲</span> **14.3** |
| DeepSeek-Coder - Base| 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 41.8 | 42.6 |
| | | [3-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 36.2 | 34.5 |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 13.7 | 23.6 |
| | | [3-bit + Fine-tuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-finetuned-2x12) | 3.80 GB | <span style="color:red;">▼</span> **35.6** | <span style="color:red;">▼</span> **32.4** |
| | | [2-bit + Fine-tuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-finetuned-1x15) | 2.27 GB | <span style="color:green;">▲</span> **20.2** | <span style="color:green;">▲</span> **27.0** |


### RQ2. What impact does the calibration dataset have on model performance?

#### MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------|-------------------------|----------:|--------------:|------------:|
| CodeLlama - Base | 7B | [Float16 - Baseline](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 29.8 | 32.2 |
| | | [8-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-rnd-4x15) | 7.47 GB | 29.7 | 31.6 |
| | | [8-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-mixed-4x15) | 7.47 GB | <span style="color:red;">▼</span> 29.7 | <span style="color:green;">▲</span> 32.3 |
| | | [8-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-code-4x15) | 7.47 GB | <span style="color:red;">▼</span> 29.2 | <span style="color:green;">▲</span> 32.0 |
| | | [4-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-rnd-2x15) | 4.00 GB | 29.1 | 30.7 |
| | | [4-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-mixed-2x15) | 4.00 GB | <span style="color:red;">▼</span> 29.0 | <span style="color:green;">▲</span> 31.4 |
| | | [4-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-code-2x15) | 4.00 GB | <span style="color:green;">▲</span> 30.2 | <span style="color:red;">▼</span> 29.8 |
| | | [3-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 24.3 | 26.5 |
| | | [3-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-mixed-2x12) | 3.80 GB | <span style="color:green;">▲</span> 28.2 | <span style="color:green;">▲</span> 28.4 |
| | | [3-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-code-2x12) | 3.80 GB | <span style="color:green;">▲</span> 27.0 | <span style="color:green;">▲</span> 28.0 |
| | | [2-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 16.4 | 14.1 |
| | | [2-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-1x15) | 2.26 GB | <span style="color:green;">▲</span> 23.9 | <span style="color:green;">▲</span> 21.5 |
| | | [2-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-code-1x15) | 2.26 GB | <span style="color:green;">▲</span> 24.1 | <span style="color:green;">▲</span> 19.4 |
| DeepSeek-Coder - Base| 7B | [Float16 - Baseline](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 45.8 | 41.4 |
| | | [8-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-rnd-4x15) | 7.48 GB | 46.2 | 41.9 |
| | | [8-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-mixed-4x15) | 7.48 GB | <span style="color:red;">▼</span> 45.4 | <span style="color:green;">▲</span> 43.2 |
| | | [8-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-code-4x15) | 7.48 GB | <span style="color:red;">▼</span> 45.9 | <span style="color:red;">▼</span> 41.7 |
| | | [4-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-rnd-2x15) | 4.00 GB | 45.2 | 41.4 |
| | | [4-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-mixed-2x15) | 4.00 GB | <span style="color:red;">▼</span> 44.5 | <span style="color:green;">▲</span> 41.8 |
| | | [4-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-code-2x15) | 4.00 GB | <span style="color:red;">▼</span> 44.2 | <span style="color:red;">▼</span> 40.6 |
| | | [3-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 41.1 | 37.7 |
| | | [3-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-mixed-2x12) | 3.80 GB | <span style="color:green;">▲</span> 43.7 | <span style="color:green;">▲</span> 39.1 |
| | | [3-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-code-2x12) | 3.80 GB | <span style="color:green;">▲</span> 42.5 | <span style="color:green;">▲</span> 38.7 |
| | | [2-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 27.6 | 23.2 |
| | | [2-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-1x15) | 2.27 GB | <span style="color:green;">▲</span> 35.7 | <span style="color:green;">▲</span> 27.4 |
| | | [2-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-code-1x15) | 2.27 GB | <span style="color:green;">▲</span> 34.8 | <span style="color:green;">▲</span> 27.5 |

#### McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------|-------------------------|----------:|--------------:|------------:|
| CodeLlama - Base | 7B | [Float16 - Baseline](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 12.9 | 29.3 |
| | | [8-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-rnd-4x15) | 7.47 GB | 12.9 | 29.2 |
| | | [8-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-mixed-4x15) | 7.47 GB | <span style="color:green;">▲</span> 13.7 | <span style="color:red;">▼</span> 28.6 |
| | | [8-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-code-4x15) | 7.47 GB | <span style="color:red;">▼</span> 12.3 | <span style="color:green;">▲</span> 29.5 |
| | | [4-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-rnd-2x15) | 4.00 GB | 15.2 | 25.3 |
| | | [4-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-mixed-2x15) | 4.00 GB | <span style="color:red;">▼</span> 13.0 | <span style="color:green;">▲</span> 30.3 |
| | | [4-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-code-2x15) | 4.00 GB | <span style="color:red;">▼</span> 11.1 | <span style="color:green;">▲</span> 25.8 |
| | | [3-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 10.0 | 21.3 |
| | | [3-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-mixed-2x12) | 3.80 GB | <span style="color:green;">▲</span> 12.3 | <span style="color:green;">▲</span> 25.5 |
| | | [3-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-code-2x12) | 3.80 GB | <span style="color:green;">▲</span> 10.8 | <span style="color:red;">▼</span> 19.9 |
| | | [2-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 5.6 | 11.4 |
| | | [2-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-1x15) | 2.26 GB | <span style="color:green;">▲</span> 11.1 | <span style="color:green;">▲</span> 12.8 |
| | | [2-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-code-1x15) | 2.26 GB | <span style="color:green;">▲</span> 6.1 | <span style="color:green;">▲</span> 12.8 |
| DeepSeek-Coder - Base| 7B | [Float16 - Baseline](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 41.8 | 42.6 |
| | | [8-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-rnd-4x15) | 7.48 GB | 42.5 | 42.8 |
| | | [8-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-mixed-4x15) | 7.48 GB | <span style="color:green;">▲</span> 42.7 | <span style="color:red;">▼</span> 42.5 |
| | | [8-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-code-4x15) | 7.48 GB | <span style="color:red;">▼</span> 41.3 | <span style="color:red;">▼</span> 42.7 |
| | | [4-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-rnd-2x15) | 4.00 GB | 40.7 | 45.9 |
| | | [4-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-mixed-2x15) | 4.00 GB | <span style="color:red;">▼</span> 39.0 | <span style="color:red;">▼</span> 42.8 |
| | | [4-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-code-2x15) | 4.00 GB | <span style="color:red;">▼</span> 39.8 | <span style="color:green;">▲</span> 46.3 |
| | | [3-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 36.2 | 34.5 |
| | | [3-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-mixed-2x12) | 3.80 GB | <span style="color:red;">▼</span> 35.5 | <span style="color:green;">▲</span> 42.8 |
| | | [3-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-code-2x12) | 3.80 GB | <span style="color:green;">▲</span> 36.5 | <span style="color:green;">▲</span> 45.6 |
| | | [2-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 13.7 | 23.6 |
| | | [2-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-1x15) | 2.27 GB | <span style="color:green;">▲</span> 26.2 | <span style="color:green;">▲</span> 29.1 |
| | | [2-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-code-1x15) | 2.27 GB | <span style="color:green;">▲</span> 24.6 | <span style="color:green;">▲</span> 28.0 |


### RQ3. How does extreme quantization affect model accuracy across different model sizes?

#### MultiPL-E benchmark
| Model | Params | Precision | Size (GB) | Python pass@1 | Dec (%) | Java pass@1 | Dec (%) |
|----------------------|--------|-------------------------|----------:|--------------:|--------:|------------:|--------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 | 29.8 | --- | 32.2 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-1x15) | 2.26 | 23.9 | -19.8 | 21.5 | -33.2 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-finetuned-1x15) | 2.26 | 25.5 | -14.4 | 26.5 | -17.7 |
| | 13B | [Float16](https://huggingface.co/codellama/CodeLlama-13b-hf) | 24.25 | 34.3 | --- | 38.3 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-13b-hf-AQLM-2bit-mixed-1x15) | 3.98 | 30.9 | -9.9 | 27.7 | -27.7 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-13b-hf-AQLM-2bit-mixed-finetuned-1x15) | 3.98 | 30.1 | -12.2 | 32.8 | -14.4 |
| | 34B | [Float16](https://huggingface.co/codellama/CodeLlama-34b-hf) | 62.74 | 41.9 | --- | 44.1 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-34b-hf-AQLM-2bit-mixed-1x15) | 9.54 | 37.1 | -11.5 | 32.7 | -25.9 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-34b-hf-AQLM-2bit-mixed-finetuned-1x15) | 9.54 | 36.0 | -14.1 | 36.1 | -18.1 |
| DeepSeek-Coder - Base| 1B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) | 2.57 | 28.4 | --- | 28.8 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-1.3b-base-AQLM-2bit-mixed-1x14) | 0.61 | 13.9 | -51.1 | 6.6 | -77.1 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-1.3b-base-AQLM-2bit-mixed-finetuned-1x14) | 0.61 | 21.7 | -23.6 | 14.7 | -49.0 |
| | 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 | 45.8 | --- | 41.4 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-1x15) | 2.27 | 35.7 | -22.1 | 27.4 | -33.8 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-finetuned-1x15) | 2.27 | 36.4 | -20.5 | 32.8 | -20.8 |
| | 33B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-33b-base) | 62.16 | 52.1 | --- | 47.3 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-33b-base-AQLM-2bit-mixed-1x15) | 9.38 | 43.4 | -16.7 | 34.5 | -27.1 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-33b-base-AQLM-2bit-mixed-finetuned-1x15) | 9.38 | 43.0 | -17.5 | 38.7 | -18.2 |

#### McEval benchmark
| Model | Params | Precision | Size (GB) | Python pass@1 | Dec (%) | Java pass@1 | Dec (%) |
|----------------------|--------|-------------------------|----------:|--------------:|--------:|------------:|--------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 | 12.9 | --- | 29.3 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-1x15) | 2.26 | 11.1 | -14.0 | 12.8 | -56.3 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-finetuned-1x15) | 2.26 | 13.0 | -0.8 | 18.3 | -37.5 |
| | 13B | [Float16](https://huggingface.co/codellama/CodeLlama-13b-hf) | 24.25 | 18.9 | --- | 40.9 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-13b-hf-AQLM-2bit-mixed-1x15) | 3.98 | 9.4 | -50.3 | 22.3 | -45.5 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-13b-hf-AQLM-2bit-mixed-finetuned-1x15) | 3.98 | 10.4 | -45.0 | 27.8 | -32.0 |
| | 34B | [Float16](https://huggingface.co/codellama/CodeLlama-34b-hf) | 62.74 | 29.0 | --- | 39.2 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-34b-hf-AQLM-2bit-mixed-1x15) | 9.54 | 17.6 | -39.3 | 25.2 | -35.7 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-34b-hf-AQLM-2bit-mixed-finetuned-1x15) | 9.54 | 19.0 | -34.5 | 31.6 | -19.4 |
| DeepSeek-Coder - Base| 1B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) | 2.57 | 23.8 | --- | 42.0 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-1.3b-base-AQLM-2bit-mixed-1x14) | 0.61 | 4.4 | -81.5 | 8.5 | -79.8 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-1.3b-base-AQLM-2bit-mixed-finetuned-1x14) | 0.61 | 6.9 | -71.0 | 15.5 | -63.1 |
| | 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 | 41.8 | --- | 42.6 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-1x15) | 2.27 | 26.2 | -37.3 | 29.1 | -31.7 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-finetuned-1x15) | 2.27 | 30.1 | -28.0 | 31.0 | -27.2 |
| | 33B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-33b-base) | 62.16 | 55.5 | --- | 57.0 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-33b-base-AQLM-2bit-mixed-1x15) | 9.38 | 36.9 | -33.5 | 39.2 | -31.2 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-33b-base-AQLM-2bit-mixed-finetuned-1x15) | 9.38 | 39.8 | -28.3 | 44.0 | -22.8 |
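The Dec (%) columns report the relative change in pass@1 from the Float16 baseline of the same model size, i.e. (quantized − baseline) ÷ baseline × 100, with negative values indicating a drop. A minimal sketch reproducing the CodeLlama 7B MultiPL-E Python entries:

```python
def relative_decrease(baseline, quantized):
    """Dec (%) as used in the tables above: relative change from the
    Float16 baseline, in percent (negative = performance drop)."""
    return (quantized - baseline) / baseline * 100

# CodeLlama 7B, MultiPL-E Python: 29.8 (Float16) -> 23.9 (2-bit)
print(round(relative_decrease(29.8, 23.9), 1))  # -19.8
# ... and 25.5 after fine-tuning the 2-bit model
print(round(relative_decrease(29.8, 25.5), 1))  # -14.4
```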


## 4. Replication
The scripts used to quantize and evaluate the models are available in our GitHub repository ([link](https://github.com/Devy99/lowbit-quantization)).

Model predictions, statistical results, and datasets are available in our Zenodo repository ([link](https://doi.org/10.5281/zenodo.13752774)).