Quantizing Large Language Models for Code Generation: A Differentiated Replication
1. Introduction
HuggingFace repository containing the quantized models from the paper "Quantizing Large Language Models for Code Generation: A Differentiated Replication".
In this study, we evaluate the performance of compressed deep learning models on the code generation task. Specifically, we quantize code models such as CodeLlama and DeepSeek Coder at different precision levels, namely 8, 4, 3, and 2 bits per model parameter, using AQLM (Additive Quantization of Language Models), a state-of-the-art quantization technique for extreme model compression.
2. Model details
The complete list of models used in this study is available in our model collection, which is organized in order of appearance in the paper.
More specifically, we named the models as follows:
<base-model>-AQLM-<precision>-<calibration>-<finetuned?>-<hyperparameters>
- <base-model>: the base model that was quantized.
- <precision>: the average number of bits per model weight.
- <calibration>: the type of calibration performed. It can be 'rnd' (random), 'code' (code-specific), or 'mixed' (both code and technical language).
- <finetuned?>: if the model was fine-tuned after quantization, this tag will appear as "-finetuned". Otherwise it will not be present.
- <hyperparameters>: number of codebooks and codebook size used for quantization. Expressed in the format <codebooks>x<bits>.
For example, the model Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15 has the following features:
- This model is a compressed version of CodeLlama-7b-hf.
- On average, each parameter is represented by 2 bits.
- We used a (random) sample of the RedPajama dataset for the calibration process.
- The model was not fine-tuned after quantization (because the -finetuned tag does not appear after the calibration dataset type).
- We used 1 codebook of 15 bits to quantize the model. The default group size for all models is 8 (a rough bit-budget estimate is sketched after this list).
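As a back-of-the-envelope illustration (our own sketch, not part of the paper's procedure), the nominal precision follows from the <codebooks>x<bits> setting and the group size: each group of 8 weights is encoded with one code per codebook, so the average code budget is roughly codebooks × bits ÷ group size bits per weight, with per-group scales and non-quantized layers making up the small remainder.

```python
# Rough bit-budget estimate implied by the <codebooks>x<bits> naming (illustrative only).
# It ignores per-group scales, codebook storage, and layers kept in higher precision
# (e.g. embeddings), which is why the on-disk sizes reported below are slightly larger.

def approx_bits_per_weight(codebooks: int, codebook_bits: int, group_size: int = 8) -> float:
    """Average code bits per weight: each group of `group_size` weights gets one code per codebook."""
    return codebooks * codebook_bits / group_size

print(approx_bits_per_weight(1, 15))  # 1x15 -> 1.875, the "2-bit" configuration used above
print(approx_bits_per_weight(4, 15))  # 4x15 -> 7.5, the "8-bit" configuration
```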
More information about the quantization process and hyperparameters can be found in our paper and in the config.json file from this repository.
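For convenience, here is a minimal loading sketch (ours, not from the paper). It assumes the aqlm inference kernels are installed alongside a recent transformers release and a CUDA GPU, and uses the 2-bit example model named above; adjust the repository id for any other model in the collection.

```python
# Minimal sketch: loading and prompting one of the AQLM-quantized checkpoints.
# Assumes `pip install aqlm[gpu] transformers accelerate torch` and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15"  # any model from the collection
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```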
3. Experiments
Below, we present the code generation performance of each quantized model across the different experiments. Performance is computed for Python and Java using the MultiPL-E and McEval benchmarks. More details on the research approach can be found in our paper.
Results are grouped by research question and benchmark. Clicking on a "Precision" value redirects you to the corresponding model.
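As a reminder of the metric (the sample counts below are illustrative, not the ones used in the paper), pass@k is computed with the standard unbiased estimator from HumanEval-style evaluations, which MultiPL-E follows:

```python
# Unbiased pass@k estimator (Chen et al., 2021) used by HumanEval-style benchmarks.
# n = samples generated per problem, c = samples that pass the unit tests.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from the n generations is correct."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative numbers: 10 samples for one problem, 3 of them passing -> pass@1 = 0.3
print(pass_at_k(n=10, c=3, k=1))
```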
RQ1. How does low-bit quantization affect the model’s code generation ability?
MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|---|---|---|---|---|---|
| CodeLlama - Base | 7B | Float16 | 13.48 GB | 29.8 | 32.2 |
| | | 8-bit | 7.47 GB | 29.7 | 31.6 |
| | | 4-bit | 4.00 GB | 29.1 | 30.7 |
| | | 3-bit | 3.80 GB | 24.3 | 26.5 |
| | | 2-bit | 2.26 GB | 16.4 | 14.1 |
| DeepSeek-Coder - Base | 7B | Float16 | 13.48 GB | 45.8 | 41.4 |
| | | 8-bit | 7.48 GB | 46.2 | 41.9 |
| | | 4-bit | 4.00 GB | 45.2 | 41.4 |
| | | 3-bit | 3.80 GB | 41.1 | 37.7 |
| | | 2-bit | 2.27 GB | 27.6 | 23.2 |
McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|---|---|---|---|---|---|
| CodeLlama - Base | 7B | Float16 | 13.48 GB | 12.9 | 29.3 |
| | | 8-bit | 7.47 GB | 12.9 | 29.2 |
| | | 4-bit | 4.00 GB | 15.2 | 25.3 |
| | | 3-bit | 3.80 GB | 10.0 | 21.3 |
| | | 2-bit | 2.26 GB | 5.6 | 11.4 |
| DeepSeek-Coder - Base | 7B | Float16 | 13.48 GB | 41.8 | 42.6 |
| | | 8-bit | 7.48 GB | 42.5 | 42.8 |
| | | 4-bit | 4.00 GB | 40.7 | 45.9 |
| | | 3-bit | 3.80 GB | 36.2 | 34.5 |
| | | 2-bit | 2.27 GB | 13.7 | 23.6 |
RQ1. Impact of end-to-end fine-tuning after quantization
MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|---|---|---|---|---|---|
| CodeLlama - Base | 7B | Float16 | 13.48 GB | 29.8 | 32.2 |
| | | 3-bit | 3.80 GB | 24.3 | 26.5 |
| | | 2-bit | 2.26 GB | 16.4 | 14.1 |
| | | 3-bit + Fine-tuning | 3.80 GB | ▼ 24.0 | ▲ 27.8 |
| | | 2-bit + Fine-tuning | 2.26 GB | ▲ 19.9 | ▲ 19.0 |
| DeepSeek-Coder - Base | 7B | Float16 | 13.48 GB | 45.8 | 41.4 |
| | | 3-bit | 3.80 GB | 41.1 | 37.7 |
| | | 2-bit | 2.27 GB | 27.6 | 23.2 |
| | | 3-bit + Fine-tuning | 3.80 GB | ▲ 41.8 | ▼ 37.7 |
| | | 2-bit + Fine-tuning | 2.27 GB | ▲ 33.0 | ▲ 26.8 |
McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|---|---|---|---|---|---|
| CodeLlama - Base | 7B | Float16 | 13.48 GB | 12.9 | 29.3 |
| | | 3-bit | 3.80 GB | 10.0 | 21.3 |
| | | 2-bit | 2.26 GB | 5.6 | 11.4 |
| | | 3-bit + Fine-tuning | 3.80 GB | ▲ 10.8 | ▲ 22.0 |
| | | 2-bit + Fine-tuning | 2.26 GB | ▲ 7.6 | ▲ 14.3 |
| DeepSeek-Coder - Base | 7B | Float16 | 13.48 GB | 41.8 | 42.6 |
| | | 3-bit | 3.80 GB | 36.2 | 34.5 |
| | | 2-bit | 2.27 GB | 13.7 | 23.6 |
| | | 3-bit + Fine-tuning | 3.80 GB | ▼ 35.6 | ▼ 32.4 |
| | | 2-bit + Fine-tuning | 2.27 GB | ▲ 20.2 | ▲ 27.0 |
RQ2. What impact does the calibration dataset have on model performance?
MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|---|---|---|---|---|---|
| CodeLlama - Base | 7B | Float16 - Baseline | 13.48 GB | 29.8 | 32.2 |
| | | 8-bit with Random samples | 7.47 GB | 29.7 | 31.6 |
| | | 8-bit with Mixed samples | 7.47 GB | ▼ 29.7 | ▲ 32.3 |
| | | 8-bit with Code samples | 7.47 GB | ▼ 29.2 | ▲ 32.0 |
| | | 4-bit with Random samples | 4.00 GB | 29.1 | 30.7 |
| | | 4-bit with Mixed samples | 4.00 GB | ▼ 29.0 | ▲ 31.4 |
| | | 4-bit with Code samples | 4.00 GB | ▲ 30.2 | ▼ 29.8 |
| | | 3-bit with Random samples | 3.80 GB | 24.3 | 26.5 |
| | | 3-bit with Mixed samples | 3.80 GB | ▲ 28.2 | ▲ 28.4 |
| | | 3-bit with Code samples | 3.80 GB | ▲ 27.0 | ▲ 28.0 |
| | | 2-bit with Random samples | 2.26 GB | 16.4 | 14.1 |
| | | 2-bit with Mixed samples | 2.26 GB | ▲ 23.9 | ▲ 21.5 |
| | | 2-bit with Code samples | 2.26 GB | ▲ 24.1 | ▲ 19.4 |
| DeepSeek-Coder - Base | 7B | Float16 - Baseline | 13.48 GB | 45.8 | 41.4 |
| | | 8-bit with Random samples | 7.48 GB | 46.2 | 41.9 |
| | | 8-bit with Mixed samples | 7.48 GB | ▼ 45.4 | ▲ 43.2 |
| | | 8-bit with Code samples | 7.48 GB | ▼ 45.9 | ▼ 41.7 |
| | | 4-bit with Random samples | 4.00 GB | 45.2 | 41.4 |
| | | 4-bit with Mixed samples | 4.00 GB | ▼ 44.5 | ▲ 41.8 |
| | | 4-bit with Code samples | 4.00 GB | ▼ 44.2 | ▼ 40.6 |
| | | 3-bit with Random samples | 3.80 GB | 41.1 | 37.7 |
| | | 3-bit with Mixed samples | 3.80 GB | ▲ 43.7 | ▲ 39.1 |
| | | 3-bit with Code samples | 3.80 GB | ▲ 42.5 | ▲ 38.7 |
| | | 2-bit with Random samples | 2.27 GB | 27.6 | 23.2 |
| | | 2-bit with Mixed samples | 2.27 GB | ▲ 35.7 | ▲ 27.4 |
| | | 2-bit with Code samples | 2.27 GB | ▲ 34.8 | ▲ 27.5 |
McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|---|---|---|---|---|---|
| CodeLlama - Base | 7B | Float16 - Baseline | 13.48 GB | 12.9 | 29.3 |
| | | 8-bit with Random samples | 7.47 GB | 12.9 | 29.2 |
| | | 8-bit with Mixed samples | 7.47 GB | ▲ 13.7 | ▼ 28.6 |
| | | 8-bit with Code samples | 7.47 GB | ▼ 12.3 | ▲ 29.5 |
| | | 4-bit with Random samples | 4.00 GB | 15.2 | 25.3 |
| | | 4-bit with Mixed samples | 4.00 GB | ▼ 13.0 | ▲ 30.3 |
| | | 4-bit with Code samples | 4.00 GB | ▼ 11.1 | ▲ 25.8 |
| | | 3-bit with Random samples | 3.80 GB | 10.0 | 21.3 |
| | | 3-bit with Mixed samples | 3.80 GB | ▲ 12.3 | ▲ 25.5 |
| | | 3-bit with Code samples | 3.80 GB | ▲ 10.8 | ▼ 19.9 |
| | | 2-bit with Random samples | 2.26 GB | 5.6 | 11.4 |
| | | 2-bit with Mixed samples | 2.26 GB | ▲ 11.1 | ▲ 12.8 |
| | | 2-bit with Code samples | 2.26 GB | ▲ 6.1 | ▲ 12.8 |
| DeepSeek-Coder - Base | 7B | Float16 - Baseline | 13.48 GB | 41.8 | 42.6 |
| | | 8-bit with Random samples | 7.48 GB | 42.5 | 42.8 |
| | | 8-bit with Mixed samples | 7.48 GB | ▲ 42.7 | ▼ 42.5 |
| | | 8-bit with Code samples | 7.48 GB | ▼ 41.3 | ▼ 42.7 |
| | | 4-bit with Random samples | 4.00 GB | 40.7 | 45.9 |
| | | 4-bit with Mixed samples | 4.00 GB | ▼ 39.0 | ▼ 42.8 |
| | | 4-bit with Code samples | 4.00 GB | ▼ 39.8 | ▲ 46.3 |
| | | 3-bit with Random samples | 3.80 GB | 36.2 | 34.5 |
| | | 3-bit with Mixed samples | 3.80 GB | ▼ 35.5 | ▲ 42.8 |
| | | 3-bit with Code samples | 3.80 GB | ▲ 36.5 | ▲ 45.6 |
| | | 2-bit with Random samples | 2.27 GB | 13.7 | 23.6 |
| | | 2-bit with Mixed samples | 2.27 GB | ▲ 26.2 | ▲ 29.1 |
| | | 2-bit with Code samples | 2.27 GB | ▲ 24.6 | ▲ 28.0 |
RQ3. How does extreme quantization affect model accuracy across different model sizes?
MultiPL-E benchmark
| Model | Params | Precision | Size (GB) | Python pass@1 | Decrease (%) | Java pass@1 | Decrease (%) |
|---|---|---|---|---|---|---|---|
| CodeLlama - Base | 7B | Float16 | 13.48 | 29.8 | --- | 32.2 | --- |
| | | 2-bit | 2.26 | 23.9 | -19.8 | 21.5 | -33.2 |
| | | 2-bit + Fine-tuning | 2.26 | 25.5 | -14.4 | 26.5 | -17.7 |
| | 13B | Float16 | 24.25 | 34.3 | --- | 38.3 | --- |
| | | 2-bit | 3.98 | 30.9 | -9.9 | 27.7 | -27.7 |
| | | 2-bit + Fine-tuning | 3.98 | 30.1 | -12.2 | 32.8 | -14.4 |
| | 34B | Float16 | 62.74 | 41.9 | --- | 44.1 | --- |
| | | 2-bit | 9.54 | 37.1 | -11.5 | 32.7 | -25.9 |
| | | 2-bit + Fine-tuning | 9.54 | 36.0 | -14.1 | 36.1 | -18.1 |
| DeepSeek-Coder - Base | 1B | Float16 | 2.57 | 28.4 | --- | 28.8 | --- |
| | | 2-bit | 0.61 | 13.9 | -51.1 | 6.6 | -77.1 |
| | | 2-bit + Fine-tuning | 0.61 | 21.7 | -23.6 | 14.7 | -49.0 |
| | 7B | Float16 | 13.48 | 45.8 | --- | 41.4 | --- |
| | | 2-bit | 2.27 | 35.7 | -22.1 | 27.4 | -33.8 |
| | | 2-bit + Fine-tuning | 2.27 | 36.4 | -20.5 | 32.8 | -20.8 |
| | 33B | Float16 | 62.16 | 52.1 | --- | 47.3 | --- |
| | | 2-bit | 9.38 | 43.4 | -16.7 | 34.5 | -27.1 |
| | | 2-bit + Fine-tuning | 9.38 | 43.0 | -17.5 | 38.7 | -18.2 |
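The Decrease (%) columns report the relative drop with respect to the Float16 baseline of the same model family and size, per benchmark and language. A quick sanity check on the CodeLlama-7B rows above (our own illustration):

```python
# Relative decrease vs. the Float16 baseline, as reported in the "Decrease (%)" columns.
def decrease_pct(quantized: float, baseline: float) -> float:
    return (quantized - baseline) / baseline * 100

print(round(decrease_pct(23.9, 29.8), 1))  # -19.8  (CodeLlama-7B, Python, 2-bit)
print(round(decrease_pct(21.5, 32.2), 1))  # -33.2  (CodeLlama-7B, Java, 2-bit)
```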
McEval benchmark
| Model | Params | Precision | Size (GB) | Python pass@1 | Decrease (%) | Java pass@1 | Decrease (%) |
|---|---|---|---|---|---|---|---|
| CodeLlama - Base | 7B | Float16 | 13.48 | 12.9 | --- | 29.3 | --- |
| | | 2-bit | 2.26 | 11.1 | -14.0 | 12.8 | -56.3 |
| | | 2-bit + Fine-tuning | 2.26 | 13.0 | -0.8 | 18.3 | -37.5 |
| | 13B | Float16 | 24.25 | 18.9 | --- | 40.9 | --- |
| | | 2-bit | 3.98 | 9.4 | -50.3 | 22.3 | -45.5 |
| | | 2-bit + Fine-tuning | 3.98 | 10.4 | -45.0 | 27.8 | -32.0 |
| | 34B | Float16 | 62.74 | 29.0 | --- | 39.2 | --- |
| | | 2-bit | 9.54 | 17.6 | -39.3 | 25.2 | -35.7 |
| | | 2-bit + Fine-tuning | 9.54 | 19.0 | -34.5 | 31.6 | -19.4 |
| DeepSeek-Coder - Base | 1B | Float16 | 2.57 | 23.8 | --- | 42.0 | --- |
| | | 2-bit | 0.61 | 4.4 | -81.5 | 8.5 | -79.8 |
| | | 2-bit + Fine-tuning | 0.61 | 6.9 | -71.0 | 15.5 | -63.1 |
| | 7B | Float16 | 13.48 | 41.8 | --- | 42.6 | --- |
| | | 2-bit | 2.27 | 26.2 | -37.3 | 29.1 | -31.7 |
| | | 2-bit + Fine-tuning | 2.27 | 30.1 | -28.0 | 31.0 | -27.2 |
| | 33B | Float16 | 62.16 | 55.5 | --- | 57.0 | --- |
| | | 2-bit | 9.38 | 36.9 | -33.5 | 39.2 | -31.2 |
| | | 2-bit + Fine-tuning | 9.38 | 39.8 | -28.3 | 44.0 | -22.8 |
4. Replication
The scripts used to quantize and evaluate the models are available in our GitHub repository (link).
Model predictions, statistical results, and datasets are available in our Zenodo repository (link).