Commit 0c1ec85 (verified) by nm-research · Parent: 6461fb3

Create README.md

Files changed (1): README.md (+291, −0)

README.md ADDED
---
license: mit
tags:
- deepseek
- int8
- vllm
- llmcompressor
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: transformers
---

# DeepSeek-R1-Distill-Llama-70B-quantized.w8a8

## Model Overview
- **Model Architecture:** LlamaForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
  - **Activation quantization:** INT8
- **Release Date:** 2/3/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B).


### Model Optimizations

This model was obtained by quantizing the weights and activations of [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) to the INT8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
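
As a rough sanity check on those figures, here is a back-of-the-envelope sketch that treats the model as roughly 70B parameters and ignores the KV cache, activations, and any layers left unquantized:

```python
# Back-of-the-envelope weight-memory estimate; ~70e9 parameters is an assumption.
params = 70e9
bf16_gib = params * 2 / 1024**3   # 2 bytes per parameter at BF16/FP16
int8_gib = params * 1 / 1024**3   # 1 byte per parameter at INT8
print(f"BF16 weights ~{bf16_gib:.0f} GiB, INT8 weights ~{int8_gib:.0f} GiB")
# -> BF16 weights ~130 GiB, INT8 weights ~65 GiB (roughly a 50% reduction)
```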

Only the weights and activations of the linear operators within transformer blocks are quantized.
Weights are quantized using a symmetric per-channel scheme, whereas activations are quantized using a symmetric per-token scheme.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
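
The snippet below is an illustrative sketch (not the llm-compressor implementation) of what symmetric per-channel weight quantization and symmetric per-token activation quantization look like for a single linear layer:

```python
import torch

def quantize_symmetric_int8(x: torch.Tensor, dim: int):
    # One scale per slice along `dim`; symmetric, so the zero-point is 0 and the
    # max-magnitude value maps to 127.
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

weight = torch.randn(4096, 4096)     # [out_features, in_features]
activation = torch.randn(16, 4096)   # [num_tokens, hidden_size]

w_q, w_scale = quantize_symmetric_int8(weight, dim=1)      # per output channel
a_q, a_scale = quantize_symmetric_int8(activation, dim=1)  # per token

# The dequantized views approximate the original tensors that the INT8 matmul replaces
w_err = (w_q.float() * w_scale - weight).abs().max()
a_err = (a_q.float() * a_scale - activation).abs().max()
print(w_err, a_err)
```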


## Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

number_gpus = 2
model_name = "neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w8a8"

tokenizer = AutoTokenizer.from_pretrained(model_name)
sampling_params = SamplingParams(temperature=0.6, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
llm = LLM(model=model_name, tensor_parallel_size=number_gpus, trust_remote_code=True)

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
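
For example, the model can be launched with `vllm serve neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w8a8 --tensor-parallel-size 2` and then queried with any OpenAI-compatible client. A minimal sketch, assuming the server is listening on vLLM's default port 8000:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (default: http://localhost:8000/v1).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w8a8",
    messages=[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
    temperature=0.6,
    max_tokens=256,
)
print(response.choices[0].message.content)
```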

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.


```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.compression.helpers import calculate_offload_device_map

# Load model
model_stub = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"
model_name = model_stub.split("/")[-1]

num_samples = 1024
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_stub)

device_map = calculate_offload_device_map(
    model_stub,
    reserve_for_hessians=True,
    num_gpus=2,
    torch_dtype="auto",
)

model = AutoModelForCausalLM.from_pretrained(
    model_stub,
    device_map=device_map,
    torch_dtype="auto",
)

# Load the calibration dataset and render each sample with the chat template
def preprocess_fn(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}

ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.map(preprocess_fn)

# Configure the quantization algorithm and scheme
recipe = [
    SmoothQuantModifier(smoothing_strength=0.7),
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        ignore=["lm_head"],
        dampening_frac=0.1,
    ),
]

# Apply quantization
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

# Save to disk in compressed-tensors format
save_path = model_name + "-quantized.w8a8"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
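
One way to sanity-check the export is to inspect the quantization metadata that the compressed-tensors format records in the checkpoint's `config.json`. A minimal sketch; the directory name mirrors the `save_path` used above, and the presence of a `quantization_config` entry is an assumption:

```python
import json
import os

# Directory written by save_pretrained in the snippet above (assumed name).
save_path = "DeepSeek-R1-Distill-Llama-70B-quantized.w8a8"

with open(os.path.join(save_path, "config.json")) as f:
    config = json.load(f)

# Expect a "quantization_config" entry describing the compressed-tensors scheme.
print(json.dumps(config.get("quantization_config", {}), indent=2))
```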

## Evaluation

The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) and [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/), using the following commands:

OpenLLM Leaderboard V1:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w8a8",dtype=auto,max_model_len=4096,tensor_parallel_size=2,enable_chunked_prefill=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

OpenLLM Leaderboard V2:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w8a8",dtype=auto,max_model_len=4096,tensor_parallel_size=2,enable_chunked_prefill=True \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --tasks leaderboard \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

### Accuracy

<table>
  <thead>
    <tr>
      <th>Category</th>
      <th>Metric</th>
      <th>deepseek-ai/DeepSeek-R1-Distill-Llama-70B</th>
      <th>neuralmagic/DeepSeek-R1-Distill-Llama-70B-quantized.w8a8</th>
      <th>Recovery</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="7"><b>OpenLLM V1</b></td>
      <td>ARC-Challenge (Acc-Norm, 25-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>GSM8K (Strict-Match, 5-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>HellaSwag (Acc-Norm, 10-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>MMLU (Acc, 5-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>TruthfulQA (MC2, 0-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>Winogrande (Acc, 5-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td><b>Average Score</b></td>
      <td><b></b></td>
      <td><b></b></td>
      <td><b>%</b></td>
    </tr>
    <tr>
      <td rowspan="7"><b>OpenLLM V2</b></td>
      <td>IFEval (Inst Level Strict Acc, 0-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>BBH (Acc-Norm, 3-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>Math-Hard (Exact-Match, 4-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>GPQA (Acc-Norm, 0-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>MUSR (Acc-Norm, 0-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>MMLU-Pro (Acc, 5-shot)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td><b>Average Score</b></td>
      <td><b></b></td>
      <td><b></b></td>
      <td><b>%</b></td>
    </tr>
    <tr>
      <td rowspan="4"><b>Coding</b></td>
      <td>HumanEval (pass@1)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>HumanEval (pass@10)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>HumanEval+ (pass@1)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
    <tr>
      <td>HumanEval+ (pass@10)</td>
      <td></td>
      <td></td>
      <td>%</td>
    </tr>
  </tbody>
</table>
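
The Recovery column expresses the quantized model's score as a percentage of the unquantized baseline's score. A small helper sketch; the numbers below are hypothetical, since the table above is not yet populated:

```python
def recovery(baseline_score: float, quantized_score: float) -> float:
    """Quantized score as a percentage of the baseline score."""
    return 100.0 * quantized_score / baseline_score

# Hypothetical example values, for illustration only:
print(f"{recovery(86.2, 85.9):.2f}%")  # -> 99.65%
```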