---
tags:
- hunyuan
- eagle3
- eagle
---

<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo_light.png?raw=true">
<img alt="AngelSlim" src="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo.png?raw=true" width=55%>
</picture>
</p>

<h3 align="center">
Dedicated to building a more intuitive, comprehensive, and efficient LLM compression toolkit.
</h3>

<p align="center">
📖 <a href="https://angelslim.readthedocs.io/">Documentation</a>&nbsp;&nbsp; | &nbsp;&nbsp;🤗 <a href="https://huggingface.co/AngelSlim">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/AngelSlim">ModelScope</a>&nbsp;&nbsp; | &nbsp;&nbsp;💬 <a href="./docs/source/assets/angel_slim_wechat.png">WeChat</a>
<br>
</p>

## Table of Contents

- [Latest Updates](#latest-updates)
- [Key Features](#key-features)
- [Supported Models](#supported-models)
- [How to Use](#how-to-use)
  - [Install AngelSlim](#install-angelslim)
  - [Quick Start](#quick-start)
  - [Deployment & Evaluation](#deployment)
- [Benchmark](#benchmark)
- [License](#license)
- [Citation](#citation)
- [Technical Discussion](#technical-discussion)

## 📣Latest Updates

- [25/07/04] We now support quantization for Hunyuan, Qwen2.5, Qwen3, DeepSeek-R1-Distill-Qwen, and other models, covering INT8, FP8, and INT4 algorithms.
We have also open-sourced the Eagle3 model weights for Qwen3-8B.

Coming soon:

- [ ] Support W4A8 quantization for DeepSeek-R1.
- [ ] Support quantization for multimodal models such as Qwen-VL.
- [ ] Release a new speculative-sampling algorithm.

## 🌟Key Features

- **Highly Integrated**: This toolkit integrates mainstream compression algorithms into a unified framework, offering developers one-click access with exceptional ease of use.
- **Continuous Innovation**: Beyond integrating widely used industry algorithms, we are continuously researching better compression algorithms, which will be open-sourced over time.
- **Performance-Driven**: We continuously optimize end-to-end performance in model compression workflows and algorithm deployment, for example enabling quantization of models such as Qwen3-235B and DeepSeek-R1 on a single GPU.

## 💼Supported Models

### Quantization
Quantization currently supports the following LLMs, including Hunyuan-Dense, Hunyuan-MoE, Qwen3-Dense, Qwen3-MoE, Qwen2.5, DeepSeek-R1 distilled Qwen models, and QwQ:

| Model | FP8-Dynamic | FP8-Static | INT8-Dynamic | INT4-GPTQ | INT4-AWQ |
| ----- | ----------- | ---------- | ------------ | --------- | -------- |
| [Hunyuan-Dense](https://huggingface.co/tencent/Hunyuan-7B-Instruct) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Hunyuan-MoE](https://huggingface.co/collections/tencent/hunyuan-a13b-685ec38e5b46321e3ea7c4be) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-Dense](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-MoE](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen2.5](https://huggingface.co/collections/AngelSlim/qwen2-25-quant-68652d6cbdf5c0d4b1c4499a) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [DeepSeek-R1-Distill-Qwen](https://huggingface.co/collections/AngelSlim/deepseek-r1-distill-quant-68652f16a9c206b030b05f7f) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [QwQ](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |

### Speculative Decoding
Eagle3 weights are now available for the Qwen3 and Hunyuan series models.

| Qwen3 Models | Hunyuan Models |
| ---------- | ---------- |
| ✅ [Qwen3-1.7B](https://huggingface.co/AngelSlim/Qwen3-1.7B_eagle3) | ✅ [Hunyuan-1.8B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-1.8B-Instruct_eagle3) |
| ✅ [Qwen3-4B](https://huggingface.co/AngelSlim/Qwen3-4B_eagle3) | ✅ [Hunyuan-4B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-4B-Instruct_eagle3) |
| ✅ [Qwen3-8B](https://huggingface.co/AngelSlim/Qwen3-8B_eagle3) | ✅ [Hunyuan-7B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-7B-Instruct_eagle3) |
| ✅ [Qwen3-14B](https://huggingface.co/AngelSlim/Qwen3-14B_eagle3) | |
| ✅ [Qwen3-32B](https://huggingface.co/AngelSlim/Qwen3-32B_eagle3) | |
| ✅ [Qwen3-30B-A3B](https://huggingface.co/AngelSlim/Qwen3-a3B_eagle3) | |

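To try these weights, recent `vLLM` releases can attach an Eagle-style draft model through `speculative_config`. The sketch below pairs `Qwen3-8B` with its Eagle3 weights; the exact config keys vary across vLLM versions, so treat it as an illustrative starting point rather than the officially supported launch path:

```python
# Illustrative only: pairing a Qwen3 base model with its Eagle3 draft
# weights via vLLM's speculative decoding. The config keys below follow
# recent vLLM releases and may differ in other versions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-8B",
    speculative_config={
        "method": "eagle3",
        "model": "AngelSlim/Qwen3-8B_eagle3",
        "num_speculative_tokens": 3,  # draft length per step (example value)
    },
)
outputs = llm.generate(
    ["Briefly explain speculative decoding."],
    SamplingParams(temperature=0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```
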
## 🛎️How to Use

### Install AngelSlim

We recommend using `pip` to install the latest stable version of `AngelSlim`:

```shell
pip install angelslim
```
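
A quick import check (assuming the package is importable as `angelslim`, matching the PyPI name) confirms the installation:

```python
# Sanity check: verify the installed package can be imported.
# Assumes the PyPI package `angelslim` exposes a module of the same name.
import angelslim  # noqa: F401

print("AngelSlim imported successfully")
```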

Alternatively, you can clone the repository and install from source:

```shell
git clone https://github.com/Tencent/AngelSlim.git
cd AngelSlim && python setup.py install
```

For more detailed installation instructions, please refer to the [Installation Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/installation.html).

### Quick Start

After installing `AngelSlim`, you can get started quickly by running the following script to perform static `FP8` quantization on the `Qwen3-1.7B` model:

* One-click Start

```shell
python3 tools/run.py -c configs/qwen3/fp8_static/qwen3-1_7b_fp8_static.yaml
```

This example loads the Hugging Face model, performs activation calibration using the `dataset` specified in the config file, and saves the quantized model weights.

* Code-based Start

To perform dynamic `FP8` quantization on `Qwen3-1.7B`:

```python
from angelslim.engine import Engine

slim_engine = Engine()
# Prepare the model
slim_engine.prepare_model(model_name="Qwen", model_path="Qwen/Qwen3-1.7B")
# Initialize the compressor
slim_engine.prepare_compressor("PTQ", default_method="fp8_dynamic")
# Compress the model
slim_engine.run()
# Save the compressed model
slim_engine.save("./output")
```
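
The checkpoint saved in `./output` can then be loaded for offline inference. A minimal sketch, assuming the exported weights are in a format vLLM can consume directly (as with the FP8 checkpoints above):

```python
# Minimal offline-inference sketch: load the quantized checkpoint saved
# by slim_engine.save("./output"). Assumes the export format is directly
# loadable by vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="./output")
params = SamplingParams(temperature=0.7, max_tokens=256)
result = llm.generate(["What is model quantization?"], params)
print(result[0].outputs[0].text)
```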

For more details, please refer to the [Quick Start Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/quickstrat.html).

### 🖥️ Deployment and Testing

#### 1. API Service Deployment

After specifying the quantized model path `MODEL_PATH`, you can deploy an OpenAI-compatible API service using the following LLM inference frameworks:

**vLLM**

Use the following script to launch a [vLLM](https://github.com/vllm-project/vllm) server; version `vllm>=0.8.5.post1` is recommended, and `vllm>=0.9.0` is required for MoE INT8 quantized models.

```shell
bash deploy/run_vllm.sh $MODEL_PATH
```

**SGLang**

Use the following script to launch an [SGLang](https://github.com/sgl-project/sglang) server; version `sglang>=0.4.6.post1` is recommended.

```shell
bash deploy/run_sglang.sh $MODEL_PATH
```

#### 2. Service Invocation

Send requests in [OpenAI's API format](https://platform.openai.com/docs/api-reference/introduction):

```shell
bash deploy/openai.sh $MODEL_PATH
```
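
You can also call the deployed service from Python with the official `openai` client. A minimal sketch, assuming the server from step 1 listens on `http://localhost:8000/v1`; adjust the base URL and model name to match your deployment:

```python
# Minimal sketch of calling the OpenAI-compatible endpoint exposed by
# vLLM/SGLang. The port and model name are assumptions; match them to
# how the server was actually launched.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="<MODEL_PATH>",  # replace with the model name served by the server
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```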

#### 3. Performance Evaluation

Evaluate the performance of the quantized model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness); version `lm-eval>=0.4.8` is recommended:

```shell
bash deploy/lm_eval.sh $MODEL_PATH
```
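
The harness can also be driven from Python. A small sketch, assuming the quantized checkpoint sits in `./output` and evaluating on `GSM8K` (the task and batch size are illustrative choices):

```python
# Illustrative use of lm-evaluation-harness's Python API (lm-eval>=0.4)
# to score a local checkpoint; the task and batch size are example choices.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                        # Hugging Face backend
    model_args="pretrained=./output",  # path to the quantized model
    tasks=["gsm8k"],
    batch_size=8,
)
print(results["results"]["gsm8k"])
```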

For more details, please refer to the [Deployment Documentation](https://angelslim.readthedocs.io/zh-cn/latest/deployment/deploy.html).


## 📈 Benchmark

### (1) Quantization

The performance test results for selected models are shown below. For the complete benchmark, refer to the [Benchmark documentation](https://angelslim.readthedocs.io/zh-cn/latest/performance/quantization/benchmarks.html).

#### Hunyuan Series Models

Benchmark results for the `Hunyuan-A13B-Instruct` model with `FP8` and `INT4-GPTQ` quantization algorithms on datasets including `AIME 2024`, `GSM8K`, `BBH`, and `DROP`:

| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-FP8 | Hunyuan-A13B-Instruct-Int4-GPTQ |
|:---------:|:---------------------:|:-------------------------:|:-------------------------------:|
| AIME 2024 | 87.3 | 86.7 | 86.7 |
| GSM8K | 94.39 | 94.01 | 94.24 |
| BBH | 89.1 | 88.34 | 87.91 |
| DROP | 91.1 | 91.1 | 91.05 |

#### Qwen3 Series Models

Benchmark results for Qwen3 series models with `FP8-Static`, `FP8-Dynamic`, `INT8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU`, `GSM8K`, and `HUMANEVAL`:

<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th><th>HUMANEVAL</th></tr>
</thead>
<tbody>
<tr><td rowspan="4">Qwen3-0.6B</td><td>BF16</td><td>45.84</td><td>47.21</td><td>42.99</td><td>19.51</td></tr>
<tr><td>FP8-Static</td><td>45.99</td><td>46.87</td><td>38.06</td><td>18.90</td></tr>
<tr><td>FP8-Dynamic</td><td>45.99</td><td>46.93</td><td>38.29</td><td>20.73</td></tr>
<tr><td>INT8-Dynamic</td><td>45.17</td><td>46.95</td><td>41.17</td><td>21.34</td></tr>
<tr><td rowspan="6">Qwen3-8B</td><td>BF16</td><td>79.27</td><td>74.78</td><td>87.79</td><td>63.41</td></tr>
<tr><td>FP8-Static</td><td>78.23</td><td>74.79</td><td>86.96</td><td>62.20</td></tr>
<tr><td>FP8-Dynamic</td><td>78.45</td><td>74.75</td><td>87.64</td><td>62.80</td></tr>
<tr><td>INT8-Dynamic</td><td>78.01</td><td>74.84</td><td>86.96</td><td>67.07</td></tr>
<tr><td>INT4-GPTQ</td><td>77.19</td><td>73.26</td><td>86.43</td><td>62.20</td></tr>
<tr><td>INT4-AWQ</td><td>76.15</td><td>73.59</td><td>86.96</td><td>63.41</td></tr>
<tr><td rowspan="6">Qwen3-14B</td><td>BF16</td><td>83.06</td><td>78.90</td><td>88.40</td><td>55.49</td></tr>
<tr><td>FP8-Static</td><td>82.62</td><td>78.57</td><td>89.46</td><td>57.32</td></tr>
<tr><td>FP8-Dynamic</td><td>82.24</td><td>78.92</td><td>88.32</td><td>52.44</td></tr>
<tr><td>INT8-Dynamic</td><td>81.87</td><td>78.13</td><td>86.28</td><td>56.10</td></tr>
<tr><td>INT4-GPTQ</td><td>81.05</td><td>78.02</td><td>87.34</td><td>57.93</td></tr>
<tr><td>INT4-AWQ</td><td>82.02</td><td>77.68</td><td>84.23</td><td>61.59</td></tr>
<tr><td rowspan="5">Qwen3-32B</td><td>BF16</td><td>86.55</td><td>82.00</td><td>74.53</td><td>37.80</td></tr>
<tr><td>FP8-Static</td><td>86.92</td><td>81.78</td><td>70.20</td><td>39.63</td></tr>
<tr><td>FP8-Dynamic</td><td>86.55</td><td>81.89</td><td>70.43</td><td>38.41</td></tr>
<tr><td>INT4-GPTQ</td><td>86.18</td><td>81.01</td><td>-</td><td>43.29</td></tr>
<tr><td>INT4-AWQ</td><td>86.18</td><td>81.54</td><td>-</td><td>36.59</td></tr>
<tr><td rowspan="4">Qwen3-30B-A3B</td><td>BF16</td><td>83.66</td><td>79.36</td><td>89.99</td><td>31.71</td></tr>
<tr><td>FP8-Static</td><td>83.95</td><td>79.47</td><td>89.01</td><td>31.10</td></tr>
<tr><td>FP8-Dynamic</td><td>84.10</td><td>79.40</td><td>89.16</td><td>32.93</td></tr>
<tr><td>INT8-Dynamic</td><td>83.36</td><td>79.48</td><td>89.16</td><td>34.15</td></tr>
<tr><td rowspan="4">Qwen3-235B-A22B</td><td>BF16</td><td>89.60</td><td>86.28</td><td>85.29</td><td>27.44</td></tr>
<tr><td>FP8-Static</td><td>89.67</td><td>86.19</td><td>86.96</td><td>27.44</td></tr>
<tr><td>FP8-Dynamic</td><td>89.67</td><td>86.18</td><td>85.22</td><td>28.05</td></tr>
<tr><td>INT8-Dynamic</td><td>88.93</td><td>86.20</td><td>86.20</td><td>23.78</td></tr>
<tr><td rowspan="5">QwQ-32B</td><td>BF16</td><td>85.74</td><td>82.03</td><td>73.31</td><td>42.68</td></tr>
<tr><td>FP8-Static</td><td>85.44</td><td>81.91</td><td>75.36</td><td>42.68</td></tr>
<tr><td>FP8-Dynamic</td><td>85.07</td><td>81.93</td><td>75.66</td><td>42.07</td></tr>
<tr><td>INT4-GPTQ</td><td>84.03</td><td>81.26</td><td>68.23</td><td>45.73</td></tr>
<tr><td>INT4-AWQ</td><td>83.58</td><td>81.01</td><td>68.69</td><td>43.29</td></tr>
</tbody>
</table>

#### Other Models

Benchmark results for other models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU`, and `GSM8K`:

<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th></tr>
</thead>
<tbody>
<tr><td rowspan="3">Qwen2.5-1.5B-Instruct</td><td>BF16</td><td>67.01</td><td>60.05</td><td>54.28</td></tr>
<tr><td>FP8-Static</td><td>66.27</td><td>60.23</td><td>-</td></tr>
<tr><td>FP8-Dynamic</td><td>66.79</td><td>60.08</td><td>51.71</td></tr>
<tr><td rowspan="5">Qwen2.5-7B-Instruct</td><td>BF16</td><td>81.20</td><td>74.55</td><td>79.98</td></tr>
<tr><td>FP8-Static</td><td>81.13</td><td>74.03</td><td>79.30</td></tr>
<tr><td>FP8-Dynamic</td><td>80.31</td><td>74.07</td><td>79.00</td></tr>
<tr><td>INT4-GPTQ</td><td>79.05</td><td>73.05</td><td>74.75</td></tr>
<tr><td>INT4-AWQ</td><td>79.35</td><td>73.22</td><td>79.38</td></tr>
<tr><td rowspan="5">Qwen2.5-32B-Instruct</td><td>BF16</td><td>87.30</td><td>83.21</td><td>81.73</td></tr>
<tr><td>FP8-Static</td><td>87.59</td><td>83.08</td><td>81.58</td></tr>
<tr><td>FP8-Dynamic</td><td>87.30</td><td>83.04</td><td>81.58</td></tr>
<tr><td>INT4-GPTQ</td><td>86.70</td><td>82.45</td><td>82.03</td></tr>
<tr><td>INT4-AWQ</td><td>87.00</td><td>82.64</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-7B</td><td>BF16</td><td>53.49</td><td>53.80</td><td>75.74</td></tr>
<tr><td>FP8-Static</td><td>53.57</td><td>54.17</td><td>76.19</td></tr>
<tr><td>FP8-Dynamic</td><td>52.97</td><td>54.13</td><td>74.15</td></tr>
<tr><td>INT4-GPTQ</td><td>51.86</td><td>52.44</td><td>75.89</td></tr>
<tr><td>INT4-AWQ</td><td>53.49</td><td>53.70</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-14B</td><td>BF16</td><td>77.71</td><td>74.28</td><td>85.67</td></tr>
<tr><td>FP8-Static</td><td>77.56</td><td>74.66</td><td>86.73</td></tr>
<tr><td>FP8-Dynamic</td><td>76.82</td><td>74.63</td><td>87.11</td></tr>
<tr><td>INT4-GPTQ</td><td>74.29</td><td>72.37</td><td>84.61</td></tr>
<tr><td>INT4-AWQ</td><td>74.81</td><td>73.00</td><td>86.05</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-32B</td><td>BF16</td><td>84.18</td><td>80.89</td><td>87.41</td></tr>
<tr><td>FP8-Static</td><td>83.43</td><td>80.90</td><td>87.57</td></tr>
<tr><td>FP8-Dynamic</td><td>83.73</td><td>81.10</td><td>86.43</td></tr>
<tr><td>INT4-GPTQ</td><td>84.10</td><td>79.80</td><td>86.73</td></tr>
<tr><td>INT4-AWQ</td><td>82.84</td><td>80.15</td><td>87.19</td></tr>
</tbody>
</table>

### (2) Speculative Decoding

#### Qwen3 Series Models
Benchmark results for Qwen3 series models with the `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:

<table>
<thead>
<tr>
<th>&nbsp;</th><th>&nbsp;</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<tr><td rowspan="6"><strong>T=0</strong></td>
<td>Qwen3-1.7B</td><td>2.05x</td><td>2.81</td><td>2.07x</td><td>2.93</td><td>2.11x</td><td>2.98</td><td>1.93x</td><td>2.69</td><td>2.04x</td><td>2.85</td></tr>
<tr><td>Qwen3-4B</td><td>2.21x</td><td>3.01</td><td>2.36x</td><td>3.24</td><td>2.42x</td><td>3.13</td><td>2.32x</td><td>2.75</td><td>2.33x</td><td>3.03</td></tr>
<tr><td>Qwen3-8B</td><td>2.65x</td><td>3.87</td><td>2.64x</td><td>3.82</td><td>2.86x</td><td>4.10</td><td>2.58x</td><td>3.55</td><td>2.68x</td><td>3.83</td></tr>
<tr><td>Qwen3-14B</td><td>2.42x</td><td>3.38</td><td>2.57x</td><td>3.58</td><td>2.75x</td><td>3.77</td><td>2.27x</td><td>3.11</td><td>2.50x</td><td>3.46</td></tr>
<tr><td>Qwen3-32B</td><td>2.39x</td><td>2.78</td><td>2.37x</td><td>2.81</td><td>2.47x</td><td>2.92</td><td>2.42x</td><td>2.53</td><td>2.41x</td><td>2.76</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>2.84x</td><td>3.63</td><td>2.27x</td><td>3.09</td><td>2.64x</td><td>3.42</td><td>2.83x</td><td>3.56</td><td>2.64x</td><td>3.42</td></tr>
<tr><td rowspan="6"><strong>T=1</strong></td>
<td>Qwen3-1.7B</td><td>1.74x</td><td>2.53</td><td>1.86x</td><td>2.70</td><td>1.82x</td><td>2.69</td><td>1.72x</td><td>2.46</td><td>1.93x</td><td>2.60</td></tr>
<tr><td>Qwen3-4B</td><td>1.93x</td><td>2.60</td><td>2.00x</td><td>2.84</td><td>2.11x</td><td>2.82</td><td>2.34x</td><td>2.50</td><td>1.75x</td><td>2.69</td></tr>
<tr><td>Qwen3-8B</td><td>1.91x</td><td>2.84</td><td>2.07x</td><td>3.05</td><td>2.34x</td><td>3.26</td><td>2.09x</td><td>2.92</td><td>2.10x</td><td>3.02</td></tr>
<tr><td>Qwen3-14B</td><td>1.81x</td><td>2.58</td><td>1.96x</td><td>2.81</td><td>2.16x</td><td>3.09</td><td>1.76x</td><td>2.49</td><td>1.92x</td><td>2.74</td></tr>
<tr><td>Qwen3-32B</td><td>1.62x</td><td>1.91</td><td>1.71x</td><td>2.05</td><td>1.78x</td><td>2.10</td><td>1.80x</td><td>1.95</td><td>1.62x</td><td>2.00</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>1.91x</td><td>2.46</td><td>2.00x</td><td>2.64</td><td>1.90x</td><td>2.53</td><td>1.80x</td><td>2.32</td><td>1.90x</td><td>2.48</td></tr>
</tbody>
</table>

#### Hunyuan Series Models
Benchmark results for Hunyuan series models with the `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:

<table>
<thead>
<tr>
<th>&nbsp;</th><th>&nbsp;</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<tr><td rowspan="3"><strong>T=0</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.97x</td><td>2.90</td><td>2.58x</td><td>3.73</td><td>2.61x</td><td>3.71</td><td>1.71x</td><td>2.43</td><td>2.22x</td><td>3.19</td></tr>
<tr><td>Hunyuan-4B-Instruct</td><td>1.77x</td><td>2.60</td><td>2.64x</td><td>3.35</td><td>2.14x</td><td>3.17</td><td>1.72x</td><td>2.57</td><td>2.07x</td><td>2.92</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>2.22x</td><td>3.58</td><td>3.59x</td><td>5.47</td><td>2.96x</td><td>4.68</td><td>1.64x</td><td>2.56</td><td>2.60x</td><td>4.07</td></tr>
<tr><td rowspan="3"><strong>T=1</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.58x</td><td>2.36</td><td>2.35x</td><td>3.56</td><td>2.23x</td><td>3.38</td><td>1.26x</td><td>1.87</td><td>1.86x</td><td>2.79</td></tr>
<tr><td>Hunyuan-4B-Instruct</td><td>1.36x</td><td>2.05</td><td>1.97x</td><td>2.86</td><td>1.72x</td><td>2.68</td><td>1.14x</td><td>1.76</td><td>1.55x</td><td>2.34</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>1.90x</td><td>3.11</td><td>3.12x</td><td>5.09</td><td>2.74x</td><td>4.34</td><td>1.47x</td><td>2.39</td><td>2.31x</td><td>3.73</td></tr>
</tbody>
</table>

## 📝 License

The code for this project is open-sourced under the [License for AngelSlim](LICENSE).

## 🔗 Citation

```bibtex
@software{AngelSlim2025,
    title={{AngelSlim}},
    author={Tencent AngelSlim Project Contributors},
    year={2025},
    month={6},
    url={https://github.com/Tencent/AngelSlim},
}
```

## 💬 Technical Discussion

* AngelSlim is iterating rapidly, and new features will be released soon. If you have any questions or suggestions, please open an issue on GitHub or join our [WeChat technical discussion group](https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/angel_slim_wechat.png?raw=true).