ramblingpolymath committed on
Commit c97ec59 · verified · 1 Parent(s): 5a3809e

Update README.md

Files changed (1):
  1. README.md +405 -53

README.md CHANGED
@@ -1,7 +1,6 @@
  ---
  library_name: transformers
  license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
  pipeline_tag: text-generation
  base_model: Qwen/Qwen3-32B
  tags:
@@ -10,94 +9,452 @@ tags:
  - llm-compressor
  ---

- # Qwen3-32B W8A8 Quantized

- This is a W8A8 quantized version of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) using [LLM-Compressor](https://github.com/vllm-project/llm-compressor).

- ## Quantization Details

- - **Base Model**: Qwen/Qwen3-32B
- - **Quantization Method**: W8A8 (8-bit weights, 8-bit activations)
- - **Quantization Framework**: LLM-Compressor
- - **Model Size**: Significantly reduced from original 32.8B parameters
- - **Precision**: INT8 for both weights and activations

- ## Performance Considerations

- - **Memory Usage**: Significantly reduced memory footprint compared to the original FP16/BF16 model
- - **Inference Speed**: Faster inference due to reduced precision and smaller model size
- - **Accuracy**: Minimal accuracy loss compared to the original model (specific benchmarks may vary)

- ## Hardware Requirements

- This quantized model has lower hardware requirements than the original:
- - **Memory**: Approximately 2x less GPU memory required
- - **Compute**: Compatible with INT8 tensor operations
- - **Recommended**: GPUs with tensor core support for optimal INT8 performance

- ## All Original Features Preserved

- This quantized model retains all the capabilities of the original Qwen3-32B:

- - **Thinking Mode Support**: Seamless switching between thinking and non-thinking modes
- - **Enhanced Reasoning**: Superior performance in mathematics, code generation, and logical reasoning
- - **Multilingual Support**: 100+ languages and dialects
- - **Agent Capabilities**: Tool calling and external integration
- - **Long Context**: Native 32,768 token support, extensible to 131,072 with YaRN

  ## Switching Between Thinking and Non-Thinking Mode

- The quantized model supports the same thinking mode controls as the original:

- ### `enable_thinking=True` (Default)
  ```python
  text = tokenizer.apply_chat_template(
      messages,
      tokenize=False,
      add_generation_prompt=True,
-     enable_thinking=True
  )
  ```

  ### `enable_thinking=False`

  ```python
  text = tokenizer.apply_chat_template(
      messages,
      tokenize=False,
      add_generation_prompt=True,
-     enable_thinking=False
  )
  ```

- ## Best Practices

- Follow the same best practices as the original model:

- 1. **Sampling Parameters**:
-    - Thinking mode: `Temperature=0.6`, `TopP=0.95`, `TopK=20`, `MinP=0`
-    - Non-thinking mode: `Temperature=0.7`, `TopP=0.8`, `TopK=20`, `MinP=0`

- 2. **Output Length**: Use 32,768 tokens for most queries, 38,912 for complex problems

- 3. **Avoid Greedy Decoding**: Do not use greedy decoding in thinking mode

- ## Original Model Information

- For complete documentation, benchmarks, and detailed usage instructions, please refer to the original [Qwen3-32B model card](https://huggingface.co/Qwen/Qwen3-32B).

- ### Key Specifications (from original model):
- - **Type**: Causal Language Models
- - **Parameters**: 32.8B total, 31.2B non-embedding
- - **Layers**: 64
- - **Attention Heads**: 64 for Q, 8 for KV (GQA)
- - **Context Length**: 32,768 tokens natively, 131,072 with YaRN

- ## Citation

- If you use this quantized model, please cite the original Qwen3 work.

- ```bibtex
  @misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
@@ -107,9 +464,4 @@ If you use this quantized model, please cite the original Qwen3 work.
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
  }
-
- ```
-
- ## Disclaimer
-
- This is an unofficial quantized version. For the official model and support, please refer to the original [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) repository.
  ---
  library_name: transformers
  license: apache-2.0
  pipeline_tag: text-generation
  base_model: Qwen/Qwen3-32B
  tags:
  - llm-compressor
  ---

+ ```
+ ██╗    ██╗ █████╗  █████╗  █████╗
+ ██║    ██║██╔══██╗██╔══██╗██╔══██╗
+ ██║ █╗ ██║╚█████╔╝███████║╚█████╔╝
+ ██║███╗██║██╔══██╗██╔══██║██╔══██╗
+ ╚███╔███╔╝╚█████╔╝██║  ██║╚█████╔╝
+  ╚══╝╚══╝  ╚════╝ ╚═╝  ╚═╝ ╚════╝
+ 🗜️ COMPRESSED & OPTIMIZED 🚀
+ ```
+
+ # Qwen3-32B - W8A8 Quantized
+
+ A W8A8 (8-bit weights and activations) quantized version of Qwen/Qwen3-32B, produced with **LLM-Compressor**. A minimal inference sketch follows the bullets below.
+
+ - 🗜️ **Memory**: ~50% reduction vs FP16
+ - 🚀 **Speed**: Faster inference on hardware with INT8 support
+ - 🔗 **Original model**: https://huggingface.co/Qwen/Qwen3-32B
+ - 🏗️ **Recommended architectures**: Ampere and older
+
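+ Not part of the original card: the sketch below loads the W8A8 checkpoint with vLLM's offline API. It assumes vLLM (>=0.8.5, with compressed-tensors/INT8 support) is installed and that `./Qwen3-32B-W8A8` is the directory written by the compression script in the details block below; substitute your local path or this repository's hub id.
+
+ ```python
+ # Sketch only: vLLM is expected to pick up the quantization settings from the
+ # checkpoint's config; the model path is an assumption, not a published id.
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(model="./Qwen3-32B-W8A8")
+ params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, max_tokens=1024)
+ # For chat-formatted prompts, apply the tokenizer's chat template first (see Quickstart below).
+ outputs = llm.generate(["Give me a short introduction to large language models."], params)
+ print(outputs[0].outputs[0].text)
+ ```
+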
+ <details>
+ <summary>Click to view compression config</summary>
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ from llmcompressor.modifiers.quantization import GPTQModifier
+ from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
+ from llmcompressor.transformers import oneshot
+
+ # Select model and load it.
+ model_id = "Qwen/Qwen3-32B"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype="auto",
+     device_map="auto",
+     low_cpu_mem_usage=True,
+     offload_folder="./offload_tmp",  # offload weights that do not fit in GPU memory
+     max_memory={0: "22GB", 1: "22GB", "cpu": "64GB"},
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # Select calibration dataset.
+ DATASET_ID = "HuggingFaceH4/ultrachat_200k"
+ DATASET_SPLIT = "train_sft"
+
+ # Select number of samples. 512 samples is a good place to start.
+ # Increasing the number of samples can improve accuracy.
+ NUM_CALIBRATION_SAMPLES = 512
+ MAX_SEQUENCE_LENGTH = 2048
+
+ # Load dataset and preprocess.
+ ds = load_dataset(DATASET_ID, split=f"{DATASET_SPLIT}[:{NUM_CALIBRATION_SAMPLES}]")
+ ds = ds.shuffle(seed=42)
+
+
+ def preprocess(example):
+     return {
+         "text": tokenizer.apply_chat_template(
+             example["messages"],
+             tokenize=False,
+         )
+     }
+
+
+ ds = ds.map(preprocess)
+
+
+ # Tokenize inputs.
+ def tokenize(sample):
+     return tokenizer(
+         sample["text"],
+         padding=False,
+         max_length=MAX_SEQUENCE_LENGTH,
+         truncation=True,
+         add_special_tokens=False,
+     )
+
+
+ ds = ds.map(tokenize, remove_columns=ds.column_names)
+
+ # Configure the quantization algorithm to run.
+ # * apply SmoothQuant to make the activations easier to quantize
+ # * quantize the weights to int8 with GPTQ (static per channel)
+ # * quantize the activations to int8 (dynamic per token)
+ recipe = [
+     SmoothQuantModifier(smoothing_strength=0.8),
+     GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
+ ]
+ # Apply algorithms.
+ oneshot(
+     model=model,
+     dataset=ds,
+     recipe=recipe,
+     max_seq_length=MAX_SEQUENCE_LENGTH,
+     num_calibration_samples=NUM_CALIBRATION_SAMPLES,
+     output_dir="./Qwen3-32B-W8A8",  # also write the quantized model here
+ )
+
+ # Save to disk compressed.
+ SAVE_DIR = model_id.rstrip("/").split("/")[-1] + "-W8A8"
+ model.save_pretrained(SAVE_DIR, save_compressed=True)
+ tokenizer.save_pretrained(SAVE_DIR)
+ ```
+
+ </details>

+ ---
+
+ ## 📄 Original Model README
+
+ # Qwen3-32B
+ <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
+     <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+
+ ## Qwen3 Highlights

+ Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:

+ - **Unique support for seamlessly switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
+ - **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
+ - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
+ - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
+ - **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.

+ ## Model Overview

+ **Qwen3-32B** has the following features (see the config check sketched after this list):
+ - Type: Causal Language Models
+ - Training Stage: Pretraining & Post-training
+ - Number of Parameters: 32.8B
+ - Number of Parameters (Non-Embedding): 31.2B
+ - Number of Layers: 64
+ - Number of Attention Heads (GQA): 64 for Q and 8 for KV
+ - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
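+
+ Not part of the original card: a quick check of the numbers above against the released config (assumes `transformers` is installed and `Qwen/Qwen3-32B` can be fetched).
+
+ ```python
+ from transformers import AutoConfig
+
+ cfg = AutoConfig.from_pretrained("Qwen/Qwen3-32B")
+ # Expect 64 layers, 64 query heads, and 8 key/value heads (GQA).
+ print(cfg.num_hidden_layers, cfg.num_attention_heads, cfg.num_key_value_heads)
+ # 40,960 by default; see the YaRN notes under "Processing Long Texts" below.
+ print(cfg.max_position_embeddings)
+ ```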

+ For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

+ ## Quickstart
+
+ The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
+
+ With `transformers<4.51.0`, you will encounter the following error:
+ ```
+ KeyError: 'qwen3'
+ ```

+ The following code snippet illustrates how to use the model to generate content based on given inputs.
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Qwen/Qwen3-32B"
+
+ # load the tokenizer and the model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+
+ # prepare the model input
+ prompt = "Give me a short introduction to large language model."
+ messages = [
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+     enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ # conduct text completion
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=32768
+ )
+ output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
+
+ # parse the thinking content
+ try:
+     # rindex finding 151668 (</think>)
+     index = len(output_ids) - output_ids[::-1].index(151668)
+ except ValueError:
+     index = 0
+
+ thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
+ content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
+
+ print("thinking content:", thinking_content)
+ print("content:", content)
+ ```
+
+ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
+ - SGLang:
+     ```shell
+     python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3
+     ```
+ - vLLM:
+     ```shell
+     vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1
+     ```
+
+ For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

  ## Switching Between Thinking and Non-Thinking Mode

+ > [!TIP]
+ > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
+ > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
+
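+ Not part of the original card: a hedged sketch of toggling the switch through such an OpenAI-compatible endpoint. It assumes a server started as in the deployment commands above and a build that accepts `chat_template_kwargs`; consult the linked docs if the field is rejected.
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+ resp = client.chat.completions.create(
+     model="Qwen/Qwen3-32B",
+     messages=[{"role": "user", "content": "How many r's are in strawberry?"}],
+     # Assumption: the server forwards chat template arguments passed this way.
+     extra_body={"chat_template_kwargs": {"enable_thinking": False}},
+ )
+ print(resp.choices[0].message.content)
+ ```
+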
+ ### `enable_thinking=True`
+
+ By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.

  ```python
  text = tokenizer.apply_chat_template(
      messages,
      tokenize=False,
      add_generation_prompt=True,
+     enable_thinking=True  # True is the default value for enable_thinking
  )
  ```

+ In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
+
+ > [!NOTE]
+ > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
+
  ### `enable_thinking=False`
+
+ We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
+
  ```python
  text = tokenizer.apply_chat_template(
      messages,
      tokenize=False,
      add_generation_prompt=True,
+     enable_thinking=False  # Setting enable_thinking=False disables thinking mode
  )
  ```

+ In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.

+ > [!NOTE]
+ > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

+ ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
+
+ We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
+
+ Here is an example of a multi-turn conversation:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ class QwenChatbot:
+     def __init__(self, model_name="Qwen/Qwen3-32B"):
+         self.tokenizer = AutoTokenizer.from_pretrained(model_name)
+         self.model = AutoModelForCausalLM.from_pretrained(model_name)
+         self.history = []
+
+     def generate_response(self, user_input):
+         messages = self.history + [{"role": "user", "content": user_input}]
+
+         text = self.tokenizer.apply_chat_template(
+             messages,
+             tokenize=False,
+             add_generation_prompt=True
+         )
+
+         inputs = self.tokenizer(text, return_tensors="pt")
+         response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
+         response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
+
+         # Update history
+         self.history.append({"role": "user", "content": user_input})
+         self.history.append({"role": "assistant", "content": response})
+
+         return response
+
+ # Example Usage
+ if __name__ == "__main__":
+     chatbot = QwenChatbot()
+
+     # First input (without /think or /no_think tags, thinking mode is enabled by default)
+     user_input_1 = "How many r's in strawberries?"
+     print(f"User: {user_input_1}")
+     response_1 = chatbot.generate_response(user_input_1)
+     print(f"Bot: {response_1}")
+     print("----------------------")
+
+     # Second input with /no_think
+     user_input_2 = "Then, how many r's in blueberries? /no_think"
+     print(f"User: {user_input_2}")
+     response_2 = chatbot.generate_response(user_input_2)
+     print(f"Bot: {response_2}")
+     print("----------------------")
+
+     # Third input with /think
+     user_input_3 = "Really? /think"
+     print(f"User: {user_input_3}")
+     response_3 = chatbot.generate_response(user_input_3)
+     print(f"Bot: {response_3}")
+ ```
+
+ > [!NOTE]
+ > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
+ > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
+
+ ## Agentic Use
+
+ Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
+
+ To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
+ ```python
+ from qwen_agent.agents import Assistant
+
+ # Define LLM
+ llm_cfg = {
+     'model': 'Qwen3-32B',
+
+     # Use the endpoint provided by Alibaba Model Studio:
+     # 'model_type': 'qwen_dashscope',
+     # 'api_key': os.getenv('DASHSCOPE_API_KEY'),
+
+     # Use a custom endpoint compatible with OpenAI API:
+     'model_server': 'http://localhost:8000/v1',  # api_base
+     'api_key': 'EMPTY',
+
+     # Other parameters:
+     # 'generate_cfg': {
+     #     # Add: when the response content is `<think>this is the thought</think>this is the answer`;
+     #     # Do not add: when the response has been separated into reasoning_content and content.
+     #     'thought_in_content': True,
+     # },
+ }
+
+ # Define Tools
+ tools = [
+     {'mcpServers': {  # You can specify the MCP configuration file
+         'time': {
+             'command': 'uvx',
+             'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
+         },
+         "fetch": {
+             "command": "uvx",
+             "args": ["mcp-server-fetch"]
+         }
+     }},
+     'code_interpreter',  # Built-in tools
+ ]
+
+ # Define Agent
+ bot = Assistant(llm=llm_cfg, function_list=tools)
+
+ # Streaming generation
+ messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
+ for responses in bot.run(messages=messages):
+     pass
+ print(responses)
+ ```
+
+ ## Processing Long Texts
+
+ Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.

+ YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks (a scripted variant of the first approach is sketched after this list):

+ - Modifying the model files:
+   In the `config.json` file, add the `rope_scaling` fields:
+     ```json
+     {
+         ...,
+         "rope_scaling": {
+             "rope_type": "yarn",
+             "factor": 4.0,
+             "original_max_position_embeddings": 32768
+         }
+     }
+     ```
+   For `llama.cpp`, you need to regenerate the GGUF file after the modification.

+ - Passing command line arguments:

+   For `vllm`, you can use
+     ```shell
+     vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+     ```

+   For `sglang`, you can use
+     ```shell
+     python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+     ```
+
+   For `llama-server` from `llama.cpp`, you can use
+     ```shell
+     llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
+     ```
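+
+ Not part of the original card: a small sketch of the file-modification approach, choosing `factor` from the target context length (`MODEL_DIR` is a hypothetical local copy of the checkpoint).
+
+ ```python
+ import json
+ from pathlib import Path
+
+ MODEL_DIR = Path("./Qwen3-32B-W8A8")  # hypothetical local checkpoint directory
+ TARGET_CONTEXT = 131072               # desired total context length in tokens
+
+ cfg_path = MODEL_DIR / "config.json"
+ cfg = json.loads(cfg_path.read_text())
+ # Per the note below, pick factor = target length / native 32,768 (e.g. 2.0 for 65,536).
+ cfg["rope_scaling"] = {
+     "rope_type": "yarn",
+     "factor": TARGET_CONTEXT / 32768,
+     "original_max_position_embeddings": 32768,
+ }
+ cfg_path.write_text(json.dumps(cfg, indent=2))
+ ```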
+
+ > [!IMPORTANT]
+ > If you encounter the following warning
+ > ```
+ > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
+ > ```
+ > please upgrade to `transformers>=4.51.0`.
+
+ > [!NOTE]
+ > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
+ > We advise adding the `rope_scaling` configuration only when processing long contexts is required.
+ > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
+
+ > [!NOTE]
+ > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
+
+ > [!TIP]
+ > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
+
+ ## Best Practices
+
+ To achieve optimal performance, we recommend the following settings (a sampling-and-history sketch follows this list):
+
+ 1. **Sampling Parameters**:
+    - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
+    - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
+    - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
+
+ 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
+
+ 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
+    - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
+    - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
+
+ 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
+
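+ Not part of the original card: a sketch combining points 1, 2, and 4 above, using the recommended thinking-mode sampling settings and keeping only the post-`</think>` answer in the conversation history (assumes `model` and `tokenizer` are loaded as in the Quickstart).
+
+ ```python
+ history = []
+
+ def chat(user_input):
+     messages = history + [{"role": "user", "content": user_input}]
+     text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+     inputs = tokenizer([text], return_tensors="pt").to(model.device)
+     out_ids = model.generate(
+         **inputs,
+         max_new_tokens=32768,  # point 2: generous output budget
+         do_sample=True,        # point 1: no greedy decoding in thinking mode
+         temperature=0.6,
+         top_p=0.95,
+         top_k=20,
+     )[0][len(inputs.input_ids[0]):].tolist()
+     try:
+         idx = len(out_ids) - out_ids[::-1].index(151668)  # 151668 = </think>
+     except ValueError:
+         idx = 0
+     answer = tokenizer.decode(out_ids[idx:], skip_special_tokens=True).strip("\n")
+     history.append({"role": "user", "content": user_input})
+     history.append({"role": "assistant", "content": answer})  # point 4: no thinking content in history
+     return answer
+ ```
+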
+ ### Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
  @misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},

      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
  }
+ ```