Fazzie yo37 committed on
Commit 866afe2 · verified · 1 Parent(s): 342a450

Update readme, chat_template (#4)


- Update readme, chat_template (b16d0400780ac4a6fd6a2be1e6e7e665c5aa0492)


Co-authored-by: yong <[email protected]>

Files changed (2)
  1. README.md +10 -8
  2. chat_template.jinja +171 -0
README.md CHANGED
@@ -28,7 +28,7 @@ tags:
  <img src="https://img.shields.io/badge/Seed-Project Page-yellow"></a>
  <a href="https://github.com/ByteDance-Seed/seed-oss">
  <img src="https://img.shields.io/badge/Seed-Tech Report Coming Soon-red"></a>
- <a href="https://huggingface.co/ByteDance-Seed">
+ <a href="https://huggingface.co/collections/ByteDance-Seed/seed-oss-68a609f4201e788db05b5dcd">
  <img src="https://img.shields.io/badge/Seed-Hugging Face-orange"></a>
  <br>
  <a href="./LICENSE">
@@ -312,12 +312,12 @@ Incorporating synthetic instruction data into pretraining leads to improved perf
  </tr>
  <tr>
  <td align="center">ArcAGI V2</td>
- <td align="center">50.3</td>
- <td align="center"><b>41.7</b></td>
- <td align="center">37.8</td>
- <td align="center">14.4</td>
+ <td align="center">1.16</td>
+ <td align="center"><b>1.74</b></td>
+ <td align="center">0.87</td>
+ <td align="center">0</td>
  <td align="center">-</td>
- <td align="center"><ins>40.6</ins></td>
+ <td align="center"><ins>1.45</ins></td>
  </tr>
  <tr>
  <td align="center">KORBench</td>
@@ -463,6 +463,8 @@ Incorporating synthetic instruction data into pretraining leads to improved perf
  </sup><br/><sup>
  - The results of Gemma3-27B are sourced directly from its technical report.
  </sup><br/><sup>
+ - The results of ArcAGI-V2 were measured on the official evaluation set, which was not involved in the training process.
+ </sup><br/><sup>
  - Generation configs for Seed-OSS-36B-Instruct: temperature=1.1, top_p=0.95. Specifically, for Taubench, temperature=1, top_p=0.7.
  </sup><br/><sup>
  </sup>
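For reference, these evaluation settings map directly onto standard `transformers` sampling arguments. Below is a minimal sketch of local generation with those settings; the repository id `ByteDance-Seed/Seed-OSS-36B-Instruct`, the prompt, and `max_new_tokens` are illustrative assumptions, not values taken from this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; adjust to the checkpoint you actually use.
model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings from the evaluation note above: temperature=1.1, top_p=0.95.
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=1.1,
    top_p=0.95,
    max_new_tokens=1024,  # illustrative value, not from the README
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```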
@@ -474,7 +476,7 @@ Incorporating synthetic instruction data into pretraining leads to improved perf
 
  Users can flexibly specify the model's thinking budget. The figure below shows the performance curves across different tasks as the thinking budget varies. For simpler tasks (such as IFEval), the model's chain of thought (CoT) is shorter, and the score exhibits fluctuations as the thinking budget increases. For more challenging tasks (such as AIME and LiveCodeBench), the model's CoT is longer, and the score improves with an increase in the thinking budget.
 
- ![thinking_budget](./thinking_budget.png)
+ ![thinking_budget](./figures/thinking_budget.png)
 
  Here is an example with a thinking budget set to 512: during the reasoning process, the model periodically triggers self-reflection to estimate the consumed and remaining budget, and delivers the final response once the budget is exhausted or the reasoning concludes.
  ```
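The thinking budget is consumed by the chat template added in this commit: `apply_chat_template` forwards extra keyword arguments to the template, which reads `thinking_budget` to emit the additional budget system message (and, for a budget of 0, a pre-filled empty think block). A minimal sketch, assuming the tokenizer repo ships this template and that `ByteDance-Seed/Seed-OSS-36B-Instruct` is the checkpoint id:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ByteDance-Seed/Seed-OSS-36B-Instruct")  # assumed repo id

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]

# thinking_budget is read by the Jinja template in this commit; 512 matches the README example,
# -1 (the template default) means unlimited thinking, and 0 disables thinking entirely.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    thinking_budget=512,
)
print(prompt)
```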
@@ -567,7 +569,7 @@ Use vllm >= 0.10.0 or higher for inference.
 
  - First install vLLM with Seed-OSS support version:
  ```shell
- VLLM_USE_PRECOMPILED=1 VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL=1 pip install git+ssh://git@github.com/FoolPlayer/vllm.git@seed-oss
+ VLLM_USE_PRECOMPILED=1 VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL=1 pip install git+https://github.com/vllm-project/vllm.git
  ```
 
  - Start vLLM API server:
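Once the vLLM API server is running it exposes an OpenAI-compatible endpoint, so it can be queried with the standard `openai` client. This is a sketch under assumptions: the served model name, the default port 8000, and the `chat_template_kwargs` pass-through should be checked against your vLLM version.

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server defaults to http://localhost:8000/v1.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ByteDance-Seed/Seed-OSS-36B-Instruct",  # assumed served model name
    messages=[{"role": "user", "content": "Summarize the rules of Go in three sentences."}],
    temperature=1.1,
    top_p=0.95,
    # Assumption: vLLM forwards chat_template_kwargs to the Jinja template from this commit.
    extra_body={"chat_template_kwargs": {"thinking_budget": 512}},
)
print(response.choices[0].message.content)
```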
chat_template.jinja ADDED
@@ -0,0 +1,171 @@
+ {# ----------‑‑‑ special token variables ‑‑‑---------- #}
+ {%- set bos_token = '<seed:bos>' -%}
+ {%- set eos_token = '<seed:eos>' -%}
+ {%- set pad_token = '<seed:pad>' -%}
+ {%- set toolcall_begin_token = '<seed:tool_call>' -%}
+ {%- set toolcall_end_token = '</seed:tool_call>' -%}
+ {%- set think_begin_token = '<seed:think>' -%}
+ {%- set think_end_token = '</seed:think>' -%}
+ {%- set budget_begin_token = '<seed:cot_budget_reflect>'-%}
+ {%- set budget_end_token = '</seed:cot_budget_reflect>'-%}
+ {# -------------- reflection-interval lookup -------------- #}
+ {%- if not thinking_budget is defined %}
+ {%- set thinking_budget = -1 -%}
+ {%- endif -%}
+ {%- set budget_reflections_v05 = {
+ 0: 0,
+ 512: 128,
+ 1024: 256,
+ 2048: 512,
+ 4096: 512,
+ 8192: 1024,
+ 16384: 1024
+ } -%}
+ {# Find the first gear that is greater than or equal to the thinking_budget. #}
+ {%- set ns = namespace(interval = None) -%}
+ {%- for k, v in budget_reflections_v05 | dictsort -%}
+ {%- if ns.interval is none and thinking_budget <= k -%}
+ {%- set ns.interval = v -%}
+ {%- endif -%}
+ {%- endfor -%}
+ {# If it exceeds the maximum gear, use the value of the last gear #}
+ {%- if ns.interval is none -%}
+ {%- set ns.interval = budget_reflections_v05[16384] -%}
+ {%- endif -%}
+ {# ---------- Preprocess the system message ---------- #}
+ {%- if messages[0]["role"] == "system" %}
+ {%- set system_message = messages[0]["content"] %}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set loop_messages = messages %}
+ {%- endif %}
+ {# ---------- Ensure tools exist ---------- #}
+ {%- if not tools is defined or tools is none %}
+ {%- set tools = [] %}
+ {%- endif %}
+ {# tools2doc.jinja #}
+ {%- macro py_type(t) -%}
+ {%- if t == "string" -%}str
+ {%- elif t in ("number", "integer") -%}int
+ {%- elif t == "boolean" -%}bool
+ {%- elif t == "array" -%}list
+ {%- else -%}Any{%- endif -%}
+ {%- endmacro -%}
+ {# ---------- Output the system block ---------- #}
+ {%- if system_message is defined %}
+ {{ bos_token + "system\n" + system_message }}
+ {%- else %}
+ {%- if tools is iterable and tools | length > 0 %}
+ {{ bos_token + "system\nYou are Doubao, a helpful AI assistant. You may call one or more functions to assist with the user query." }}
+ {%- endif %}
+ {%- endif %}
+ {%- if use_json_tooldef is defined and use_json_tooldef %}
+
+ {{"Tool List:\nYou are authorized to use the following tools (described in JSON Schema format). Before performing any task, you must decide how to call them based on the descriptions and parameters of these tools."}}
+ {{ tools | tojson(ensure_ascii=False) }}
+ {%- else %}
+ {%- for item in tools if item.type == "function" %}
+
+
+ Function:
+ def {{ item.function.name }}(
+ {%- for name, spec in item.function.parameters.properties.items() %}
+ {{- name }}: {{ py_type(spec.type) }}{% if not loop.last %},{% endif %}
+ {%- endfor %}):
+ """
+ {{ item.function.description | trim }}
+
+ {# ---------- Args ---------- #}
+ {%- if item.function.parameters.properties %}
+ Args:
+ {%- for name, spec in item.function.parameters.properties.items() %}
+
+ - {{ name }} ({{ py_type(spec.type) }})
+ {%- if name in item.function.parameters.required %} [必填]{% else %} [选填]{% endif %}:
+ {{- " " ~ (spec.description or "") }}
+ {%- endfor %}
+ {%- endif %}
+
+ {# ---------- Returns ---------- #}
+ {%- if item.function.returns is defined
+ and item.function.returns.properties is defined
+ and item.function.returns.properties %}
+ Returns:
+ {%- for name, spec in item.function.returns.properties.items() %}
+
+ - {{ name }} ({{ py_type(spec.type) }}):
+ {{- " " ~ (spec.description or "") }}
+ {%- endfor %}
+ {%- endif %}
+
+ """
+ {%- endfor %}
+ {%- endif %}
+ {%- if tools is iterable and tools | length > 0 %}
+
+ {{"工具调用请遵循如下格式:\n<seed:tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>value_1</parameter>\n<parameter=example_parameter_2>This is the value for the second parameter\nthat can span\nmultiple lines</parameter>\n</function>\n</seed:tool_call>\n"}}
+ {%- endif %}
+ {# End the system block line #}
+ {%- if system_message is defined or tools is iterable and tools | length > 0 %}
+ {{ eos_token }}
+ {%- endif %}
+ {# ---------- Thinking Budget ---------- #}
+ {%- if thinking_budget is defined %}
+ {%- if thinking_budget == 0 %}
+ {{ bos_token+"system" }}
+ {{ "You are an intelligent assistant that can answer questions in one step without the need for reasoning and thinking, that is, your thinking budget is 0. Next, please skip the thinking process and directly start answering the user's questions." }}
+ {{ eos_token }}
+ {%- elif not thinking_budget == -1 %}
+ {{ bos_token+"system" }}
+ {{ "You are an intelligent assistant with reflective ability. In the process of thinking and reasoning, you need to strictly follow the thinking budget, which is "}}{{thinking_budget}}{{". That is, you need to complete your thinking within "}}{{thinking_budget}}{{" tokens and start answering the user's questions. You will reflect on your thinking process every "}}{{ns.interval}}{{" tokens, stating how many tokens have been used and how many are left."}}
+ {{ eos_token }}
+ {%- endif %}
+ {%- endif %}
+ {# ---------- List the historical messages one by one ---------- #}
+ {%- for message in loop_messages %}
+ {%- if message.role == "assistant"
+ and message.tool_calls is defined
+ and message.tool_calls is iterable
+ and message.tool_calls | length > 0 %}
+ {{ bos_token + message.role }}
+ {%- if message.reasoning_content is defined and message.reasoning_content is string and message.reasoning_content | trim | length > 0 %}
+ {{ "\n" + think_begin_token + message.reasoning_content | trim + think_end_token }}
+ {%- endif %}
+ {%- if message.content is defined and message.content is string and message.content | trim | length > 0 %}
+ {{ "\n" + message.content | trim + "\n" }}
+ {%- endif %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if tool_call.function is defined %}{% set tool_call = tool_call.function %}{% endif %}
+ {{ "\n" + toolcall_begin_token + "\n<function=" + tool_call.name + ">\n" }}
+ {%- if tool_call.arguments is defined %}
+ {%- for arg_name, arg_value in tool_call.arguments | items %}
+ {{ "<parameter=" + arg_name + ">" }}
+ {%- set arg_value = arg_value if arg_value is string else arg_value | string %}
+ {{ arg_value+"</parameter>\n" }}
+ {%- endfor %}
+ {%- endif %}
+ {{ "</function>\n" + toolcall_end_token }}
+ {%- endfor %}
+ {{ eos_token }}
+ {%- elif message.role in ["user", "system"] %}
+ {{ bos_token + message.role + "\n" + message.content + eos_token }}
+ {%- elif message.role == "assistant" %}
+ {{ bos_token + message.role }}
+ {%- if message.reasoning_content is defined and message.reasoning_content is string and message.reasoning_content | trim | length > 0 %}
+ {{ "\n" + think_begin_token + message.reasoning_content | trim + think_end_token }}
+ {%- endif %}
+ {%- if message.content is defined and message.content is string and message.content | trim | length > 0 %}
+ {{ "\n" + message.content | trim + eos_token }}
+ {%- endif %}
+ {# Include the tool role #}
+ {%- else %}
+ {{ bos_token + message.role + "\n" + message.content + eos_token }}
+ {%- endif %}
+ {%- endfor %}
+ {# ---------- Control the model to start continuation ---------- #}
+ {%- if add_generation_prompt %}
+ {{ bos_token+"assistant\n" }}
+ {%- if thinking_budget == 0 %}
+ {{ think_begin_token + "\n" + budget_begin_token + "The current thinking budget is 0, so I will directly start answering the question." + budget_end_token + "\n" + think_end_token }}
+ {%- endif %}
+ {%- endif %}
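For readers tracing the reflection-interval lookup at the top of this template: the `dictsort` loop selects the interval of the first budget threshold that is greater than or equal to the requested `thinking_budget`, and budgets beyond 16384 reuse the last entry. The same selection logic in plain Python, as an illustrative sketch only (the template above is authoritative):

```python
# Mirror of the template's reflection-interval selection (budget_reflections_v05).
BUDGET_REFLECTIONS_V05 = {
    0: 0,
    512: 128,
    1024: 256,
    2048: 512,
    4096: 512,
    8192: 1024,
    16384: 1024,
}

def reflection_interval(thinking_budget: int) -> int:
    """Return how often (in tokens) the model is told to reflect on budget usage."""
    # The first threshold that is >= the requested budget wins, like the template's dictsort loop.
    for threshold, interval in sorted(BUDGET_REFLECTIONS_V05.items()):
        if thinking_budget <= threshold:
            return interval
    # Budgets above the largest threshold fall back to the last gear, as in the template.
    return BUDGET_REFLECTIONS_V05[16384]

assert reflection_interval(512) == 128
assert reflection_interval(1000) == 256    # falls into the 1024 gear
assert reflection_interval(50000) == 1024  # beyond the table, last gear
```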