Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


SeaLLM-7B-v2.5 - GGUF
- Model creator: https://huggingface.co/SeaLLMs/
- Original model: https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SeaLLM-7B-v2.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q2_K.gguf) | Q2_K | 3.24GB |
| [SeaLLM-7B-v2.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.IQ3_XS.gguf) | IQ3_XS | 3.54GB |
| [SeaLLM-7B-v2.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.IQ3_S.gguf) | IQ3_S | 3.71GB |
| [SeaLLM-7B-v2.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q3_K_S.gguf) | Q3_K_S | 3.71GB |
| [SeaLLM-7B-v2.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.IQ3_M.gguf) | IQ3_M | 3.82GB |
| [SeaLLM-7B-v2.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q3_K.gguf) | Q3_K | 4.07GB |
| [SeaLLM-7B-v2.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q3_K_M.gguf) | Q3_K_M | 4.07GB |
| [SeaLLM-7B-v2.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q3_K_L.gguf) | Q3_K_L | 4.39GB |
| [SeaLLM-7B-v2.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.IQ4_XS.gguf) | IQ4_XS | 4.48GB |
| [SeaLLM-7B-v2.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q4_0.gguf) | Q4_0 | 4.67GB |
| [SeaLLM-7B-v2.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.IQ4_NL.gguf) | IQ4_NL | 4.69GB |
| [SeaLLM-7B-v2.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q4_K_S.gguf) | Q4_K_S | 4.7GB |
| [SeaLLM-7B-v2.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q4_K.gguf) | Q4_K | 4.96GB |
| [SeaLLM-7B-v2.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [SeaLLM-7B-v2.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q4_1.gguf) | Q4_1 | 5.12GB |
| [SeaLLM-7B-v2.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q5_0.gguf) | Q5_0 | 5.57GB |
| [SeaLLM-7B-v2.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q5_K_S.gguf) | Q5_K_S | 5.57GB |
| [SeaLLM-7B-v2.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q5_K.gguf) | Q5_K | 5.72GB |
| [SeaLLM-7B-v2.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q5_K_M.gguf) | Q5_K_M | 5.72GB |
| [SeaLLM-7B-v2.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q5_1.gguf) | Q5_1 | 6.02GB |
| [SeaLLM-7B-v2.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf/blob/main/SeaLLM-7B-v2.5.Q6_K.gguf) | Q6_K | 6.53GB |
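
To try one of these files locally, here is a minimal sketch using the `huggingface_hub` and `llama-cpp-python` packages (both are assumptions; any GGUF runtime such as llama.cpp, Ollama, or LM Studio works just as well). The prompt follows the SeaLLM chat format described in the original model card below.

```python
# Hypothetical quick-start: download one quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf",
    filename="SeaLLM-7B-v2.5.Q4_K_M.gguf",  # pick any quant from the table above
)

llm = Llama(model_path=model_path, n_ctx=4096)  # llama.cpp typically prepends <bos> from the GGUF metadata
prompt = "<|im_start|>user\nHello world<eos>\n<|im_start|>assistant\n"
out = llm(prompt, max_tokens=128, repeat_penalty=1.0, stop=["<eos>"])  # repetition penalty 1, per the usage notice below
print(out["choices"][0]["text"])
```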


Original model description:
---
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
language:
- en
- zh
- vi
- id
- th
- ms
- km
- lo
- my
- tl
tags:
- multilingual
- sea
---

<p align="center">
<img src="seal_logo.png" width="200" />
</p>

# *SeaLLM-7B-v2.5* - Large Language Models for Southeast Asia


<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>

<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5" target="_blank" rel="noopener"> 🤗 Tech Memo</a>

<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5" target="_blank" rel="noopener"> 🤗 DEMO</a>

<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>

<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>

🔥<span style="color: #ff3860">[HOT]</span> SeaLLMs project now has a dedicated website - [damo-nlp-sg.github.io/SeaLLMs](https://damo-nlp-sg.github.io/SeaLLMs/)

We introduce [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it outperforms that model across diverse multilingual tasks, from world knowledge and math reasoning to instruction following.

### Highlights
* [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) outperforms GPT-3.5 and achieves 7B SOTA on most multilingual knowledge benchmarks for SEA languages (MMLU, M3Exam & VMLU).
* It achieves 79.0 on GSM8K and 34.9 on MATH, surpassing GPT-3.5 on MATH.

### Release and DEMO

- DEMO:
  - [SeaLLMs/SeaLLM-7B-v2.5](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5).
  - [SeaLLMs/SeaLLM-7B | SeaLMMM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B) - Experimental multimodal SeaLLM.
- Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
- Model weights:
  - [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5).
  - [SeaLLM-7B-v2.5-GGUF](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF).
- Run locally:
  - [LM-studio](https://lmstudio.ai/):
    - [SeaLLM-7B-v2.5-q4_0-chatml](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5-chatml.Q4_K_M.gguf) with ChatML template (`<eos>` token changed to `<|im_end|>`).
    - [SeaLLM-7B-v2.5-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5.Q4_K_M.gguf) - must use the SeaLLM-7B-v2.5 chat format.
  - [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized) (see the sketch after this list).
- Previous models:
  - [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2)
  - [SeaLLM-7B-v1](https://huggingface.co/SeaLLMs/SeaLLM-7B-v1)
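
For the MLX weights above, a minimal sketch with the `mlx-lm` package (an assumption; not part of the original card) could look like:

```python
# Runs the MLX-quantized weights on Apple Silicon via the mlx-lm package.
from mlx_lm import load, generate

model, tokenizer = load("SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized")
prompt = "<|im_start|>user\nHello world<eos>\n<|im_start|>assistant\n"
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```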

<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/blob/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>

> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.

> The logo was generated by DALL-E 3.

### What's new since SeaLLM-7B-v2?

* SeaLLM-7B-v2.5 was built on top of Gemma-7B and underwent large-scale SFT and carefully designed alignment.

## Evaluation


### Multilingual World Knowledge


We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi.

| Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e |
|-----| ----- | --- | -- | ----- | ---- | --- | --- | --- |
| GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41 |
| Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27 |
| Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25 |
| SailorLM | Multi | 52.72 | 59.76 | 67.74 | 50.14 | --- | 39.53 | 37.73 |
| SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52 |
| SeaLLM-7B-v2.5 | Multi | 64.05 | 76.87 | 62.54 | 63.11 | 53.30 | 48.64 | 46.86 |

### Zero-shot CoT Multilingual Math Reasoning

<!--
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves with **78.5** score on the GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models. It also outperforms GPT-3.5 in the same GSM8K benchmark as translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **28.4** vs 18.1 scores.


-->

| Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55.0 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1 |
| Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22.0 | 6.0 |
| Vistral-7b-chat | 48.2 | 12.5 | --- | --- | 48.7 | 3.1 | --- | --- | --- | --- |
| Qwen1.5-7B-chat | 56.8 | 15.3 | 40.0 | 2.7 | 37.7 | 9.0 | 36.9 | 7.7 | 21.9 | 4.7 |
| SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4 |
| SeaLLM-7B-v2.5 | 78.5 | 34.9 | 51.3 | 22.1 | 72.3 | 30.2 | 71.5 | 30.1 | 62.0 | 28.4 |

Baselines were evaluated using their respective chat templates and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)).

#### Zero-shot MGSM

[SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Thai.

| Model | MGSM-Zh | MGSM-Th |
|-----| ----- | --- |
| ChatGPT (reported) | 61.2 | 47.2 |
| Qwen-14B-chat | 59.6 | 28.0 |
| SeaLLM-7B-v2 | **64.8** | 62.4 |
| SeaLLM-7B-v2.5 | 58.0 | **64.8** |


### Sea-Bench

![fig_sea_bench_side_by_side.png](fig_sea_bench_side_by_side.png)

### Usage

**IMPORTANT NOTICE for using the model**

* `<bos>` must be at the start of the prompt. If your code's tokenizer does not prepend `<bos>` by default, you MUST prepend `<bos>` to the prompt yourself, otherwise the model will not work!
* The repetition penalty (e.g., in llama.cpp, Ollama, LM Studio) must be set to **1**, otherwise generation will degenerate!

#### Instruction format

```python
# ! WARNING: if your code's tokenizer does not prepend <bos> by default,
# you MUST prepend <bos> to the prompt yourself, otherwise it will not work!
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")

prompt = """<|im_start|>system
You are a helpful assistant.<eos>
<|im_start|>user
Hello world<eos>
<|im_start|>assistant
Hi there, how can I help?<eos>"""

# <|im_start|> is not a special token.
# Transformers chat_template should be consistent with the vLLM format below.

# ! ENSURE 1 and only 1 `<bos>` at the beginning of the sequence
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
```
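
If your tokenizer does not add `<bos>` automatically, a minimal guard looks like the sketch below (an illustration, not part of the original card; it assumes the `tokenizer` and `prompt` from the block above):

```python
# Prepend the <bos> id only when the tokenizer has not already done so.
ids = tokenizer.encode(prompt, add_special_tokens=False)
if ids[0] != tokenizer.bos_token_id:
    ids = [tokenizer.bos_token_id] + ids
assert ids.count(tokenizer.bos_token_id) == 1  # exactly one <bos>, at the start
```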

#### Using transformers's chat_template

Install the latest transformers (>4.40).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world"},
    {"role": "assistant", "content": "Hi there, how can I help you today?"},
    {"role": "user", "content": "Explain general relativity in details."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))

model_inputs = encodeds.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

#### Using vLLM

```python
from vllm import LLM, SamplingParams

TURN_TEMPLATE = "<|im_start|>{role}\n{content}<eos>\n"
TURN_PREFIX = "<|im_start|>{role}\n"

def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
    # conversations: list of dicts with keys `role` and `content` (OpenAI format)
    if conversations[0]['role'] != 'system' and system_prompt is not None:
        conversations = [{"role": "system", "content": system_prompt}] + conversations
    text = ''
    for turn in conversations:
        text += TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
    if add_assistant_prefix:
        text += TURN_PREFIX.format(role='assistant')
    return text

sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['<eos>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2.5", dtype="bfloat16")

message = "Explain general relativity in details."
prompt = seallm_chat_convo_format([{"role": "user", "content": message}], add_assistant_prefix=True)
gen = llm.generate(prompt, sparams)

print(gen[0].outputs[0].text)
```
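
For multi-turn chat, pass the whole history to the same formatter (a usage sketch, not from the original card; it reuses `llm`, `sparams`, and `seallm_chat_convo_format` from the block above):

```python
# Multi-turn conversation: format the full history, then decode the next assistant turn.
convo = [
    {"role": "user", "content": "Hello world"},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Explain general relativity in details."},
]
prompt = seallm_chat_convo_format(convo, add_assistant_prefix=True, system_prompt="You are a helpful assistant.")
gen = llm.generate(prompt, sparams)
print(gen[0].outputs[0].text)
```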

#### Fine-tuning SeaLLM-7B-v2.5

Fine-tuning data should follow the chat format above and accurately mask out the source tokens. Here is an example.

```python
conversations = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world."},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
    {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]

def seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False):
    """
    Inputs:
        conversations: list of dicts following the OpenAI format, e.g.
            conversations = [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Hello world."},
                {"role": "assistant", "content": "Hi there, how can I help?"},
                {"role": "user", "content": "Tell me a joke."},
                {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
            ]
        add_assistant_prefix: whether to add the assistant prefix, only for inference decoding
    Outputs:
        tokenize_output_sample: {
            "input_ids": ...,
            "token_type_ids": 1 if trained on, 0 if masked out (not trained on),
        }
    During training, you need to create labels, with masked-out tokens set to -100 to exclude them from the loss:
        labels = sample['input_ids'].clone()
        labels[sample['token_type_ids'] == 0] = -100
    """
    TURN_TEMPLATE = "<|im_start|>{role}\n{content}<eos>\n"
    TURN_PREFIX = "<|im_start|>{role}\n"
    TURN_SUFFIX = "<eos>\n"
    TURN_SUFFIX_TAKE = "<eos>"
    sample = None
    assistant_prefix_len = None
    assistant_suffix_len = None
    for turn in conversations:
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        turn_sample = tokenizer(
            prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        if turn['role'] == 'assistant':
            if assistant_prefix_len is None:
                assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False))
            if assistant_suffix_len is None:
                assistant_suffix_len = (
                    len(tokenizer.encode(TURN_SUFFIX, add_special_tokens=False)) -
                    len(tokenizer.encode(TURN_SUFFIX_TAKE, add_special_tokens=False))
                )
            # train on the assistant's content and its trailing <eos>; mask the role prefix and the final newline
            turn_sample['token_type_ids'][assistant_prefix_len:-assistant_suffix_len] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len - assistant_suffix_len)
        if sample is None:
            sample = turn_sample
        else:
            for k in turn_sample.keys():
                sample[k].extend(turn_sample[k])
    if add_assistant_prefix:
        assistant_prefix_sample = tokenizer(
            TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        for k in sample.keys():
            sample[k].extend(assistant_prefix_sample[k])
    if tokenizer.add_bos_token:
        sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids']
        sample['attention_mask'] = [1] + sample['attention_mask']
        sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids']
    return sample

# ! testing
sample = seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations)
tokens = tokenizer.convert_ids_to_tokens(sample['input_ids'])
pairs = [(x, y) for x, y in zip(tokens, sample['token_type_ids'])]
print(pairs)

# source and special tokens are masked out (token_type 0); only the assistant content with <eos> is trained (token_type 1)
# [('<bos>', 0), ('<', 0), ('|', 0), ..., ('assistant', 0), ('\n', 0), ('Hi', 1), ('▁there', 1), (',', 1), ('▁how', 1), ('▁can', 1), ('▁I', 1), ('▁help', 1), ('?', 1), ('<eos>', 1), ('\n', 0), ('<', 0), ...
```
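
To turn the output into training labels, the masking described in the docstring can be applied directly (a sketch assuming PyTorch and the `sample` produced above; not part of the original card):

```python
import torch

# token_type_ids == 0 marks source/special tokens; setting their labels to -100
# makes PyTorch's cross-entropy loss ignore those positions during training.
input_ids = torch.tensor(sample['input_ids'])
token_type_ids = torch.tensor(sample['token_type_ids'])
labels = input_ids.clone()
labels[token_type_ids == 0] = -100
```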


## Acknowledgement to Our Linguists

We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset, and who evaluated our models across different aspects, especially safety.

## Citation

If you find our project useful, we hope you will kindly star our repo and cite our work as follows. Corresponding author: [[email protected]](mailto:[email protected])

**Author list and order will change!**

* `*` and `^` are equal contributions.

```
@article{damonlpsg2023seallm,
  author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Weiwen Xu, Hou Pong Chan,
            Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
            Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
            Chaoqun Liu, Hang Zhang, Lidong Bing},
  title = {SeaLLMs - Large Language Models for Southeast Asia},
  year = 2023,
  Eprint = {arXiv:2312.00738},
}
```