---
language:
- th
- en
pipeline_tag: text-generation
license: llama3
---
**Llama-3-Typhoon-1.5X-70B-instruct: Thai Large Language Model (Instruct)**

**Llama-3-Typhoon-1.5X-70B-instruct** is a 70-billion-parameter instruct model designed for the Thai 🇹🇭 language. It demonstrates performance competitive with GPT-4-0612 and is optimized for **production** environments, **Retrieval-Augmented Generation (RAG)**, **constrained generation**, and **reasoning** tasks.

Built on Typhoon 1.5 70B (not yet released) and Llama 3 70B Instruct, this model is the result of our experiments on cross-lingual transfer. It uses the [task-arithmetic model editing](https://arxiv.org/abs/2212.04089) technique, combining the Thai understanding capability of Typhoon with the human-alignment performance of Llama 3 Instruct.

Remark: To acknowledge Meta's efforts in creating the foundation model and to comply with its license, we explicitly include "llama-3" in the model name.
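Task-arithmetic editing treats each fine-tuned model as a "task vector" (its parameter delta from a shared base) and adds scaled vectors back onto the base. A minimal sketch of the idea — the function name, choice of base, and mixing ratios here are illustrative assumptions, not our published recipe:

```python
def task_arithmetic_merge(base, thai, instruct, alpha=0.5, beta=0.5):
    """Merge per-parameter weights by adding scaled task vectors
    (deltas from the base model) back onto the base.

    `base`, `thai`, and `instruct` map parameter names to weights
    (plain floats here; real models use tensors of the same shape).
    `alpha`/`beta` are illustrative mixing ratios, not the actual recipe.
    """
    return {
        name: base[name]
        + alpha * (thai[name] - base[name])      # Thai-capability task vector
        + beta * (instruct[name] - base[name])   # alignment task vector
        for name in base
    }
```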
## **Model Description**

- **Model type**: A 70B instruct decoder-only model based on the Llama architecture
- **Requirement**: Transformers 4.38.0 or newer
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: [**Llama 3 Community License**](https://llama.meta.com/llama3/license/)
## **Performance**

We evaluated the model's performance in **language & knowledge capabilities** and **instruction-following capabilities**.

- **Language & Knowledge Capabilities**:
    - Assessed using multiple-choice question-answering datasets such as ThaiExam and MMLU.
- **Instruction-Following Capabilities**:
    - Evaluated based on our beta users' feedback, focusing on two factors:
        - **Human Alignment & Reasoning**: the ability to generate responses that are understandable and reasoned across multiple steps. Evaluated using [MT-Bench](https://arxiv.org/abs/2306.05685), which measures how well an LLM draws on embedded knowledge to answer in line with human needs.
        - **Instruction-following**: the ability to adhere to constraints specified in the instruction. Evaluated using [IFEval](https://arxiv.org/abs/2311.07911), which measures how well an LLM follows specified constraints, such as formatting and brevity.
- **Agentic Capabilities**:
    - Evaluated in agent use cases using [Hugging Face's agent implementation](https://huggingface.co/blog/agents) and the accompanying [benchmark](https://huggingface.co/blog/open-source-llms-as-agents).

Remark: We developed the Thai (TH) versions by translating the original datasets into Thai and conducting human verification on the translations.
### ThaiExam

| Model | ONET | IC | TGAT | TPAT-1 | A-Level | Average (ThaiExam) | MMLU |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Typhoon-1.5X 70B | 0.565 | 0.68 | 0.778 | 0.517 | 0.56 | 0.620 | 0.7945 |
| gpt-4-0612 | 0.493 | 0.69 | 0.744 | 0.509 | 0.616 | 0.610 | 0.864** |
| gpt-4o | 0.62 | 0.63 | 0.789 | 0.56 | 0.623 | 0.644 | 0.887** |
### MT-Bench

| Model | MT-Bench Thai | MT-Bench English |
| --- | --- | --- |
| Typhoon-1.5X 70B | 8.029 | 8.797 |
| gpt-4-0612 | 7.801 | 8.671 |
| gpt-4o | 8.514 | 9.184 |
### IFEval

| Model | IFEval Thai | IFEval English |
| --- | --- | --- |
| Typhoon-1.5X 70B | 0.645 | 0.810 |
| gpt-4-0612 | 0.612 | 0.793* |
| gpt-4o | 0.737 | 0.871 |

\* As reported in the IFEval paper.
### Agent

| Model | GAIA - Thai/English | GSM8K - Thai/English | HotpotQA - Thai/English |
| --- | --- | --- | --- |
| gpt-3.5-turbo-0125 | 18.42/37.5 | 70/80 | 39.56/59 |
| Typhoon-1.5X 70B | 17.10/36.25 | 80/95 | 52.7/65.83 |
| gpt-4-0612 | 17.10/38.75 | 90/100 | 56.41/76.25 |
| gpt-4o | 44.73/57.5 | 100/100 | 71.64/76.58 |
## Insight

We used a model editing technique and found that the most critical feature for generating Thai answers is located in the upper layers of the transformer block. Accordingly, we incorporated a high ratio of Typhoon weights in these upper layers.
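In practice, this means the per-layer merge ratio is depth-dependent. A hypothetical schedule illustrating the idea — the endpoint values are made up for illustration; Llama 3 70B has 80 transformer layers:

```python
def typhoon_ratio(layer_idx, n_layers=80, low=0.3, high=0.8):
    """Hypothetical depth-dependent mixing ratio for the Typhoon component:
    later (upper) layers, which matter most for generating Thai answers,
    receive a higher share of Typhoon weights. The 0.3/0.8 endpoints are
    illustrative, not the published recipe."""
    frac = layer_idx / (n_layers - 1)  # 0.0 at the first layer, 1.0 at the last
    return low + (high - low) * frac
```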
## **Usage Example**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "scb10x/llama-3-typhoon-v1.5x-70b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [...]  # add your messages here, e.g. [{"role": "user", "content": "..."}]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.4,
    top_p=0.95,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## **Chat Template**

We use the Llama 3 chat template.

```jinja
{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}
```
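For illustration, the template above is equivalent to the following pure-Python formatter, assuming the standard Llama 3 special tokens (in practice, use `tokenizer.apply_chat_template` as shown in the usage example):

```python
def render_llama3_chat(messages, bos_token="<|begin_of_text|>",
                       add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts the way the
    Jinja chat template above does."""
    out = ""
    for i, m in enumerate(messages):
        # Each turn: role header, blank line, trimmed content, end-of-turn token.
        chunk = (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                 f"{m['content'].strip()}<|eot_id|>")
        if i == 0:
            chunk = bos_token + chunk  # BOS only before the first message
        out += chunk
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out
```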
## **Intended Uses & Limitations**

This model is experimental and might not be fully evaluated for all use cases. Developers should assess risks in the context of their specific applications.
## **Follow us**

[**https://twitter.com/opentyphoon**](https://twitter.com/opentyphoon)

## **Support**

[**https://discord.gg/CqyBscMFpg**](https://discord.gg/CqyBscMFpg)
## **SCB 10X Typhoon Team**

- Kunat Pipatanakul, Potsawee Manakul, Sittipong Sripaisarnmongkol, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-1.5X useful for your work, please cite it using:

```
@article{pipatanakul2023typhoon,
    title={Typhoon: Thai Large Language Models},
    author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
    year={2023},
    journal={arXiv preprint arXiv:2312.13951},
    url={https://arxiv.org/abs/2312.13951}
}
```
## **Contact Us**

- General & Collaboration: [**[email protected]**](mailto:[email protected]), [**[email protected]**](mailto:[email protected])
- Technical: [**[email protected]**](mailto:[email protected])