Taka008 committed · verified
Commit 971e8f5 · 1 Parent(s): 5fd1cba

Update README.md

Files changed (1): README.md (+189 -3)

---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---

# llm-jp-3-7.2b-instruct

This repository provides large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).

| Model Variants |
| :--- |
| [llm-jp-3-1.8b](https://huggingface.co/llm-jp/llm-jp-3-1.8b) |
| [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) |
| [llm-jp-3-3.7b](https://huggingface.co/llm-jp/llm-jp-3-3.7b) |
| [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) |
| [llm-jp-3-7.2b](https://huggingface.co/llm-jp/llm-jp-3-7.2b) |
| [llm-jp-3-7.2b-instruct](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct) |
| [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) |
| [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) |
| [llm-jp-3-172b-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1) |
| [llm-jp-3-172b-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct) |
| [llm-jp-3-172b-beta2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2) |
| [llm-jp-3-172b-beta2-instruct2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2-instruct2) |

Checkpoint format: Hugging Face Transformers

## Required Libraries and Their Versions

- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
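
The snippet below is an optional sanity check, not part of the original card: it compares the installed packages against the minimum versions listed above. The metadata package names and the use of `packaging` (pulled in as a dependency of `transformers`) are our assumptions.

```python
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

# Minimum versions listed above.
REQUIREMENTS = {
    "torch": "2.3.0",
    "transformers": "4.40.1",
    "tokenizers": "0.19.1",
    "accelerate": "0.29.3",
    "flash_attn": "2.5.8",  # published on PyPI as "flash-attn"
}

for name, minimum in REQUIREMENTS.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        print(f"{name}: not installed (requires >= {minimum})")
        continue
    status = "OK" if Version(installed) >= Version(minimum) else f"needs >= {minimum}"
    print(f"{name}: {installed} ({status})")
```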

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
    # System prompt: "Below is an instruction that describes a task. Write a response that appropriately completes the request."
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    # User message: "What is natural language processing?"
    {"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```
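
For interactive use, it can be convenient to stream tokens as they are generated and to decode only the newly generated continuation rather than the full sequence. The following is a minimal sketch reusing the `model`, `tokenizer`, and `tokenized_input` objects from the example above; the `TextStreamer` usage and the prompt-length slicing are our additions, not part of the original card.

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
        streamer=streamer,
    )[0]

# Decode only the continuation, i.e. the tokens that follow the prompt.
response = tokenizer.decode(output[tokenized_input.shape[-1]:], skip_special_tokens=True)
```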

## Model Details

- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T

|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
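
As a quick consistency check of our own (the card does not state this), the "Embedding parameters" column matches 2 × vocab_size × hidden_size, i.e. an input embedding matrix plus an untied output head, and every row implies the same vocabulary size:

```python
# Implied vocabulary size per model, assuming embedding parameters = 2 * vocab * hidden
# (input embedding plus untied output head); this interpretation is our assumption.
rows = {
    "1.8b": (2048, 407_498_752),
    "3.7b": (3072, 611_248_128),
    "7.2b": (4096, 814_997_504),
    "13b": (5120, 1_018_746_880),
    "172b": (12288, 2_444_992_512),
}

for name, (hidden, emb_params) in rows.items():
    print(f"{name}: implied vocabulary size = {emb_params // (2 * hidden):,}")
# Every row yields the same value, 99,487 entries, consistent with a single shared tokenizer.
```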

## Tokenizer

The tokenizer of this model is based on the Unigram byte-fallback model of [huggingface/tokenizers](https://github.com/huggingface/tokenizers).
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (pure SentencePiece training does not reproduce our vocabulary).
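
To see how the Unigram byte-fallback tokenizer segments mixed Japanese and English input, a small inspection sketch (the sample string is our own, not from the card) might look like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct")

# Mixed Japanese / English text; characters outside the learned vocabulary
# are covered by byte-level fallback pieces.
text = "自然言語処理 (natural language processing) の研究 🤗"
ids = tokenizer(text).input_ids
print(len(ids), "tokens")
print(tokenizer.convert_ids_to_tokens(ids))
```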

## Datasets

### Pre-training

The models have been pre-trained using a blend of the following datasets.

| Language | Dataset | Tokens |
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B|
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B|
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B|
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B|
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B|
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B|
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B|
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B|
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B|
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B|
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B|
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B|
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B|
|Chinese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.8B|
|Korean|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.3B|
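
As a quick cross-check of our own (not part of the card), the per-dataset token counts above sum to roughly 2.07T tokens, which is consistent with the 2.1T "Total seen tokens" figure under Model Details once rounding is taken into account:

```python
# Sum the per-dataset token counts (in billions) from the pre-training table above.
tokens_billion = {
    "Japanese": [2.6, 762.8, 237.3, 2.7, 1.8],
    "English": [4.7, 608.5, 181.6, 83.1, 62.9, 5.5, 3.9],
    "Code": [114.1],
    "Chinese": [0.8],
    "Korean": [0.3],
}

total = sum(sum(counts) for counts in tokens_billion.values())
print(f"{total:.1f}B tokens in total")  # ~2072.6B, i.e. roughly 2.1T
```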

### Instruction tuning

The models have been fine-tuned on the following datasets.

| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[answer-carefully-002](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed instruction dataset focusing on LLM safety. |
| |ichikara-instruction-format| A small instruction dataset edited from ichikara-instruction, with constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft_additional-ja](https://huggingface.co/datasets/kanhatakeyama/wizardlm8x22b-logical-math-coding-sft_additional-ja)| A synthetic instruction dataset. |
| |[Synthetic-JP-EN-Coding-Dataset-567k](https://huggingface.co/datasets/Aratako/Synthetic-JP-EN-Coding-Dataset-567k)| A synthetic instruction dataset. We used a sampled subset. |
|English |[FLAN](https://huggingface.co/datasets/Open-Orca/FLAN) | We used a sampled subset. |

## Evaluation

### llm-jp-eval (v1.3.1)

We evaluated the models using 100 examples from the dev split.

| Model name | average | EL | FA | HE | MC | MR | MT | NLI | QA | RC |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| [llm-jp-3-1.8b](https://huggingface.co/llm-jp/llm-jp-3-1.8b) | 0.3767 | 0.3725 | 0.1948 | 0.2350 | 0.2500 | 0.0900 | 0.7730 | 0.3080 | 0.4629 | 0.7040 |
| [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) | 0.4596 | 0.4280 | 0.1987 | 0.3250 | 0.3300 | 0.4200 | 0.7900 | 0.3520 | 0.4698 | 0.8224 |
| [llm-jp-3-3.7b](https://huggingface.co/llm-jp/llm-jp-3-3.7b) | 0.4231 | 0.3812 | 0.2440 | 0.2200 | 0.1900 | 0.3600 | 0.7947 | 0.3800 | 0.4688 | 0.7694 |
| [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) | 0.5188 | 0.4191 | 0.2504 | 0.3400 | 0.5000 | 0.5800 | 0.8166 | 0.4500 | 0.4881 | 0.8247 |
| [llm-jp-3-7.2b](https://huggingface.co/llm-jp/llm-jp-3-7.2b) | - | - | - | - | - | - | - | - | - | - |
| [llm-jp-3-7.2b-instruct](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct) | 0.5888 | 0.4282 | 0.2659 | 0.4350 | 0.8900 | 0.5800 | 0.8250 | 0.4860 | 0.5565 | 0.8330 |
| [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) | 0.5802 | 0.5570 | 0.2593 | 0.4600 | 0.7000 | 0.6300 | 0.8292 | 0.3460 | 0.5937 | 0.8469 |
| [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) | 0.6168 | 0.5408 | 0.2757 | 0.4950 | 0.9200 | 0.7100 | 0.8317 | 0.4640 | 0.4642 | 0.8500 |

### Japanese MT Bench

We evaluated the models using `gpt-4-0613` as the judge. Please see the [code](https://github.com/llm-jp/llm-leaderboard/tree/main) for details.

| Model name | average | coding | extraction | humanities | math | reasoning | roleplay | stem | writing |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) | 4.93 | 1.50 | 4.70 | 7.80 | 1.55 | 2.60 | 7.80 | 6.10 | 7.40 |
| [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) | 5.50 | 1.95 | 4.05 | 8.25 | 2.25 | 4.00 | 8.80 | 7.25 | 7.45 |
| [llm-jp-3-7.2b-instruct](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct) | 5.70 | 2.95 | 5.60 | 7.95 | 2.80 | 3.90 | 8.40 | 6.15 | 7.85 |
| [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) | 6.47 | 3.15 | 7.05 | 9.15 | 3.75 | 5.40 | 8.30 | 7.50 | 7.45 |

## Risks and Limitations

The models released here are in the early stages of our research and development and have not been tuned to ensure that their outputs align with human intent and safety considerations.

## Send Questions to

llm-jp(at)nii.ac.jp

## License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Model Card Authors

*The names are listed in alphabetical order.*

Hirokazu Kiyomaru and Takashi Kodama.