GuoPD committed
Commit c160d27 · 1 Parent(s): 5bcc98c

modify: update model name

Files changed (1)
  1. README.md +31 -31
README.md CHANGED
@@ -5,49 +5,49 @@ language:
5
  pipeline_tag: text-generation
6
  inference: false
7
  ---
8
- # baichuan-7B
9
 
10
  <!-- Provide a quick summary of what the model is/does. -->
11
 
12
- baichuan-7B是由百川智能开发的一个开源的大规模预训练模型。基于Transformer结构,在大约1.2万亿tokens上训练的70亿参数模型,支持中英双语,上下文窗口长度为4096。在标准的中文和英文权威benchmark(C-EVAL/MMLU)上均取得同尺寸最好的效果。
13
 
14
- 如果希望使用baichuan-7B(如进行推理、Finetune等),我们推荐使用配套代码库[baichuan-7B](https://github.com/baichuan-inc/baichuan-7B)。
16
 
17
- baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU).
18
 
19
- If you wish to use baichuan-7B (for inference, finetuning, etc.), we recommend using the accompanying code library [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B).
20
 
21
- ## Why use baichuan-7B
22
 
23
- - 在同尺寸模型中baichuan-7B达到了目前SOTA的水平,参考下面MMLU指标
24
- - baichuan-7B使用自有的中英文双语语料进行训练,在中文上进行优化,在C-Eval达到SOTA水平
25
- - 不同于LLaMA完全禁止商业使用,baichuan-7B使用更宽松的开源协议,允许用于商业目的
26
 
27
- - Among models of the same size, baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the following MMLU metrics.
28
- - baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
29
- - Unlike LLaMA, which prohibits commercial use entirely, baichuan-7B is released under a more permissive open-source license that allows commercial use.
30
 
31
  ## How to Get Started with the Model
32
 
33
- The following is a 1-shot inference task using baichuan-7B: given a work, the model outputs its author. The correct output is "夜雨寄北->李商隐".
34
  ```python
35
  from transformers import AutoModelForCausalLM, AutoTokenizer
36
 
37
- tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
38
- model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", device_map="auto", trust_remote_code=True)
39
  inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
40
  inputs = inputs.to('cuda:0')
41
  pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
42
  print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
43
  ```
44
 
45
- The following is a 1-shot inference task using baichuan-7B: given a literary work, the model outputs its author. The correct output is "One Hundred Years of Solitude->Gabriel Garcia Marquez".
46
  ```python
47
  from transformers import AutoModelForCausalLM, AutoTokenizer
48
 
49
- tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
50
- model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", device_map="auto", trust_remote_code=True)
51
  inputs = tokenizer('Hamlet->Shakespeare\nOne Hundred Years of Solitude->', return_tensors='pt')
52
  inputs = inputs.to('cuda:0')
53
  pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
@@ -63,7 +63,7 @@ print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
63
  - **Developed by:** 百川智能(Baichuan Intelligent Technology)
64
  - **Email**: [email protected]
65
  - **Language(s) (NLP):** Chinese/English
66
- - **License:** [baichuan-7B License](https://huggingface.co/baichuan-inc/baichuan-7B/blob/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
67
 
68
  ### Model Sources
69
 
@@ -107,9 +107,9 @@ The specific parameters are as follows:
107
  ### Downstream Use
108
 
109
  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
110
- 我们同时开源出了和本模型配套的训练代码,允许进行高效的Finetune用于下游任务,具体参见[baichuan-7B](https://github.com/baichuan-inc/baichuan-7B)。
111
 
112
- We have also open-sourced the training code that accompanies this model, allowing for efficient finetuning for downstream tasks. For more details, please refer to [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B).
113
 
114
  ### Out-of-Scope Use
115
 
@@ -122,15 +122,15 @@ Production use without adequate assessment of risks and mitigation; any use case
122
 
123
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
124
 
125
- baichuan-7B可能会产生事实上不正确的输出,不应依赖它产生事实上准确的信息。baichuan-7B是在各种公共数据集上进行训练的。尽管我们已经做出了巨大的努力来清洗预训练数据,但这个模型可能会生成淫秽、偏见或其他冒犯性的输出。
126
 
127
- baichuan-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. baichuan-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
128
 
129
  ## Training Details
130
 
131
- 训练具体设置参见[baichuan-7B](https://github.com/baichuan-inc/baichuan-7B)。
132
 
133
- For specific training settings, please refer to [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B).
134
 
135
  ## Evaluation
136
 
@@ -155,7 +155,7 @@ For specific training settings, please refer to [baichuan-7B](https://github.com
155
  | Aquila-7B<sup>*</sup> | 25.5 | 25.2 | 25.6 | 24.6 | 25.2 | 26.6 |
156
  | BLOOM-7B | 22.8 | 20.2 | 21.8 | 23.3 | 23.9 | 23.3 |
157
  | BLOOMZ-7B | 35.7 | 25.8 | 31.3 | 43.5 | 36.6 | 35.6 |
158
- | **baichuan-7B** | 42.8 | 31.5 | 38.2 | 52.0 | 46.2 | 39.3 |
159
 
160
 
161
  #### Gaokao
@@ -175,7 +175,7 @@ For specific training settings, please refer to [baichuan-7B](https://github.com
175
  | BLOOM-7B | 26.96 |
176
  | BLOOMZ-7B | 28.72 |
177
  | Aquila-7B<sup>*</sup> | 24.39 |
178
- | **baichuan-7B** | **36.24** |
179
 
180
 
181
  #### AGIEval
@@ -193,7 +193,7 @@ For specific training settings, please refer to [baichuan-7B](https://github.com
193
  | BLOOM-7B | 26.55 |
194
  | BLOOMZ-7B | 30.27 |
195
  | Aquila-7B<sup>*</sup> | 25.58 |
196
- | **baichuan-7B** | **34.44** |
197
 
198
  <sup>*</sup>The Aquila results are taken from the [BAAI official website](https://model.baai.ac.cn/model-detail/100098) and are provided for reference only.
199
 
@@ -216,7 +216,7 @@ We adopted the [open-source]((https://github.com/hendrycks/test)) evaluation sch
216
  | BLOOMZ 7B<sup>0</sup> | 31.3 | 42.1 | 34.4 | 39.0 | 36.1 |
217
  | moss-moon-003-base (16B)<sup>0</sup> | 24.2 | 22.8 | 22.4 | 24.4 | 23.6 |
218
  | moss-moon-003-sft (16B)<sup>0</sup> | 30.5 | 33.8 | 29.3 | 34.4 | 31.9 |
219
- | **baichuan-7B<sup>0</sup>** | 38.4 | 48.9 | 35.6 | 48.1 | 42.3 |
220
 
221
  The superscript in the Model column indicates the source of the results.
222
  ```
@@ -226,4 +226,4 @@ The superscript in the Model column indicates the source of the results.
226
  ```
227
 
228
  ## Our Group
229
- [WeChat](https://github.com/baichuan-inc/baichuan-7B/blob/main/media/wechat.jpeg?raw=true)
 
5
  pipeline_tag: text-generation
6
  inference: false
7
  ---
8
+ # Baichuan-7B
9
 
10
  <!-- Provide a quick summary of what the model is/does. -->
11
 
12
+ Baichuan-7B是由百川智能开发的一个开源的大规模预训练模型。基于Transformer结构,在大约1.2万亿tokens上训练的70亿参数模型,支持中英双语,上下文窗口长度为4096。在标准的中文和英文权威benchmark(C-EVAL/MMLU)上均取得同尺寸最好的效果。
13
 
14
+ 如果希望使用Baichuan-7B(如进行推理、Finetune等),我们推荐使用配套代码库[Baichuan-7B](https://github.com/Baichuan-inc/Baichuan-7B)。
16
 
17
+ Baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU).
18
 
19
+ If you wish to use Baichuan-7B (for inference, finetuning, etc.), we recommend using the accompanying code library [Baichuan-7B](https://github.com/Baichuan-inc/Baichuan-7B).
20
 
21
+ ## Why use Baichuan-7B
22
 
23
+ - 在同尺寸模型中Baichuan-7B达到了目前SOTA的水平,参考下面MMLU指标
24
+ - Baichuan-7B使用自有的中英文双语语料进行训练,在中文上进行优化,在C-Eval达到SOTA水平
25
+ - 不同于LLaMA完全禁止商业使用,Baichuan-7B使用更宽松的开源协议,允许用于商业目的
26
 
27
+ - Among models of the same size, Baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the following MMLU metrics.
28
+ - Baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
29
+ - Unlike LLaMA, which prohibits commercial use entirely, Baichuan-7B is released under a more permissive open-source license that allows commercial use.
30
 
31
  ## How to Get Started with the Model
32
 
33
+ The following is a 1-shot inference task using Baichuan-7B: given a work, the model outputs its author. The correct output is "夜雨寄北->李商隐".
34
  ```python
35
  from transformers import AutoModelForCausalLM, AutoTokenizer
36
 
37
+ tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
38
+ model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)
39
  inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
40
  inputs = inputs.to('cuda:0')
41
  pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
42
  print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
43
  ```
44
 
45
+ The following is a 1-shot inference task using Baichuan-7B: given a literary work, the model outputs its author. The correct output is "One Hundred Years of Solitude->Gabriel Garcia Marquez".
46
  ```python
47
  from transformers import AutoModelForCausalLM, AutoTokenizer
48
 
49
+ tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
50
+ model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)
51
  inputs = tokenizer('Hamlet->Shakespeare\nOne Hundred Years of Solitude->', return_tensors='pt')
52
  inputs = inputs.to('cuda:0')
53
  pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
 
63
  - **Developed by:** 百川智能(Baichuan Intelligent Technology)
64
  - **Email**: [email protected]
65
  - **Language(s) (NLP):** Chinese/English
66
+ - **License:** [Baichuan-7B License](https://huggingface.co/baichuan-inc/Baichuan-7B/blob/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
67
 
68
  ### Model Sources
69
 
 
107
  ### Downstream Use
108
 
109
  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
110
+ 我们同时开源出了和本模型配套的训练代码,允许进行高效的Finetune用于下游任务,具体参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
111
 
112
+ We have also open-sourced the training code that accompanies this model, allowing for efficient finetuning for downstream tasks. For more details, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
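  As a rough illustration of what downstream fine-tuning with the generic `transformers` Trainer could look like, here is a minimal sketch. It is not the repository's training script; the hyperparameters, the toy two-line dataset, and the output directory are made-up placeholders, and the pad-token assignment is an assumption in case the tokenizer defines none.

  ```python
  # Minimal supervised fine-tuning sketch (illustrative only; see the Baichuan-7B
  # GitHub repository above for the actual, efficient training setup).
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling, Trainer, TrainingArguments)

  model_name = "baichuan-inc/Baichuan-7B"
  tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
  if tokenizer.pad_token is None:  # assumption: fall back to EOS if no pad token is defined
      tokenizer.pad_token = tokenizer.eos_token
  model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

  # Toy corpus; replace with your own downstream-task data.
  texts = ["登鹳雀楼->王之涣", "夜雨寄北->李商隐"]
  train_dataset = [tokenizer(t, truncation=True, max_length=512) for t in texts]

  args = TrainingArguments(
      output_dir="baichuan-7b-finetuned",  # hypothetical output directory
      per_device_train_batch_size=1,
      gradient_accumulation_steps=8,
      num_train_epochs=1,
      learning_rate=2e-5,
      logging_steps=10,
  )

  trainer = Trainer(
      model=model,
      args=args,
      train_dataset=train_dataset,
      # Causal-LM collator: pads each batch and copies input_ids into labels.
      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
  )
  trainer.train()
  ```

  In practice, full-parameter fine-tuning of a 7B model requires multi-GPU or memory-saving setups (for example DeepSpeed or parameter-efficient methods); the linked repository is the reference for the recommended configuration.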
113
 
114
  ### Out-of-Scope Use
115
 
 
122
 
123
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
124
 
125
+ Baichuan-7B可能会产生事实上不正确的输出,不应依赖它产生事实上准确的信息。Baichuan-7B是在各种公共数据集上进行训练的。尽管我们已经做出了巨大的努力来清洗预训练数据,但这个模型可能会生成淫秽、偏见或其他冒犯性的输出。
126
 
127
+ Baichuan-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. Baichuan-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
128
 
129
  ## Training Details
130
 
131
+ 训练具体设置参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
132
 
133
+ For specific training settings, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
134
 
135
  ## Evaluation
136
 
 
155
  | Aquila-7B<sup>*</sup> | 25.5 | 25.2 | 25.6 | 24.6 | 25.2 | 26.6 |
156
  | BLOOM-7B | 22.8 | 20.2 | 21.8 | 23.3 | 23.9 | 23.3 |
157
  | BLOOMZ-7B | 35.7 | 25.8 | 31.3 | 43.5 | 36.6 | 35.6 |
158
+ | **Baichuan-7B** | 42.8 | 31.5 | 38.2 | 52.0 | 46.2 | 39.3 |
159
 
160
 
161
  #### Gaokao
 
175
  | BLOOM-7B | 26.96 |
176
  | BLOOMZ-7B | 28.72 |
177
  | Aquila-7B<sup>*</sup> | 24.39 |
178
+ | **Baichuan-7B** | **36.24** |
179
 
180
 
181
  #### AGIEval
 
193
  | BLOOM-7B | 26.55 |
194
  | BLOOMZ-7B | 30.27 |
195
  | Aquila-7B<sup>*</sup> | 25.58 |
196
+ | **Baichuan-7B** | **34.44** |
197
 
198
  <sup>*</sup>The Aquila results are taken from the [BAAI official website](https://model.baai.ac.cn/model-detail/100098) and are provided for reference only.
199
 
 
216
  | BLOOMZ 7B<sup>0</sup> | 31.3 | 42.1 | 34.4 | 39.0 | 36.1 |
217
  | moss-moon-003-base (16B)<sup>0</sup> | 24.2 | 22.8 | 22.4 | 24.4 | 23.6 |
218
  | moss-moon-003-sft (16B)<sup>0</sup> | 30.5 | 33.8 | 29.3 | 34.4 | 31.9 |
219
+ | **Baichuan-7B<sup>0</sup>** | 38.4 | 48.9 | 35.6 | 48.1 | 42.3 |
220
 
221
  The superscript in the Model column indicates the source of the results.
222
  ```
 
226
  ```
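  As context for how MMLU-style results such as those above are commonly obtained, the sketch below scores a single multiple-choice question by comparing the model's next-token logits for the option letters. It is only an illustration under assumed prompt formatting and a made-up question; the numbers reported here come from the open-source hendrycks/test evaluation scheme referenced in this section, not from this snippet.

  ```python
  # Hedged sketch: pick the answer letter whose next-token logit is highest.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
  model = AutoModelForCausalLM.from_pretrained(
      "baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)

  # Made-up question and simplified zero-shot prompt (real harnesses build 5-shot prompts).
  prompt = (
      "The following is a multiple choice question.\n"
      "Question: Which planet is known as the Red Planet?\n"
      "A. Venus\nB. Mars\nC. Jupiter\nD. Mercury\n"
      "Answer:"
  )
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  with torch.no_grad():
      next_token_logits = model(**inputs).logits[0, -1]  # logits over the vocabulary

  # Token id of each option letter (assumes each letter maps to a single trailing token).
  option_ids = [tokenizer(f" {c}", add_special_tokens=False).input_ids[-1] for c in "ABCD"]
  scores = torch.stack([next_token_logits[i] for i in option_ids])
  print("ABCD"[int(scores.argmax())])  # prints the model's chosen option letter
  ```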
227
 
228
  ## Our Group
229
+ [WeChat](https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true)