Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# Wenzhong-GPT2-110M - GGUF

- Model creator: https://huggingface.co/IDEA-CCNL/
- Original model: https://huggingface.co/IDEA-CCNL/Wenzhong-GPT2-110M/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Wenzhong-GPT2-110M.Q2_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q2_K.gguf) | Q2_K | 0.08GB |
| [Wenzhong-GPT2-110M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [Wenzhong-GPT2-110M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [Wenzhong-GPT2-110M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [Wenzhong-GPT2-110M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [Wenzhong-GPT2-110M.Q3_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q3_K.gguf) | Q3_K | 0.09GB |
| [Wenzhong-GPT2-110M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [Wenzhong-GPT2-110M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [Wenzhong-GPT2-110M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [Wenzhong-GPT2-110M.Q4_0.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_0.gguf) | Q4_0 | 0.1GB |
| [Wenzhong-GPT2-110M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [Wenzhong-GPT2-110M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [Wenzhong-GPT2-110M.Q4_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_K.gguf) | Q4_K | 0.11GB |
| [Wenzhong-GPT2-110M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [Wenzhong-GPT2-110M.Q4_1.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_1.gguf) | Q4_1 | 0.11GB |
| [Wenzhong-GPT2-110M.Q5_0.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_0.gguf) | Q5_0 | 0.11GB |
| [Wenzhong-GPT2-110M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [Wenzhong-GPT2-110M.Q5_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_K.gguf) | Q5_K | 0.12GB |
| [Wenzhong-GPT2-110M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [Wenzhong-GPT2-110M.Q5_1.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_1.gguf) | Q5_1 | 0.12GB |
| [Wenzhong-GPT2-110M.Q6_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q6_K.gguf) | Q6_K | 0.13GB |
| [Wenzhong-GPT2-110M.Q8_0.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q8_0.gguf) | Q8_0 | 0.17GB |
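
To run one of these files locally, any GGUF-compatible runtime should work. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (both assumed installed; the Q4_K_M file is an arbitrary choice), not an official recommendation:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quantized file from this repo rather than cloning everything.
model_path = hf_hub_download(
    repo_id="RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf",
    filename="Wenzhong-GPT2-110M.Q4_K_M.gguf",
)

# Load the GGUF with the llama.cpp bindings and sample a short continuation.
llm = Llama(model_path=model_path)
result = llm("北京是中国的", max_tokens=64)
print(result["choices"][0]["text"])
```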

Original model description:

---
language:
- zh

inference:
  parameters:
    temperature: 0.7
    top_p: 0.6
    repetition_penalty: 1.1
    max_new_tokens: 128
    num_return_sequences: 3
    do_sample: true

license: apache-2.0
tags:
- generate
- gpt2

widget:
- 北京是中国的
- 西湖的景色

---

# Wenzhong-GPT2-110M

- Main Page: [Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)

## 简介 Brief Introduction

善于处理NLG任务,中文版的GPT2-Small。

A Chinese version of GPT2-Small, good at handling NLG tasks.

## 模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言生成 NLG | 闻仲 Wenzhong | GPT2 | 110M | 中文 Chinese |

## 模型信息 Model Information

类似于Wenzhong2.0-GPT2-3.5B-chinese,我们实现了一个small版本的12层的Wenzhong-GPT2-110M,并且在悟道(300G版本)上面进行预训练。

Similar to Wenzhong2.0-GPT2-3.5B-chinese, we implemented a small, 12-layer version, Wenzhong-GPT2-110M, and pre-trained it on the Wudao Corpus (300G version).
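
As a quick sanity check of the architecture described above, the layer count can be read off the model config; this snippet is illustrative and not part of the original card:

```python
from transformers import GPT2Config

# Only config.json is fetched here; no model weights are downloaded.
config = GPT2Config.from_pretrained('IDEA-CCNL/Wenzhong-GPT2-110M')
print(config.n_layer)  # should print 12 for this small variant
```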

## 使用 Usage

### 加载模型 Loading Models

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load the tokenizer and the LM-head model from the Hugging Face Hub.
hf_model_path = 'IDEA-CCNL/Wenzhong-GPT2-110M'
tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path)
model = GPT2LMHeadModel.from_pretrained(hf_model_path)
```

### 使用示例 Usage Examples

```python
# Sample 5 continuations of the prompt, stopping at GPT-2's
# end-of-text token (id 50256, '<|endoftext|>').
question = "北京是中国的"
inputs = tokenizer(question, return_tensors='pt')
generation_output = model.generate(**inputs,
                                   return_dict_in_generate=True,
                                   output_scores=True,
                                   max_length=150,
                                   # max_new_tokens=80,
                                   do_sample=True,
                                   top_p=0.6,
                                   # num_beams=5,
                                   eos_token_id=50256,
                                   pad_token_id=0,
                                   num_return_sequences=5)

# Decode each sampled sequence and drop everything after the end-of-text marker.
for idx, sentence in enumerate(generation_output.sequences):
    print('next sentence %d:\n' % idx,
          tokenizer.decode(sentence).split('<|endoftext|>')[0])
    print('*' * 40)
```
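
The `inference.parameters` block in the card metadata above maps directly onto `transformers` generation arguments. As a minimal sketch (not part of the original card), the same settings can be reproduced with a text-generation pipeline:

```python
from transformers import pipeline

# Mirror the card's widget settings: temperature 0.7, top_p 0.6,
# repetition penalty 1.1, up to 128 new tokens, 3 sampled candidates.
generator = pipeline('text-generation', model='IDEA-CCNL/Wenzhong-GPT2-110M')
outputs = generator(
    "西湖的景色",
    do_sample=True,
    temperature=0.7,
    top_p=0.6,
    repetition_penalty=1.1,
    max_new_tokens=128,
    num_return_sequences=3,
)
for out in outputs:
    print(out['generated_text'])
```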

## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):

If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):

```text
@article{fengshenbang,
  author  = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
  title   = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal = {CoRR},
  volume  = {abs/2209.02970},
  year    = {2022}
}
```

也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```