Xinnian Liang committed · c89deca
Parent: 4e8da40
Update README.md

README.md CHANGED
@@ -8,4 +8,23 @@ tags:
 - Chinese Pre-trained Language Model
 ---
 
-
+Please use the 'XLMRoberta'-related classes to load this model!
+
+# MigBERT | Chinese Mixed-Granularity Pre-trained Model
+[Character, Word, or Both? Revisiting the Segmentation Granularity for Chinese Pre-trained Language Models](https://arxiv.org/abs/2303.10893)
+
+
+# Citation
+If you find our resource or paper useful, please consider including the following citation in your paper.
+
+```
+@misc{liang2023character,
+  title={Character, Word, or Both? Revisiting the Segmentation Granularity for Chinese Pre-trained Language Models},
+  author={Xinnian Liang and Zefan Zhou and Hui Huang and Shuangzhi Wu and Tong Xiao and Muyun Yang and Zhoujun Li and Chao Bian},
+  year={2023},
+  eprint={2303.10893},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
+```
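The README's note says to load this checkpoint with the `XLMRoberta` classes from Hugging Face `transformers`. A minimal sketch of what that looks like; the repo id passed in is a placeholder for this model's actual Hub id or a local path, and the import is done lazily so the sketch stands on its own:

```python
def load_migbert(repo_id: str):
    """Load the tokenizer and encoder via the XLMRoberta architecture.

    `repo_id` is a placeholder -- substitute this model's Hugging Face
    Hub id or a local checkpoint directory. The import is lazy so the
    function can be defined without `transformers` installed.
    """
    from transformers import XLMRobertaModel, XLMRobertaTokenizerFast

    tokenizer = XLMRobertaTokenizerFast.from_pretrained(repo_id)
    model = XLMRobertaModel.from_pretrained(repo_id)
    return tokenizer, model

# Usage (downloads weights, so not executed here):
# tokenizer, model = load_migbert("<this-model's-repo-id>")
# inputs = tokenizer("混合粒度预训练", return_tensors="pt")
# outputs = model(**inputs)
```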