Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py).

[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. To make it easy for users to reproduce the results, we used a publicly available corpus and word segmentation tool, and provide all training details.

-You can download the 6 Chinese RoBERTa miniatures either from the [UER-py
+You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:

|          |           Link            |
| -------- | :-----------------------: |
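For illustration, here is a minimal sketch of using one of these miniatures from HuggingFace with the `transformers` fill-mask pipeline. The model ID `uer/roberta-base-wwm-chinese` is a hypothetical placeholder, not a confirmed repository name; substitute the actual ID from the links in the table above.

```python
# Minimal sketch: fill-mask inference with one of the Chinese WWM RoBERTa
# miniatures via the HuggingFace transformers library.
from transformers import pipeline

# NOTE: hypothetical model ID -- replace it with a real repository ID
# from the Modelzoo page or the table above.
unmasker = pipeline("fill-mask", model="uer/roberta-base-wwm-chinese")

# Predict the masked token; a Chinese MLM should rank 中 ("China") highly here.
print(unmasker("北京是[MASK]国的首都。"))
```

Whole word masking only affects pre-training; at inference time the models are queried like any other BERT-style masked language model.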