Update README.md

---
tags:
- bert
license: cc-by-4.0
---

## bert-ascii-medium

`bert-ascii-medium` is a medium-sized BERT language model pre-trained by predicting the summation of the **ASCII** code values of the characters in a masked token as the pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://arxiv.org/abs/2203.10415).

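To make the objective concrete, here is a minimal sketch (an illustration only, not the authors' pre-training code) of the target value for a masked token, i.e. the summation of the ASCII code values of its characters:

```python
# Minimal illustration of the pre-training target described above:
# the summation of the ASCII code values of the characters in a
# (masked) token. Not the authors' implementation.

def ascii_sum(token: str) -> int:
    """Return the sum of the ASCII code values of the token's characters."""
    # ord() gives the Unicode code point, which coincides with the
    # ASCII value for ASCII characters.
    return sum(ord(ch) for ch in token)


print(ascii_sum("language"))  # 836
```
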
## License

CC BY 4.0

## Citation

If you use this model, please cite the following paper:

```bibtex
@inproceedings{alajrami2022does,
  title={How does the pre-training objective affect what large language models learn about linguistic properties?},
  author={Alajrami, Ahmed and Aletras, Nikolaos},
  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
  pages={131--147},
  year={2022}
}
```