---
language:
- he
tags:
- language model
---
Checkpoint of alephbertgimmel-base-512 from https://github.com/Dicta-Israel-Center-for-Text-Analysis/alephbertgimmel
(this copy is intended for testing purposes; please use the authors' original checkpoints)
AlephBertGimmel is a modern Hebrew pretrained BERT model with a 128K-token vocabulary.
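A minimal usage sketch with the `transformers` library, shown here as a fill-mask example. The model id below assumes this repository's name on the Hub; adjust it if you load the authors' original checkpoints instead.

```python
# Minimal fill-mask sketch for a Hebrew BERT checkpoint.
# Assumption: the checkpoint is hosted as "imvladikon/alephbertgimmel-base-512";
# substitute the authors' original model id if preferred.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "imvladikon/alephbertgimmel-base-512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Predict the masked token in a short Hebrew sentence.
results = fill("הוא הלך ל[MASK] אתמול בערב")
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

Each entry in `results` carries the predicted token (`token_str`), its probability (`score`), and the completed sentence (`sequence`).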
When using AlephBertGimmel, please reference:
```
Eylon Guetta, Avi Shmidman, Shaltiel Shmidman, Cheyn Shmuel Shmidman, Joshua Guedalia, Moshe Koppel, Dan Bareket, Amit Seker and Reut Tsarfaty, "Large Pre-Trained Models with Extra-Large Vocabularies: A Contrastive Analysis of Hebrew BERT Models and a New One to Outperform Them All", Nov 2022 [http://arxiv.org/abs/2211.15199]
```