Transformer language model for Croatian and Serbian

Trained for two epochs (500k steps) on 6 GB of Croatian and Serbian text drawn from the Leipzig Corpora, OSCAR, and srWaC datasets.

| Model | #params | Arch. | Training data |
| --- | --- | --- | --- |
| Andrija/SRoBERTa-L | 80M | Third | Leipzig Corpora, OSCAR and srWaC (6 GB of text) |
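A minimal usage sketch with the `transformers` library, assuming the checkpoint is a RoBERTa-style masked language model (inferred from the SRoBERTa name; the example sentence is illustrative):

```python
from transformers import pipeline

# Load the checkpoint as a fill-mask pipeline; RoBERTa-style models
# use "<mask>" as the mask token.
fill = pipeline("fill-mask", model="Andrija/SRoBERTa-L")

# Serbian example: "Belgrade is the capital of <mask>."
for candidate in fill("Beograd je glavni grad <mask>."):
    print(candidate["token_str"], round(candidate["score"], 4))
```

Each candidate is a dict with the predicted token and its probability; the pipeline returns the top five completions by default.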

Datasets used to train Andrija/SRoBERTa-L: Leipzig Corpora, OSCAR, srWaC