
Gabriel Okasa

okasag

AI & ML interests

text similarity & text classification

Recent Activity

reacted to tomaarsen's post with ❤️ 1 day ago
An assembly of 18 European companies, labs, and universities has banded together to launch 🇪🇺 EuroBERT! It's a state-of-the-art multilingual encoder for 15 European languages, designed to be finetuned for retrieval, classification, etc.

🇪🇺 15 languages: English, French, German, Spanish, Chinese, Italian, Russian, Polish, Portuguese, Japanese, Vietnamese, Dutch, Arabic, Turkish, Hindi
3️⃣ 3 model sizes: 210M, 610M, and 2.1B parameters - very useful sizes in my opinion
➡️ Sequence length of 8192 tokens! Nice to see these higher sequence lengths for encoders becoming more common.
⚙️ Architecture based on Llama, but with bi-directional (non-causal) attention to turn it into an encoder. Flash Attention 2 is supported.
🔥 A new Pareto frontier (stronger *and* smaller) for multilingual encoder models
📊 Evaluated against mDeBERTa, mGTE, and XLM-RoBERTa on Retrieval, Classification, and Regression (after finetuning for each task separately): EuroBERT punches way above its weight.
📝 Detailed paper with all details, incl. data: FineWeb for English and CulturaX for multilingual data, The Stack v2 and Proof-Pile-2 for code.

Check out the release blogpost here: https://huggingface.co/blog/EuroBERT/release
* https://huggingface.co/EuroBERT/EuroBERT-210m
* https://huggingface.co/EuroBERT/EuroBERT-610m
* https://huggingface.co/EuroBERT/EuroBERT-2.1B

The next step is for researchers to build upon the 3 EuroBERT base models and publish strong retrieval, zero-shot classification, etc. models for all to use. I'm very much looking forward to it!
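The post notes that EuroBERT keeps a Llama-style architecture but replaces causal attention with bi-directional attention to make it an encoder. A minimal, illustrative sketch of what that change means at the attention-mask level (plain Python, not the actual implementation; position `i` may attend to position `j` only where `mask[i][j]` is `True`):

```python
def causal_mask(n: int) -> list[list[bool]]:
    # Decoder-style (Llama): each token attends only to itself
    # and to earlier positions.
    return [[j <= i for j in range(n)] for i in range(n)]

def bidirectional_mask(n: int) -> list[list[bool]]:
    # Encoder-style (EuroBERT): every token attends to every position,
    # so each representation can use both left and right context.
    return [[True] * n for _ in range(n)]

if __name__ == "__main__":
    n = 4
    # First token under a causal mask sees only itself...
    print(causal_mask(n)[0])         # [True, False, False, False]
    # ...but under a bi-directional mask it sees all positions.
    print(bidirectional_mask(n)[0])  # [True, True, True, True]
```

This full-context visibility is what makes encoder outputs well suited to the finetuning tasks mentioned above (retrieval, classification, regression).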
liked a model 1 day ago
EuroBERT/EuroBERT-2.1B

Organizations

SNSF Data Team

models

None public yet

datasets

None public yet