arXiv:1906.01569

Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation

Published on Jun 4, 2019
Authors: Benjamin Heinzerling, Michael Strube

Abstract

Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, enabling massively multilingual NLP. However, while pretrained embeddings are abundant, the lack of systematic evaluations makes it difficult for practitioners to choose among them. In this work, we conduct an extensive evaluation comparing non-contextual subword embeddings, namely FastText and BPEmb, and a contextual representation method, namely BERT, on multilingual named entity recognition and part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and character representations works best across languages and tasks. A more detailed analysis reveals different strengths and weaknesses: Multilingual BERT performs well in medium- to high-resource languages but is outperformed by non-contextual subword embeddings in a low-resource setting.
