arXiv:2003.11080

XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization

Published on Mar 24, 2020
Authors: Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson

Abstract

Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
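
For readers who want to explore the benchmark, the sketch below loads one XTREME task with the Hugging Face `datasets` library and counts test examples per language. This is a minimal sketch, not the authors' code: the dataset name `xtreme`, the `XNLI` configuration, and the `language` field are assumptions about how the benchmark is hosted on the Hub, not details stated in the abstract.

```python
# A minimal sketch of inspecting one XTREME task for zero-shot
# cross-lingual evaluation. Assumes the benchmark is hosted on the
# Hugging Face Hub as the "xtreme" dataset with an "XNLI" configuration
# whose examples carry a "language" field; these names are assumptions.
from collections import defaultdict

from datasets import load_dataset

# In the zero-shot transfer setting, a model is fine-tuned on English
# data only and then evaluated on every target language.
xnli = load_dataset("xtreme", "XNLI", split="test")

# Counting examples per language shows the per-language breakdown behind
# the "wide spread of results across languages" noted in the abstract.
counts = defaultdict(int)
for example in xnli:
    counts[example["language"]] += 1

for lang, n in sorted(counts.items()):
    print(f"{lang}: {n} test examples")
```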
