|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
- de |
|
- ru |
|
- zh |
|
tags: |
|
- mt-evaluation |
|
- WMT |
|
- MQM |
|
size_categories: |
|
- 100K<n<1M |
|
--- |
|
|
|
# Dataset Summary |
|
|
|
This dataset contains all MQM human annotations from previous [WMT Metrics shared tasks](https://wmt-metrics-task.github.io/) and the MQM annotations from [Experts, Errors, and Context](https://aclanthology.org/2021.tacl-1.87/) in the form of error spans. Moreover, it contains some of the hallucinated translations used in the training of [XCOMET models](https://huggingface.co/Unbabel/XCOMET-XXL).
|
|
|
**Please note that this is not an official release of the data** and the original data can be found [here](https://github.com/google/wmt-mqm-human-evaluation). |
|
|
|
The data is organised into the following columns:
|
|
|
- src: input text |
|
- mt: translation |
|
- ref: reference translation |
|
- annotations: list of error spans (dictionaries with 'start', 'end', 'severity', 'text'; see the illustrative example below)
|
- lp: language pair |
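
Each element of `annotations` is a dictionary describing one error span. As a purely illustrative example (these values are made up, not taken from the data), a single span might look like:

```python
{"start": 12, "end": 25, "severity": "major", "text": "translation error"}
```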
|
|
|
|
|
While `en-ru` was annotated by Unbabel, `en-de` and `zh-en` were annotated by Google. This means that for `en-de` and `zh-en` you will only find minor and major errors, while for `en-ru` you can also find a few critical errors.
|
|
|
## Python Usage
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("RicardoRei/wmt-mqm-error-spans", split="train") |
|
``` |
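
Once loaded, you can inspect individual rows. Below is a minimal sketch; it assumes that the `start`/`end` offsets index into the `mt` string and that `annotations` is returned as a list of dictionaries (depending on the feature schema, `datasets` can also return it as a dictionary of lists):

```python
example = dataset[0]
print(example["src"])
print(example["mt"])
for span in example["annotations"]:
    # each span carries 'start', 'end', 'severity' and 'text'
    print(span["severity"], "->", example["mt"][span["start"]:span["end"]])
```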
|
|
|
There is no standard train/test split for this dataset, but you can easily split it by year, language pair, or domain. For example:
|
|
|
```python |
|
# split by LP |
|
data = dataset.filter(lambda example: example["lp"] == "en-de") |
|
``` |
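
The same pattern works for the other fields mentioned above. The sketch below assumes the year and domain are stored in columns named `year` and `domain` (with `year` as an integer); check `dataset.features` to confirm. It also tallies error severities for `en-ru`, where the critical errors mentioned earlier should show up:

```python
from collections import Counter

# split by year (column name and integer type are assumptions; verify with dataset.features)
data_2022 = dataset.filter(lambda example: example["year"] == 2022)

# tally severities for en-ru, which should include a few "critical" labels
en_ru = dataset.filter(lambda example: example["lp"] == "en-ru")
severities = Counter(
    span["severity"] for example in en_ru for span in example["annotations"]
)
print(severities)
```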
|
|
|
## Citation Information |
|
|
|
If you use this data, please cite the following works:
|
- [Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation](https://aclanthology.org/2021.tacl-1.87/) |
|
- [Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain](https://aclanthology.org/2021.wmt-1.73/) |
|
- [Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust](https://aclanthology.org/2022.wmt-1.2/) |
|
- [xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection](https://arxiv.org/abs/2310.10482)
|
|