Datasets: qe4pe
Tasks: Translation
Formats: csv
Size: 10K - 100K
ArXiv: 2503.03044
Tags: machine-translation, quality-estimation, post-editing, translation, behavioral-data, multidimensional-quality-metric
License: apache-2.0
Link paper to HF papers URL
#3 opened by nielsr (HF staff)

README.md CHANGED
@@ -1,10 +1,22 @@
 ---
+annotations_creators:
+- machine-generated
+language_creators:
+- machine-generated
+- expert-generated
 language:
 - en
 - it
 - nl
 license:
 - apache-2.0
+size_categories:
+- 10K<n<100K
+source_datasets:
+- Unbabel/TowerEval-Data-v0.1
+task_categories:
+- translation
+pretty_name: qe4pe
 tags:
 - machine-translation
 - quality-estimation
@@ -15,18 +27,6 @@ tags:
 - mqm
 - comet
 - qe
-language_creators:
-- machine-generated
-- expert-generated
-annotations_creators:
-- machine-generated
-pretty_name: qe4pe
-size_categories:
-- 10K<n<100K
-source_datasets:
-- Unbabel/TowerEval-Data-v0.1
-task_categories:
-- translation
 configs:
 - config_name: main
   data_files:
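
Note: the `configs` block above is left unchanged by this PR, so the processed splits keep loading the same way. A minimal sketch with 🤗 Datasets follows; the Hub ID `gsarti/qe4pe` and the split layout are assumptions, not part of this diff.

```python
from datasets import load_dataset

# Load the processed dataframes for the "main" task declared under `configs`.
# The dataset ID "gsarti/qe4pe" is an assumption; adjust it to the actual repo.
qe4pe = load_dataset("gsarti/qe4pe", "main")
print(qe4pe)  # prints the available splits and their column names
```
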
@@ -56,11 +56,11 @@ configs:
 
 # Quality Estimation for Post-Editing (QE4PE)
 
-*For more details on QE4PE, see our [paper](https://arxiv.org/abs/2503.03044) and our [Github repository](https://github.com/gsarti/qe4pe)*
+*For more details on QE4PE, see our [paper](https://huggingface.co/papers/2503.03044) and our [Github repository](https://github.com/gsarti/qe4pe)*
 
 ## Dataset Description
 - **Source:** [Github](https://github.com/gsarti/qe4pe)
-- **Paper:** [Arxiv](https://arxiv.org/abs/2503.03044)
+- **Paper:** [Arxiv](https://huggingface.co/papers/2503.03044)
 - **Point of Contact:** [Gabriele Sarti](mailto:[email protected])
 
 [Gabriele Sarti](https://gsarti.com) • [Vilém Zouhar](https://vilda.net/) • [Grzegorz Chrupała](https://grzegorz.chrupala.me/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Malvina Nissim](https://malvinanissim.github.io/) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/)
@@ -80,7 +80,7 @@ We publicly release the granular editing logs alongside the processed dataset to
 
 ### News 📢
 
-**March 2025**: The QE4PE paper is available on [Arxiv](https://arxiv.org/abs/2503.03044).
+**March 2025**: The QE4PE paper is available on [Arxiv](https://huggingface.co/papers/2503.03044).
 
 **January 2025**: MQM annotations are now available for the `main` task.
 
@@ -229,8 +229,12 @@ A single entry in the dataframe represents a segment (~sentence) in the dataset,
 |`mt_text` | Output of the `NLLB-3.3B` model when translating `src_text` into `tgt_lang` (default config, 5 beams) |
 |`mt_text_highlighted` | Highlighted version of `mt_text` with potential errors according to the `highlight_modality`. |
 |`pe_text` | Post-edited version of `mt_text` produced by a professional translator with `highlight_modality`. |
-|`mt_pe_word_aligned` | Aligned visual representation of word-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace … |
-|`mt_pe_char_aligned` | … |
+|`mt_pe_word_aligned` | Aligned visual representation of word-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\n` with a newline to show the three aligned rows). |
+|`mt_pe_char_aligned` | Aligned visual representation of character-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\n` with a newline to show the three aligned rows). |
 |`highlights` | List of dictionaries for highlighted spans with error severity and position, matching XCOMET format for word-level error annotations. |
 |**MQM annotations (`main` config only)**| |
 |`qa_mt_annotator_id` | Annotator ID for the MQM evaluation of `qa_mt_annotated_text`. |
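
For reference, a short sketch of how the two aligned views and the `highlights` field described above could be inspected once a record is loaded. `row` is a hypothetical single entry, and the exact escaping of the stored `\n` is an assumption that may differ from the released files.

```python
import ast

def show_edits(row: dict) -> None:
    """Print the aligned MT/PE views and the highlighted error spans of one record."""
    # The aligned views store the three rows joined with literal "\n" escapes;
    # replacing them with real newlines shows MT, PE and the I/D/S markers aligned.
    print(row["mt_pe_word_aligned"].replace("\\n", "\n"))
    print(row["mt_pe_char_aligned"].replace("\\n", "\n"))

    # `highlights` is stored as a string containing a list of span dictionaries
    # (XCOMET-style), so it is parsed before use.
    for span in ast.literal_eval(row["highlights"]):
        print(span["severity"], repr(span["text"]), span["start"], span["end"])
```
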
@@ -323,128 +327,4 @@ The following is an example of the subject `oracle_t1` post-editing for segment
     "pe_chrf_mean": 79.173,
     "pe_chrf_std": 13.679,
     "pe_ter_max": 100.0,
-    "pe_ter_min": 0.0
-    "pe_ter_mean": 28.814,
-    "pe_ter_std": 28.827,
-    "pe_comet_max": 0.977,
-    "pe_comet_min": 0.851,
-    "pe_comet_mean": 0.937,
-    "pe_comet_std": 0.035,
-    "pe_xcomet_qe": 0.984,
-    "pe_xcomet_errors": "[]",
-    # Behavioral data
-    "doc_num_edits": 103,
-    "doc_edit_order": 20,
-    "doc_edit_time": 118,
-    "doc_edit_time_filtered": 118,
-    "doc_keys_per_min": 52.37,
-    "doc_chars_per_min": 584.24,
-    "doc_words_per_min": 79.83,
-    "segment_num_edits": 9,
-    "segment_edit_order": 3,
-    "segment_edit_time": 9,
-    "segment_edit_time_filtered": 9,
-    "segment_keys_per_min": 60.0,
-    "segment_chars_per_min": 906.67,
-    "segment_words_per_min": 106.67,
-    "num_enter_actions": 2,
-    "remove_highlights": False,
-    # Texts and annotations
-    "src_text": "The speed of its emerging growth frequently outpaces the development of quality assurance and education.",
-    "mt_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
-    "mt_text_highlighted": "De snelheid van de opkomende groei is vaak <minor>sneller</minor> dan de ontwikkeling van kwaliteitsborging en <major>onderwijs.</major>",
-    "pe_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
-    "mt_pe_word_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
-                          "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
-                          "                                                                                                     S",
-    "mt_pe_char_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
-                          "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
-                          "                                                                                                      SS SS SS ",
-    "highlights": """[
-        {
-            'text': 'sneller',
-            'severity': 'minor',
-            'start': 43,
-            'end': 50
-        },
-        {
-            'text': 'onderwijs.',
-            'severity': 'major',
-            'start': 96,
-            'end': 106
-        }
-    ]"""
-    # QA annotations
-    "qa_mt_annotator_id": 'qa_nld_3',
-    "qa_pe_annotator_id": 'qa_nld_1',
-    "qa_mt_esa_rating": 100.0,
-    "qa_pe_esa_rating": 80.0,
-    "qa_mt_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
-    "qa_pe_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
-    "qa_mt_fixed_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
-    "qa_pe_fixed_text": "De snelheid van de ontluikende groei overtreft vaak de ontwikkeling van kwaliteitsborging en onderwijs.",
-    "qa_mt_mqm_errors": "[]",
-    "qa_pe_mqm_errors": """[
-        {
-            "text": "opkomende",
-            "text_start": 19,
-            "text_end": 28,
-            "correction":
-            "ontluikende",
-            "correction_start": 19,
-            "correction_end": 30,
-            "description": "Mistranslation - not the correct word",
-            "mqm_category": "Mistranslation",
-            "severity": "Minor",
-            "comment": "",
-            "edit_order": 1
-        }
-    ]"""
-
-}
-```
-
-The text is provided as-is, without further preprocessing or tokenization.
-
-### Dataset Creation
-
-The datasets were parsed from GroTE inputs, logs and outputs for the QE4PE study, available in this repository. Processed dataframes were created using the `qe4pe process_task_data` command. Refer to the [QE4PE Github repository](https://github.com/gsarti/qe4pe) for additional details. The overall structure and processing of the dataset were inspired by the [DivEMT dataset](https://huggingface.co/datasets/GroNLP/divemt).
-
-### QA Annotations
-
-MQM annotations were collected using Google Sheets, and highlights were parsed from HTML exported output, ensuring their compliance with well-formedness checks. Out of the original 51 docs (324 segments) in `main`, 24 docs (10 biomedical, 14 social, totaling 148 segments) were sampled at random and annotated by professional translators.
-
-## Additional Information
-
-### Metric signatures
-
-The following signatures correspond to the metrics reported in the processed dataframes:
-
-```shell
-# Computed using SacreBLEU: https://github.com/mjpost/sacrebleu
-BLEU: case:mixed|eff:yes|tok:13a|smooth:exp|version:2.3.1
-ChrF: case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.3.1
-TER: case:lc|tok:tercom|norm:no|punct:yes|asian:no|version:2.3.1
-
-# Computed using Unbabel COMET: https://github.com/Unbabel/COMET
-Comet: Python3.11.9|Comet2.2.2|fp32|Unbabel/wmt22-comet-da
-XComet: Python3.10.12|Comet2.2.1|fp32|Unbabel/XCOMET-XXL
-```
-
-### Dataset Curators
-
-For problems related to this 🤗 Datasets version, please contact me at [[email protected]](mailto:[email protected]).
-
-### Citation Information
-
-```bibtex
-@misc{sarti-etal-2024-qe4pe,
-    title={{QE4PE}: Word-level Quality Estimation for Human Post-Editing},
-    author={Gabriele Sarti and Vilém Zouhar and Grzegorz Chrupała and Ana Guerberof-Arenas and Malvina Nissim and Arianna Bisazza},
-    year={2025},
-    eprint={2503.03044},
-    archivePrefix={arXiv},
-    primaryClass={cs.CL},
-    url={https://arxiv.org/abs/2503.03044},
-}
-```
+    "pe_ter_min": 0.0