---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
- expert-generated
language:
- en
- it
- nl
license:
- apache-2.0
size_categories:
- 10K<n<100K
source_datasets:
- Unbabel/TowerEval-Data-v0.1
task_categories:
- translation
pretty_name: qe4pe
tags:
- machine-translation
- quality-estimation
- post-editing
- translation
- behavioral-data
- multidimensional-quality-metric
- mqm
- comet
- qe
configs:
- config_name: main
  data_files:
  - split: train
    path: task/main/processed_main.csv
- config_name: pretask
  data_files:
  - split: train
    path: task/pretask/processed_pretask.csv
- config_name: posttask
  data_files:
  - split: train
    path: task/posttask/processed_posttask.csv
- config_name: pretask_questionnaire
  data_files:
  - split: train
    path: questionnaires/pretask_results.csv
- config_name: posttask_highlight_questionnaire
  data_files:
  - split: train
    path: questionnaires/posttask_highlight_results.csv
- config_name: posttask_no_highlight_questionnaire
  data_files:
  - split: train
    path: questionnaires/posttask_no_highlight_results.csv
---
# Quality Estimation for Post-Editing (QE4PE)

*For more details on QE4PE, see our [paper](https://huggingface.co/papers/2503.03044) and our [GitHub repository](https://github.com/gsarti/qe4pe).*

## Dataset Description

- **Source:** [GitHub](https://github.com/gsarti/qe4pe)
- **Paper:** [arXiv](https://huggingface.co/papers/2503.03044)
- **Point of Contact:** [Gabriele Sarti](mailto:[email protected])

[Gabriele Sarti](https://gsarti.com) • [Vilém Zouhar](https://vilda.net/) • [Grzegorz Chrupała](https://grzegorz.chrupala.me/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Malvina Nissim](https://malvinanissim.github.io/) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/)

<p float="left">
  <img src="https://github.com/gsarti/qe4pe/blob/main/figures/highlevel_qe4pe.png?raw=true" alt="QE4PE annotation pipeline" width=400/>
</p>

> Word-level quality estimation (QE) detects erroneous spans in machine translations, which can direct and facilitate human post-editing. While the accuracy of word-level QE systems has been assessed extensively, their usability and downstream influence on the speed, quality and editing choices of human post-editing remain understudied. Our QE4PE study investigates the impact of word-level QE on machine translation (MT) post-editing in a realistic setting involving 42 professional post-editors across two translation directions. We compare four error-span highlight modalities, including supervised and uncertainty-based word-level QE methods, for identifying potential errors in the outputs of a state-of-the-art neural MT model. Post-editing effort and productivity are estimated by behavioral logs, while quality improvements are assessed by word- and segment-level human annotation. We find that domain, language and editors' speed are critical factors in determining highlights' effectiveness, with modest differences between human-made and automated QE highlights underlining a gap between accuracy and usability in professional workflows.
### Dataset Summary

This dataset provides convenient access to the processed `pretask`, `main` and `posttask` splits and the questionnaires for the QE4PE study. A sample of challenging documents extracted from the WMT23 evaluation data was machine-translated from English to Italian and Dutch using [NLLB 3.3B](https://huggingface.co/facebook/nllb-200-3.3B), and post-edited by 12 translators per direction across 4 highlighting modalities employing various word-level quality estimation (QE) strategies to present translators with potential errors during editing. Additional details are provided in the [main task readme](./task/main/README.md) and in our paper. During post-editing, behavioral data (keystrokes, pauses and editing times) were collected using the [GroTE](https://github.com/gsarti/grote) online platform. For the main task, a subset of the data was annotated with Multidimensional Quality Metrics (MQM) by professional annotators.

We publicly release the granular editing logs alongside the processed dataset to foster new research on the usability of word-level QE strategies in modern post-editing workflows.
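
All configurations listed above can be loaded with 🤗 `datasets`. A minimal sketch, assuming the dataset is hosted as `gsarti/qe4pe` on the Hub:

```python
from datasets import load_dataset

# Processed task data: one row per machine-translated, post-edited segment
main = load_dataset("gsarti/qe4pe", "main", split="train")

# Questionnaire results use dedicated configurations
pretask_questionnaire = load_dataset("gsarti/qe4pe", "pretask_questionnaire", split="train")

# Example: keep only segments edited under the oracle highlighting modality
oracle_main = main.filter(lambda ex: ex["highlight_modality"] == "oracle")
```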
### News 📢

**March 2025**: The QE4PE paper is available on [arXiv](https://huggingface.co/papers/2503.03044).

**January 2025**: MQM annotations are now available for the `main` task.

**October 2024**: The QE4PE dataset is released on the HuggingFace Hub! 🎉
### Repository Structure

The repository is organized as follows:

```shell
qe4pe/
├── questionnaires/                       # Configs and results for pre- and post-task questionnaires for translators
│   ├── pretask_results.csv               # Results of the pretask questionnaire, corresponding to the `pretask_questionnaire` configuration
│   ├── posttask_highlight_results.csv    # Results of the posttask questionnaire for highlighted modalities, corresponding to the `posttask_highlight_questionnaire` configuration
│   ├── posttask_no_highlight_results.csv # Results of the posttask questionnaire for the `no_highlight` modality, corresponding to the `posttask_no_highlight_questionnaire` configuration
│   └── ...                               # Configurations reporting the exact questionnaire questions and options
├── setup/
│   ├── highlights/                       # Outputs of word-level QE strategies used to set up highlighted spans in the tasks
│   ├── qa/                               # MQM/ESA annotations for the main task
│   ├── processed/                        # Intermediate outputs of the selection process for the main task
│   └── wmt23/                            # Original collection of WMT23 sources and machine-translated outputs
└── task/
    ├── example/                          # Example folder with task structure
    ├── main/                             # Main task data, logs, outputs and guidelines
    │   ├── ...
    │   ├── processed_main.csv            # Processed main task data, corresponding to the `main` configuration
    │   └── README.md                     # Details about the main task
    ├── posttask/                         # Posttask data, logs, outputs and guidelines
    │   ├── ...
    │   ├── processed_posttask.csv        # Processed posttask data, corresponding to the `posttask` configuration
    │   └── README.md                     # Details about the posttask
    └── pretask/                          # Pretask data, logs, outputs and guidelines
        ├── ...
        ├── processed_pretask.csv         # Processed pretask data, corresponding to the `pretask` configuration
        └── README.md                     # Details about the pretask
```
### Languages

The language data of QE4PE is in English (BCP-47 `en`), Italian (BCP-47 `it`) and Dutch (BCP-47 `nl`).
## Dataset Structure

### Data Instances

The dataset contains three configurations corresponding to the three tasks: `pretask`, `main` and `posttask`. `main` contains the full data collected during the main task and analyzed in our experiments. `pretask` contains the data collected in the initial verification phase before the main task, in which all translators worked on texts highlighted in the `supervised` modality. `posttask` contains the data collected in the final phase, in which all translators worked on texts in the `no_highlight` modality.
### Data Fields

A single entry in the dataframe represents a segment (~sentence) in the dataset that was machine-translated and post-edited by a professional translator. The following fields are contained in the training set; a sketch showing how the `jiwer`-based edit statistics can be recomputed is provided after the table.
|Field |Description |
|------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| **Identification** | |
|`unit_id` | The full entry identifier. Format: `qe4pe-{task_id}-{src_lang}-{tgt_lang}-{doc_id}-{segment_in_doc_id}-{translator_main_task_id}`. |
|`wmt_id` | Identifier of the sentence in the original [WMT23](./data/setup/wmt23/wmttest2023.eng.jsonl) dataset. |
|`wmt_category` | Category of the document: `biomedical` or `social`. |
|`doc_id` | The index of the document containing the current segment in the current configuration of the QE4PE dataset. |
|`segment_in_doc_id` | The index of the segment inside the current document. |
|`segment_id` | The index of the segment in the current configuration (i.e. concatenating all segments from all documents in order). |
|`translator_pretask_id` | The identifier for the translator according to the `pretask` format before modality assignments: `tXX`. |
|`translator_main_id` | The identifier for the translator according to the `main` task format after modality assignments: `{highlight_modality}_tXX`. |
|`src_lang` | The source language of the segment. For QE4PE, this is always English (`eng`). |
|`tgt_lang` | The target language of the segment: either Italian (`ita`) or Dutch (`nld`). |
|`highlight_modality` | The highlighting modality used for the segment. Values: `no_highlight`, `oracle`, `supervised`, `unsupervised`. |
| **Text statistics** | |
|`src_num_chars` | Length of the source segment in number of characters. |
|`mt_num_chars` | Length of the machine-translated segment in number of characters. |
|`pe_num_chars` | Length of the post-edited segment in number of characters. |
|`src_num_words` | Length of the source segment in number of words. |
|`mt_num_words` | Length of the machine-translated segment in number of words. |
|`pe_num_words` | Length of the post-edited segment in number of words. |
|`num_minor_highlighted_chars` | Number of characters highlighted as minor errors in the machine-translated text. |
|`num_major_highlighted_chars` | Number of characters highlighted as major errors in the machine-translated text. |
|`num_minor_highlighted_words` | Number of words highlighted as minor errors in the machine-translated text. |
|`num_major_highlighted_words` | Number of words highlighted as major errors in the machine-translated text. |
| **Edits statistics** | |
|`num_words_insert` | Number of word-level post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_delete` | Number of word-level post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_substitute` | Number of word-level post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_unchanged` | Number of word-level post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
|`tot_words_edits` | Total of all word-level edit types for the sentence. |
|`wer` | Word Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_insert` | Number of character-level post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_delete` | Number of character-level post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_substitute` | Number of character-level post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_unchanged` | Number of character-level post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
|`tot_chars_edits` | Total of all character-level edit types for the sentence. |
|`cer` | Character Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
| **Translation quality**| |
|`mt_bleu_max` | Max BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_min` | Min BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_mean` | Mean BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_std` | Standard deviation of BLEU scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_max` | Max chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_min` | Min chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_mean` | Mean chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_std` | Standard deviation of chrF scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_max` | Max TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_min` | Min TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_mean` | Mean TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_std` | Standard deviation of TER scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_comet_max` | Max COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_comet_min` | Min COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_comet_mean` | Mean COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_comet_std` | Standard deviation of COMET sentence-level scores for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the `mt_text`. |
|`mt_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the `mt_text`. |
|`pe_bleu_max` | Max BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_min` | Min BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_mean` | Mean BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_std` | Standard deviation of BLEU scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_max` | Max chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_min` | Min chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_mean` | Mean chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_std` | Standard deviation of chrF scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_max` | Max TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_min` | Min TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_mean` | Mean TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_std` | Standard deviation of TER scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_comet_max` | Max COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_comet_min` | Min COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_comet_mean` | Mean COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_comet_std` | Standard deviation of COMET sentence-level scores for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the `pe_text`. |
|`pe_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the `pe_text`. |
| **Behavioral data** | |
|`doc_num_edits` | Total number of edits performed by the translator on the current document. Only the last edit outputs are considered valid. |
|`doc_edit_order` | Index corresponding to the current document edit order. If equal to `doc_id`, the document was edited in the given order. |
|`doc_edit_time` | Total editing time for the current document in seconds (from `start` to `end`, no times ignored). |
|`doc_edit_time_filtered`| Total editing time for the current document in seconds (from `start` to `end`, with >5m pauses between logged actions ignored). |
|`doc_keys_per_min` | Keystrokes per minute computed for the current document using `doc_edit_time_filtered`. |
|`doc_chars_per_min` | Characters per minute computed for the current document using `doc_edit_time_filtered`. |
|`doc_words_per_min` | Words per minute computed for the current document using `doc_edit_time_filtered`. |
|`segment_num_edits` | Total number of edits performed by the translator on the current segment. Only edits from the last edit of the document are considered valid. |
|`segment_edit_order` | Index corresponding to the current segment edit order (only the first `enter` action counts). If equal to `segment_in_doc_id`, the segment was edited in the given order. |
|`segment_edit_time` | Total editing time for the current segment in seconds (summed time between `enter`-`exit` blocks). |
|`segment_edit_time_filtered` | Total editing time for the current segment in seconds (with >5m pauses between logged actions ignored). |
|`segment_keys_per_min` | Keystrokes per minute computed for the current segment using `segment_edit_time_filtered`. |
|`segment_chars_per_min` | Characters per minute computed for the current segment using `segment_edit_time_filtered`. |
|`segment_words_per_min` | Words per minute computed for the current segment using `segment_edit_time_filtered`. |
|`num_enter_actions` | Number of `enter` actions (focus on textbox) performed by the translator on the current segment during post-editing. |
|`remove_highlights` | If `True`, the Clear Highlights button was pressed for this segment (always `False` for the `no_highlight` modality). |
|**Texts and annotations**| |
|`src_text` | The original source segment from WMT23 requiring translation. |
|`mt_text` | Output of the `NLLB-3.3B` model when translating `src_text` into `tgt_lang` (default config, 5 beams). |
|`mt_text_highlighted` | Highlighted version of `mt_text` with potential errors according to the `highlight_modality`. |
|`pe_text` | Post-edited version of `mt_text` produced by a professional translator with `highlight_modality`. |
|`mt_pe_word_aligned` | Aligned visual representation of word-level edit operations (I = Insertion, D = Deletion, S = Substitution). Replace `\` with `\n` to show the three aligned rows. |
|`mt_pe_char_aligned` | Aligned visual representation of character-level edit operations (I = Insertion, D = Deletion, S = Substitution). Replace `\` with `\n` to show the three aligned rows. |
|`highlights` | List of dictionaries for highlighted spans with error severity and position, matching the XCOMET format for word-level error annotations. |
|**MQM annotations (`main` config only)**| |
|`qa_mt_annotator_id` | Annotator ID for the MQM evaluation of `qa_mt_annotated_text`. |
|`qa_pe_annotator_id` | Annotator ID for the MQM evaluation of `qa_pe_annotated_text`. |
|`qa_mt_esa_rating` | 0-100 quality rating for the `qa_mt_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
|`qa_pe_esa_rating` | 0-100 quality rating for the `qa_pe_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
|`qa_mt_annotated_text` | Version of `mt_text` annotated with MQM errors. Might differ (only slightly) from `mt_text`; included since `qa_mt_mqm_errors` indices are computed on this string. |
|`qa_pe_annotated_text` | Version of `pe_text` annotated with MQM errors. Might differ (only slightly) from `pe_text`; included since `qa_pe_mqm_errors` indices are computed on this string. |
|`qa_mt_fixed_text` | Proposed correction of `qa_mt_annotated_text` following MQM annotation. |
|`qa_pe_fixed_text` | Proposed correction of `qa_pe_annotated_text` following MQM annotation. |
|`qa_mt_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_mt_annotated_text`. Each error span dictionary contains the following fields. `text`: the span in `qa_mt_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_mt_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `text_end`: the end index of the error span in `qa_mt_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `correction`: the proposed correction in `qa_mt_fixed_text` for the error span in `qa_mt_annotated_text`. `correction_start`: the start index of the corrected span in `qa_mt_fixed_text` (-1 if no corrected span is present, e.g. for additions). `correction_end`: the end index of the corrected span in `qa_mt_fixed_text` (-1 if no corrected span is present, e.g. for additions). `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
|`qa_pe_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_pe_annotated_text`. Each error span dictionary contains the following fields. `text`: the span in `qa_pe_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_pe_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `text_end`: the end index of the error span in `qa_pe_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `correction`: the proposed correction in `qa_pe_fixed_text` for the error span in `qa_pe_annotated_text`. `correction_start`: the start index of the corrected span in `qa_pe_fixed_text` (-1 if no corrected span is present, e.g. for additions). `correction_end`: the end index of the corrected span in `qa_pe_fixed_text` (-1 if no corrected span is present, e.g. for additions). `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
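
The edit statistics above can be recomputed directly from `mt_text` and `pe_text`. A minimal sketch using [jiwer](https://github.com/jitsi/jiwer), assuming jiwer ≥ 3.0 and `mt_text` as the reference string (this direction matches the example entry shown in the Train Split section below):

```python
import jiwer

mt_text = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs."
pe_text = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding."

# Word-level edit operations: num_words_{insert,delete,substitute,unchanged} and wer
words = jiwer.process_words(mt_text, pe_text)
print(words.insertions, words.deletions, words.substitutions, words.hits, round(words.wer, 4))
# 0 0 1 15 0.0625

# Character-level edit operations: num_chars_{insert,delete,substitute,unchanged} and cer
chars = jiwer.process_characters(mt_text, pe_text)
print(chars.insertions, chars.deletions, chars.substitutions, chars.hits, round(chars.cer, 4))
# 0 0 6 100 0.0566
```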
### Data Splits

|`config` | `split`| size |
|------------------------------------:|-------:|--------------------------------------------------------------:|
|`main` | `train`| 8100 (51 docs, i.e. 324 sents x 25 translators) |
|`pretask` | `train`| 950 (6 docs, i.e. 38 sents x 25 translators) |
|`posttask` | `train`| 1200 (8 docs, i.e. 50 sents x 24 translators) |
|`pretask_questionnaire` | `train`| 26 (all translators, including replaced/replacements) |
|`posttask_highlight_questionnaire` | `train`| 19 (all translators for highlight modalities + 1 replacement) |
|`posttask_no_highlight_questionnaire`| `train`| 6 (all translators for the `no_highlight` modality) |
#### Train Split

The `train` split contains all triplets (or pairs, when translation from scratch is performed) annotated with the behavioral data produced during translation.

The following is an example of translator `oracle_t1` post-editing segment `3` of `doc20` in the `eng-nld` direction of the `main` task. The fields `mt_pe_word_aligned` and `mt_pe_char_aligned` are shown over three lines to provide a visual understanding of their contents.
```python
{
    # Identification
    "unit_id": "qe4pe-main-eng-nld-20-3-oracle_t1",
    "wmt_id": "doc5",
    "wmt_category": "biomedical",
    "doc_id": 20,
    "segment_in_doc_id": 3,
    "segment_id": 129,
    "translator_pretask_id": "t4",
    "translator_main_id": "oracle_t1",
    "src_lang": "eng",
    "tgt_lang": "nld",
    "highlight_modality": "oracle",
    # Text statistics
    "src_num_chars": 104,
    "mt_num_chars": 136,
    "pe_num_chars": 106,
    "src_num_words": 15,
    "mt_num_words": 16,
    "pe_num_words": 16,
    # Edits statistics
    "num_words_insert": 0,
    "num_words_delete": 0,
    "num_words_substitute": 1,
    "num_words_unchanged": 15,
    "tot_words_edits": 1,
    "wer": 0.0625,
    "num_chars_insert": 0,
    "num_chars_delete": 0,
    "num_chars_substitute": 6,
    "num_chars_unchanged": 100,
    "tot_chars_edits": 6,
    "cer": 0.0566,
    # Translation quality
    "mt_bleu_max": 100.0,
    "mt_bleu_min": 7.159,
    "mt_bleu_mean": 68.687,
    "mt_bleu_std": 31.287,
    "mt_chrf_max": 100.0,
    "mt_chrf_min": 45.374,
    "mt_chrf_mean": 83.683,
    "mt_chrf_std": 16.754,
    "mt_ter_max": 100.0,
    "mt_ter_min": 0.0,
    "mt_ter_mean": 23.912,
    "mt_ter_std": 29.274,
    "mt_comet_max": 0.977,
    "mt_comet_min": 0.837,
    "mt_comet_mean": 0.94,
    "mt_comet_std": 0.042,
    "mt_xcomet_qe": 0.985,
    "mt_xcomet_errors": "[]",
    "pe_bleu_max": 100.0,
    "pe_bleu_min": 11.644,
    "pe_bleu_mean": 61.335,
    "pe_bleu_std": 28.617,
    "pe_chrf_max": 100.0,
    "pe_chrf_min": 53.0,
    "pe_chrf_mean": 79.173,
    "pe_chrf_std": 13.679,
    "pe_ter_max": 100.0,
    "pe_ter_min": 0.0,
    "pe_ter_mean": 28.814,
    "pe_ter_std": 28.827,
    "pe_comet_max": 0.977,
    "pe_comet_min": 0.851,
    "pe_comet_mean": 0.937,
    "pe_comet_std": 0.035,
    "pe_xcomet_qe": 0.984,
    "pe_xcomet_errors": "[]",
    # Behavioral data
    "doc_num_edits": 103,
    "doc_edit_order": 20,
    "doc_edit_time": 118,
    "doc_edit_time_filtered": 118,
    "doc_keys_per_min": 52.37,
    "doc_chars_per_min": 584.24,
    "doc_words_per_min": 79.83,
    "segment_num_edits": 9,
    "segment_edit_order": 3,
    "segment_edit_time": 9,
    "segment_edit_time_filtered": 9,
    "segment_keys_per_min": 60.0,
    "segment_chars_per_min": 906.67,
    "segment_words_per_min": 106.67,
    "num_enter_actions": 2,
    "remove_highlights": False,
    # Texts and annotations
    "src_text": "The speed of its emerging growth frequently outpaces the development of quality assurance and education.",
    "mt_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "mt_text_highlighted": "De snelheid van de opkomende groei is vaak <minor>sneller</minor> dan de ontwikkeling van kwaliteitsborging en <major>onderwijs.</major>",
    "pe_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
    "mt_pe_word_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
                          "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
                          "                                                                                                    S",
    "mt_pe_char_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
                          "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
                          "                                                                                                     SS SS SS ",
    "highlights": """[
        {
            'text': 'sneller',
            'severity': 'minor',
            'start': 43,
            'end': 50
        },
        {
            'text': 'onderwijs.',
            'severity': 'major',
            'start': 96,
            'end': 106
        }
    ]""",
    # QA annotations
    "qa_mt_annotator_id": 'qa_nld_3',
    "qa_pe_annotator_id": 'qa_nld_1',
    "qa_mt_esa_rating": 100.0,
    "qa_pe_esa_rating": 80.0,
    "qa_mt_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_pe_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
    "qa_mt_fixed_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_pe_fixed_text": "De snelheid van de ontluikende groei overtreft vaak de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_mt_mqm_errors": "[]",
    "qa_pe_mqm_errors": """[
        {
            "text": "opkomende",
            "text_start": 19,
            "text_end": 28,
            "correction": "ontluikende",
            "correction_start": 19,
            "correction_end": 30,
            "description": "Mistranslation - not the correct word",
            "mqm_category": "Mistranslation",
            "severity": "Minor",
            "comment": "",
            "edit_order": 1
        }
    ]"""
}
```
The text is provided as-is, without further preprocessing or tokenization.
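
Note that list-valued fields such as `highlights` and the `qa_*_mqm_errors` spans are stored as stringified lists in the CSV files. A minimal sketch for recovering them, assuming they are well-formed Python literals as in the example above:

```python
import ast

# Recover the highlight spans from their string representation
# (values taken from the example entry above)
highlights = ast.literal_eval(
    "[{'text': 'sneller', 'severity': 'minor', 'start': 43, 'end': 50}, "
    "{'text': 'onderwijs.', 'severity': 'major', 'start': 96, 'end': 106}]"
)
for span in highlights:
    print(f"{span['severity']}: {span['text']!r} at chars [{span['start']}:{span['end']}]")

# The *_aligned fields use "\" as a row separator; replacing it with a newline
# restores the three-row MT/PE/edit-operations view
aligned = "MT: ... en onderwijs.\\PE: ... en opleiding.\\                     S"
print(aligned.replace("\\", "\n"))
```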
### Dataset Creation

The datasets were parsed from the GroTE inputs, logs and outputs for the QE4PE study, which are available in this repository. Processed dataframes were produced using the `qe4pe process_task_data` command. Refer to the [QE4PE GitHub repository](https://github.com/gsarti/qe4pe) for additional details. The overall structure and processing of the dataset were inspired by the [DivEMT dataset](https://huggingface.co/datasets/GroNLP/divemt).
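
A sketch of regenerating the processed dataframes from the raw logs, assuming the `qe4pe` package from the GitHub repository is installed (consult its documentation for the exact arguments):

```shell
git clone https://github.com/gsarti/qe4pe
cd qe4pe
pip install -e .
qe4pe process_task_data --help  # lists the available processing options
```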
### QA Annotations

MQM annotations were collected using Google Sheets, and highlights were parsed from the exported HTML output, ensuring their compliance with well-formedness checks. Out of the original 51 docs (324 segments) in `main`, 24 docs (10 biomedical, 14 social, totaling 148 segments) were sampled at random and annotated by professional translators.
## Additional Information

### Metric signatures

The following signatures correspond to the metrics reported in the processed dataframes:

```shell
# Computed using SacreBLEU: https://github.com/mjpost/sacrebleu
BLEU: case:mixed|eff:yes|tok:13a|smooth:exp|version:2.3.1
ChrF: case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.3.1
TER: case:lc|tok:tercom|norm:no|punct:yes|asian:no|version:2.3.1
# Computed using Unbabel COMET: https://github.com/Unbabel/COMET
Comet: Python3.11.9|Comet2.2.2|fp32|Unbabel/wmt22-comet-da
XComet: Python3.10.12|Comet2.2.1|fp32|Unbabel/XCOMET-XXL
```
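
As a sketch of how the aggregate `mt_*` quality fields can be recomputed, the snippet below scores one `mt_text` against all post-edited versions of the same segment with SacreBLEU's sentence-level metrics. The aggregation over post-edits follows the field descriptions above; `pe_texts` is a hypothetical list standing in for all post-edits of the segment:

```python
from statistics import mean, stdev
from sacrebleu.metrics import BLEU, CHRF, TER

mt_text = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs."
pe_texts = [  # hypothetical: all post-edited versions of the same segment
    "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
    "De snelheid van de ontluikende groei overtreft vaak de ontwikkeling van kwaliteitsborging en onderwijs.",
]

bleu = BLEU(effective_order=True)  # eff:yes in the signature above
scores = [bleu.sentence_score(mt_text, [pe]).score for pe in pe_texts]
print(max(scores), min(scores), mean(scores), stdev(scores))  # mt_bleu_{max,min,mean,std}

# chrF and TER follow the same pattern
chrf, ter = CHRF(), TER()
print(chrf.sentence_score(mt_text, [pe_texts[0]]).score)
print(ter.sentence_score(mt_text, [pe_texts[0]]).score)
```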
### Dataset Curators

For problems related to this 🤗 Datasets version, please contact me at [[email protected]](mailto:[email protected]).
### Citation Information

```bibtex
@misc{sarti-etal-2024-qe4pe,
    title={{QE4PE}: Word-level Quality Estimation for Human Post-Editing},
    author={Gabriele Sarti and Vilém Zouhar and Grzegorz Chrupała and Ana Guerberof-Arenas and Malvina Nissim and Arianna Bisazza},
    year={2025},
    eprint={2503.03044},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2503.03044},
}
```