Update README.md
README.md
CHANGED
@@ -35,13 +35,29 @@

### Dataset Summary

-

-For the second edition of the Large-Scale MT shared task, we aim to bring together the community on the topic of machine translation for a set of 24 African languages. We do so by introducing a high quality benchmark, paired with a fair and rigorous evaluation procedure.

### Supported Tasks and Leaderboards

-[More Information Needed]

### Languages

@@ -78,6 +94,8 @@ Colonial linguae francae: English - eng, French - fra

## Dataset Structure

### Data Instances

The dataset contains 248 language pairs.

@@ -354,33 +372,33 @@ Example:
```
### Data Splits

-

## Dataset Creation

### Curation Rationale

-[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

-

#### Who are the source language producers?

-

### Annotations

#### Annotation process

-The metadata for creation can be found here: https://github.com/facebookresearch/LASER/tree/main/data/wmt22_african

#### Who are the annotators?

-[More Information Needed]

### Personal and Sensitive Information

@@ -390,11 +408,11 @@ The metadata for creation can be found here: https://github.com/facebookresearch

### Social Impact of Dataset

-

### Discussion of Biases

-

### Other Known Limitations

@@ -412,4 +430,4 @@ We are releasing this dataset under the terms of [CC-BY-NC](https://github.com/f

### Citation Information

-

### Dataset Summary

+This dataset was created based on [metadata](https://github.com/facebookresearch/LASER/tree/main/data/wmt22_african) for mined bitext released by Meta AI. It contains bitext for 248 language pairs covering the African languages that are part of the [2022 WMT Shared Task on Large Scale Machine Translation Evaluation for African Languages](https://www.statmt.org/wmt22/large-scale-multilingual-translation-task.html).
+
+#### How to use the data
+
+There are two ways to access the data:
+
+* Via the Hugging Face Python datasets library
+
+```
+from datasets import load_dataset
+dataset = load_dataset("allenai/wmt22_african")
+```
+
+* Clone the git repo
+```
+git lfs install
+git clone https://huggingface.co/datasets/allenai/wmt22_african
+```
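After cloning the repo, the gzipped tab-delimited files can be read with the Python standard library alone. A minimal sketch, assuming one gzipped file per direction with tab-separated source/target sentences per line (the file name in the comment is illustrative, not an actual file in the repo):

```python
import gzip

def read_bitext(path):
    """Yield (source, target) sentence pairs from a gzipped tab-delimited file."""
    with gzip.open(path, mode="rt", encoding="utf-8") as f:
        for line in f:
            cols = line.rstrip("\n").split("\t")
            if len(cols) >= 2:  # skip malformed lines
                yield cols[0], cols[1]

# "afr-eng.gz" is an illustrative name; check the cloned repo for actual file names.
# for src, tgt in read_bitext("afr-eng.gz"):
#     print(src, "->", tgt)
```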

### Supported Tasks and Leaderboards

+This dataset is one of the resources allowed under the Constrained Track for the [2022 WMT Shared Task on Large Scale Machine Translation Evaluation for African Languages](https://www.statmt.org/wmt22/large-scale-multilingual-translation-task.html).

### Languages


## Dataset Structure

+The dataset contains gzipped, tab-delimited text files for each direction. Each text file contains lines with parallel sentences.
+
### Data Instances

The dataset contains 248 language pairs.

```
### Data Splits

+The data is not split into train, dev, and test sets.

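Since no official train/dev/test split is shipped, users who need one can partition the pairs themselves. A minimal sketch of a seeded random split over a list of sentence pairs (the split sizes are arbitrary choices, not part of the dataset):

```python
import random

def split_bitext(pairs, dev_size=1000, test_size=1000, seed=0):
    """Randomly partition sentence pairs into train/dev/test lists."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # deterministic given the seed
    test = pairs[:test_size]
    dev = pairs[test_size:test_size + dev_size]
    train = pairs[test_size + dev_size:]
    return train, dev, test
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing systems trained on the same data.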
## Dataset Creation

### Curation Rationale

+Parallel sentences from monolingual data in Common Crawl and ParaCrawl were identified via [Language-Agnostic Sentence Representation (LASER)](https://github.com/facebookresearch/LASER) encoders.

### Source Data

#### Initial Data Collection and Normalization

+Monolingual data was obtained from Common Crawl and ParaCrawl.

#### Who are the source language producers?

+Contributors of web text to Common Crawl and ParaCrawl.

### Annotations

#### Annotation process

+The data was not human-annotated. The metadata used to create the dataset can be found here: https://github.com/facebookresearch/LASER/tree/main/data/wmt22_african

#### Who are the annotators?

+The data was not human-annotated. Parallel text was identified automatically in Common Crawl and ParaCrawl monolingual data via [LASER](https://github.com/facebookresearch/LASER) encoders.

### Personal and Sensitive Information

### Social Impact of Dataset

+This dataset provides data for training machine learning systems for many languages that have few NLP resources available.

### Discussion of Biases

+Biases in the data have not been studied.

### Other Known Limitations

### Citation Information

+A forthcoming research paper describes the approach used to create the metadata. The citation information will be updated when the paper is available.