---
task_categories:
- translation
language:
- kk
- en
- ru
- tr
pretty_name: Kazakh Parallel Corpus
dataset_info:
- config_name: kazparc_raw
  features:
  - name: id
    dtype: string
  - name: kk
    dtype: string
  - name: en
    dtype: string
  - name: ru
    dtype: string
  - name: tr
    dtype: string
  - name: domain
    dtype: string
  splits:
  - name: train
    num_bytes: 230957871
    num_examples: 371902
- config_name: kazparc
  features:
  - name: id
    dtype: string
  - name: source_lang
    dtype: string
  - name: target_lang
    dtype: string
  - name: domain
    dtype: string
  - name: pair
    dtype: string
  splits:
  - name: train
    num_bytes: 584249013
    num_examples: 1742956
  - name: validation
    num_bytes: 145898177
    num_examples: 435742
  - name: test
    num_bytes: 8936796
    num_examples: 28500
- config_name: sync_raw
  features:
  - name: id
    dtype: string
  - name: kk
    dtype: string
  - name: en
    dtype: string
  - name: ru
    dtype: string
  - name: tr
    dtype: string
  - name: domain
    dtype: string
  splits:
  - name: train
    num_bytes: 1278185141
    num_examples: 1797066
- config_name: sync
  features:
  - name: id
    dtype: string
  - name: source_lang
    dtype: string
  - name: target_lang
    dtype: string
  - name: domain
    dtype: string
  - name: pair
    dtype: string
  splits:
  - name: train
    num_bytes: 3654616080
    num_examples: 9654322
  - name: validation
    num_bytes: 405929897
    num_examples: 1072705
configs:
- config_name: kazparc_raw
  data_files:
  - split: train
    path: kazparc/01_kazparc_all_entries.csv
  default: true
- config_name: kazparc
  data_files:
  - split: train
    path:
    - kazparc/02_kazparc_train_kk_en.csv
    - kazparc/03_kazparc_train_kk_ru.csv
    - kazparc/04_kazparc_train_kk_tr.csv
    - kazparc/05_kazparc_train_en_ru.csv
    - kazparc/06_kazparc_train_en_tr.csv
    - kazparc/07_kazparc_train_ru_tr.csv
  - split: validation
    path:
    - kazparc/08_kazparc_valid_kk_en.csv
    - kazparc/09_kazparc_valid_kk_ru.csv
    - kazparc/10_kazparc_valid_kk_tr.csv
    - kazparc/11_kazparc_valid_en_ru.csv
    - kazparc/12_kazparc_valid_en_tr.csv
    - kazparc/13_kazparc_valid_ru_tr.csv
  - split: test
    path:
    - kazparc/14_kazparc_test_kk_en.csv
    - kazparc/15_kazparc_test_kk_ru.csv
    - kazparc/16_kazparc_test_kk_tr.csv
    - kazparc/17_kazparc_test_en_ru.csv
    - kazparc/18_kazparc_test_en_tr.csv
    - kazparc/19_kazparc_test_ru_tr.csv
- config_name: sync_raw
  data_files:
  - split: train
    path: sync/20_sync_all_entries.csv
- config_name: sync
  data_files:
  - split: train
    path:
    - sync/21_sync_train_kk_en.csv
    - sync/22_sync_train_kk_ru.csv
    - sync/23_sync_train_kk_tr.csv
    - sync/24_sync_train_en_ru.csv
    - sync/25_sync_train_en_tr.csv
    - sync/26_sync_train_ru_tr.csv
  - split: validation
    path:
    - sync/27_sync_valid_kk_en.csv
    - sync/28_sync_valid_kk_ru.csv
    - sync/29_sync_valid_kk_tr.csv
    - sync/30_sync_valid_en_ru.csv
    - sync/31_sync_valid_en_tr.csv
    - sync/32_sync_valid_ru_tr.csv
size_categories:
- 100K<n<1M
---
## Dataset Description
- **Repository:** https://github.com/IS2AI/KazParC
- **Paper:** https://arxiv.org/abs/2403.19399
<h1 align = "center">KazParC</h1>
<p align = "justify">Kazakh Parallel Corpus (KazParC) is a parallel corpus designed for machine translation across Kazakh, English, Russian, and Turkish. The first and largest publicly available corpus of its kind, KazParC contains a collection of 372,164 parallel sentences covering different domains and developed with the assistance of human translators.
</p>
<a style="text-decoration:none" name = "sources_domains"><h2 align = "center">Data Sources and Domains</h2></a>
<p align = "justify">The data sources include</p>
<ul>
<li>proverbs and sayings</li>
<li>terminology glossaries</li>
<li>phrasebooks</li>
<li>literary works</li>
<li>periodicals</li>
<li>language learning materials, including the SCoRE corpus by <a href = "https://www.torrossa.com/en/resources/an/5000845#page=118">Chujo et al. (2015)</a></li>
<li>educational video subtitle collections, such as QED by <a href = "http://www.lrec-conf.org/proceedings/lrec2014/pdf/877_Paper.pdf">Abdelali et al. (2014)</a></li>
<li>news items, such as KazNERD (<a href = "https://aclanthology.org/2022.lrec-1.44.pdf">Yeshpanov et al., 2022</a>) and WMT (<a href = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf">Tiedemann, 2012</a>)</li>
<li><a href = "https://www.ted.com/">TED</a> talks</li>
<li><a href = "https://adilet.zan.kz/">governmental and regulatory legal documents from Kazakhstan</a></li>
<li>communications from the <a href = "https://www.akorda.kz/">official website of the President of the Republic of Kazakhstan</a></li>
<li><a href = "https://www.un.org/">United Nations</a> publications</li>
<li>image captions from sources like <a href = "https://arxiv.org/pdf/1405.0312.pdf">COCO</a></li>
</ul>
<p align = "justify">The sources are categorised into five broad domains:</p>
<table align = "center">
<thead>
<tr align = "center">
<th rowspan="3">Domain</th>
<th align = "right" colspan="2" rowspan="2">lines</th>
<th colspan="8">tokens</th>
</tr>
<tr align = "right">
<th colspan="2">EN</th>
<th colspan="2">KK</th>
<th colspan="2">RU</th>
<th colspan="2">TR</th>
</tr>
<tr align = "right">
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
</tr>
</thead>
<tbody align = "right">
<tr>
<td align = "center">Mass media</td>
<td>120,547</td>
<td>32.4</td>
<td>1,817,276</td>
<td>28.3</td>
<td>1,340,346</td>
<td>28.6</td>
<td>1,454,430</td>
<td>29.0</td>
<td>1,311,985</td>
<td>28.5</td>
</tr>
<tr>
<td align = "center">General</td>
<td>94,988</td>
<td>25.5</td>
<td>844,541</td>
<td>13.1</td>
<td>578,236</td>
<td>12.3</td>
<td>618,960</td>
<td>12.3</td>
<td>608,020</td>
<td>13.2</td>
</tr>
<tr>
<td align = "center">Legal documents</td>
<td>77,183</td>
<td>20.8</td>
<td>2,650,626</td>
<td>41.3</td>
<td>1,925,561</td>
<td>41.0</td>
<td>1,991,222</td>
<td>39.7</td>
<td>1,880,081</td>
<td>40.8</td>
</tr>
<tr>
<td align = "center">Education and science</td>
<td>46,252</td>
<td>12.4</td>
<td>522,830</td>
<td>8.1</td>
<td>392,348</td>
<td>8.4</td>
<td>444,786</td>
<td>8.9</td>
<td>376,484</td>
<td>8.2</td>
</tr>
<tr>
<td align = "center">Fiction</td>
<td>32,932</td>
<td>8.9</td>
<td>589,001</td>
<td>9.2</td>
<td>456,385</td>
<td>9.7</td>
<td>510,168</td>
<td>10.2</td>
<td>433,968</td>
<td>9.4</td>
</tr>
<tr>
<td align = "center"><b>Total</b></td>
<td><b>371,902</b></td>
<td><b>100</b></td>
<td><b>6,424,274</b></td>
<td><b>100</b></td>
<td><b>4,692,876</b></td>
<td><b>100</b></td>
<td><b>5,019,566</b></td>
<td><b>100</b></td>
<td><b>4,610,538</b></td>
<td><b>100</b></td>
</tr>
</tbody>
</table>
<table align = "center">
<thead align = "center">
<tr>
<th>Pair</th>
<th># lines</th>
<th># sents</th>
<th># tokens</th>
<th># types</th>
</tr>
</thead>
<tbody align = "center">
<tr>
<td>KK&harr;EN</td>
<td>363,594</td>
<td>362,230<br>361,087</td>
<td>4,670,789<br>6,393,381</td>
<td>184,258<br>59,062</td>
</tr>
<tr>
<td>KK&harr;RU</td>
<td>363,482</td>
<td>362,230<br>362,748</td>
<td>4,670,593<br>4,996,031</td>
<td>184,258<br>183,204</td>
</tr>
<tr>
<td>KK&harr;TR</td>
<td>362,150</td>
<td>362,230<br>361,660</td>
<td>4,668,852<br>4,586,421</td>
<td>184,258<br>175,145</td>
</tr>
<tr>
<td>EN&harr;RU</td>
<td>363,456</td>
<td>361,087<br>362,748</td>
<td>6,392,301<br>4,994,310</td>
<td>59,062<br>183,204</td>
</tr>
<tr>
<td>EN&harr;TR</td>
<td>362,392</td>
<td>361,087<br>361,660</td>
<td>6,380,703<br>4,579,375</td>
<td>59,062<br>175,145</td>
</tr>
<tr>
<td>RU&harr;TR</td>
<td>363,324</td>
<td>362,748<br>361,660</td>
<td>4,999,850<br>4,591,847</td>
<td>183,204<br>175,145</td>
</tr>
</tbody>
</table>
<h2 align = "center">Synthetic Corpus</h2>
<p align = "justify">To make our parallel corpus more extensive, we carried out web crawling to gather a total of 1,797,066 sentences from English-language websites. These sentences were then automatically translated into Kazakh, Russian, and Turkish using the <a href = "https://translate.google.com/">Google Translate</a> service. We refer to this collection of data as 'SynC' (Synthetic Corpus).</p>
<table align = "center">
<thead align = "center">
<tr>
<th>Pair</th>
<th># lines</th>
<th># sents</th>
<th># tokens</th>
<th># types</th>
</tr>
</thead>
<tbody align = "center">
<tr>
<td>KK&harr;EN</td>
<td>1,787,050</td>
<td>1,782,192<br>1,781,019</td>
<td>26,630,960<br>35,291,705</td>
<td>685,135<br>300,556</td>
</tr>
<tr>
<td>KK&harr;RU</td>
<td>1,787,448</td>
<td>1,782,192<br>1,777,500</td>
<td>26,654,195<br>30,241,895</td>
<td>685,135<br>672,146</td>
</tr>
<tr>
<td>KK&harr;TR</td>
<td>1,791,425</td>
<td>1,782,192<br>1,782,257</td>
<td>26,726,439<br>27,865,860</td>
<td>685,135<br>656,294</td>
</tr>
<tr>
<td>EN&harr;RU</td>
<td>1,784,513</td>
<td>1,781,019<br>1,777,500</td>
<td>35,244,800<br>30,175,611</td>
<td>300,556<br>672,146</td>
</tr>
<tr>
<td>EN&harr;TR</td>
<td>1,788,564</td>
<td>1,781,019<br>1,782,257</td>
<td>35,344,188<br>27,806,708</td>
<td>300,556<br>656,294</td>
</tr>
<tr>
<td>RU&harr;TR</td>
<td>1,788,027</td>
<td>1,777,500<br>1,782,257</td>
<td>30,269,083<br>27,816,210</td>
<td>672,146<br>656,294</td>
</tr>
</tbody>
</table>
<h2 align = "center">Data Splits</h2>
<h3 align = "center">KazParC</h3>
<p align = "justify">We first created a test set by randomly selecting 250 unique and non-repeating rows from each of the sources outlined in <a href = "#sources_domains">Data Sources and Domains</a>.
The remaining data were divided into language pairs, following an 80/20 split, while ensuring that the distribution of domains was maintained within both the training and validation sets.</p>
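<p align = "justify">As a minimal sketch, a domain-stratified 80/20 split like this can be reproduced with pandas. The file and column names follow the raw config above; the authors' exact procedure and random seed are not stated, so the seed here is arbitrary:</p>

```python
import pandas as pd

# Load the raw KazParC entries (file name as listed in the corpus structure below).
df = pd.read_csv("kazparc/01_kazparc_all_entries.csv")

# Sample 80% of rows within each domain for training; the remainder is validation.
# random_state is an arbitrary choice, not the authors' seed.
train = df.groupby("domain").sample(frac=0.8, random_state=42)
valid = df.drop(train.index)
```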
<table align = "center">
<thead align = "center">
<tr>
<th rowspan="3">Pair</th>
<th colspan="4">Train</th>
<th colspan="4">Valid</th>
<th colspan="4">Test</th>
</tr>
<tr>
<th>#<br>lines</th>
<th>#<br>sents</th>
<th>#<br>tokens</th>
<th>#<br>types</th>
<th>#<br>lines</th>
<th>#<br>sents</th>
<th>#<br>tokens</th>
<th>#<br>types</th>
<th>#<br>lines</th>
<th>#<br>sents</th>
<th>#<br>tokens</th>
<th>#<br>types</th>
</tr>
</thead>
<tbody align = "center">
<tr>
<td>KK&harr;EN</td>
<td>290,877</td>
<td>286,958<br>286,197</td>
<td>3,693,263<br>5,057,687</td>
<td>164,766<br>54,311</td>
<td>72,719</td>
<td>72,426 <br>72,403</td>
<td>920,482<br>1,259,827</td>
<td>83,057<br>32,063</td>
<td>4,750</td>
<td>4,750 <br>4,750</td>
<td>57,044<br>75,867</td>
<td>17,475<br>9,729</td>
</tr>
<tr>
<td>KK&harr;RU</td>
<td>290,785</td>
<td>286,943 <br>287,215</td>
<td>3,689,799<br>3,945,741</td>
<td>164,995<br>165,882</td>
<td>72,697</td>
<td>72,413<br>72,439</td>
<td>923,750<br>988,374</td>
<td>82,958<br>87,519</td>
<td>4,750</td>
<td>4,750 <br>4,750</td>
<td>57,044<br>61,916</td>
<td>17,475<br>18,804</td>
</tr>
<tr>
<td>KK&harr;TR</td>
<td>289,720</td>
<td>286,694 <br>286,279</td>
<td>3,691,751<br>3,626,361</td>
<td>164,961<br>157,460</td>
<td>72,430</td>
<td>72,211 <br>72,190</td>
<td>920,057<br>904,199</td>
<td>82,698<br>80,885</td>
<td>4,750</td>
<td>4,750 <br>4,750</td>
<td>57,044<br>55,861</td>
<td>17,475<br>17,284</td>
</tr>
<tr>
<td>EN&harr;RU</td>
<td>290,764</td>
<td>286,185 <br>287,261</td>
<td>5,058,530<br>3,950,362</td>
<td>54,322<br>165,701</td>
<td>72,692</td>
<td>72,377 <br>72,427</td>
<td>1,257,904<br>982,032</td>
<td>32,208<br>87,541</td>
<td>4,750</td>
<td>4,750 <br>4,750</td>
<td>75,867<br>61,916</td>
<td>9,729<br>18,804</td>
</tr>
<tr>
<td>EN&harr;TR</td>
<td>289,913</td>
<td>285,967<br>286,288</td>
<td>5,048,274<br>3,621,531</td>
<td>54,224<br>157,369</td>
<td>72,479</td>
<td>72,220 <br>72,219</td>
<td>1,256,562<br>901,983</td>
<td>32,269<br>80,838</td>
<td>4,750</td>
<td>4,750 <br>4,750</td>
<td>75,867<br>55,861</td>
<td>9,729<br>17,284</td>
</tr>
<tr>
<td>RU&harr;TR</td>
<td>290,899</td>
<td>287,241 <br>286,475</td>
<td>3,947,809<br>3,626,436</td>
<td>165,482<br>157,470</td>
<td>72,725</td>
<td>72,455<br>72,362</td>
<td>990,125<br>909,550</td>
<td>87,831<br>80,962</td>
<td>4,750</td>
<td>4,750 <br>4,750</td>
<td>61,916<br>55,861</td>
<td>18,804<br>17,284</td>
</tr>
</tbody>
</table>
<h3 align = "center">SynC</h3>
<p align = "justify">We divided the synthetic corpus into training and validation sets with a 90/10 ratio.</p>
<table align = "center">
<thead align = "center">
<tr>
<th rowspan="2">Pair</th>
<th colspan="4">Train</th>
<th colspan="4">Valid</th>
</tr>
<tr>
<th># lines</th>
<th># sents</th>
<th># tokens</th>
<th># types</th>
<th># lines</th>
<th># sents</th>
<th># tokens</th>
<th># types</th>
</tr>
</thead>
<tbody align = "center">
<tr>
<td>KK&harr;EN</td>
<td>1,608,345</td>
<td>1,604,414<br>1,603,426</td>
<td>23,970,260<br>31,767,617</td>
<td>650,144<br>286,372</td>
<td>178,705</td>
<td>178,654<br>178,639</td>
<td>2,660,700<br>3,524,088</td>
<td>208,838<br>105,517</td>
</tr>
<tr>
<td>KK&harr;RU</td>
<td>1,608,703</td>
<td>1,604,468<br>1,600,643</td>
<td>23,992,148<br>27,221,583</td>
<td>650,170<br>642,604</td>
<td>178,745</td>
<td>178,691<br>178,642</td>
<td>2,662,047<br>3,020,312</td>
<td>209,188<br>235,642</td>
</tr>
<tr>
<td>KK&harr;TR</td>
<td>1,612,282</td>
<td>1,604,793<br>1,604,822</td>
<td>24,053,671<br>25,078,688</td>
<td>650,384<br>626,724</td>
<td>179,143</td>
<td>179,057<br>179,057</td>
<td>2,672,768<br>2,787,172</td>
<td>209,549<br>221,773</td>
</tr>
<tr>
<td>EN&harr;RU</td>
<td>1,606,061</td>
<td>1,603,199<br>1,600,372</td>
<td>31,719,781<br>27,158,101</td>
<td>286,645<br>642,686</td>
<td>178,452</td>
<td>178,419<br>178,379</td>
<td>3,525,019<br>3,017,510</td>
<td>104,834<br>235,069</td>
</tr>
<tr>
<td>EN&harr;TR</td>
<td>1,609,707</td>
<td>1,603,636<br>1,604,545</td>
<td>31,805,393<br>25,022,782</td>
<td>286,387<br>626,740</td>
<td>178,857</td>
<td>178,775<br>178,796</td>
<td>3,538,795<br>2,783,926</td>
<td>105,641<br>221,372</td>
</tr>
<tr>
<td>RU&harr;TR</td>
<td>1,609,224</td>
<td>1,600,605<br>1,604,521</td>
<td>27,243,278<br>25,035,274</td>
<td>642,797<br>626,587</td>
<td>178,803</td>
<td>178,695<br>178,750</td>
<td>3,025,805<br>2,780,936</td>
<td>235,970<br>221,792</td>
</tr>
</tbody>
</table>
<h2 align = "center">Corpus Structure</h2>
<p align = "justify">The entire corpus</a> is organised into two distinct groups based on their file prefixes. Files "01" through "19" have the "kazparc" prefix, while Files "20" to "32" have the "sync" prefix.</p>
```
├── kazparc
│   ├── 01_kazparc_all_entries.csv
│   ├── 02_kazparc_train_kk_en.csv
│   ├── 03_kazparc_train_kk_ru.csv
│   ├── 04_kazparc_train_kk_tr.csv
│   ├── 05_kazparc_train_en_ru.csv
│   ├── 06_kazparc_train_en_tr.csv
│   ├── 07_kazparc_train_ru_tr.csv
│   ├── 08_kazparc_valid_kk_en.csv
│   ├── 09_kazparc_valid_kk_ru.csv
│   ├── 10_kazparc_valid_kk_tr.csv
│   ├── 11_kazparc_valid_en_ru.csv
│   ├── 12_kazparc_valid_en_tr.csv
│   ├── 13_kazparc_valid_ru_tr.csv
│   ├── 14_kazparc_test_kk_en.csv
│   ├── 15_kazparc_test_kk_ru.csv
│   ├── 16_kazparc_test_kk_tr.csv
│   ├── 17_kazparc_test_en_ru.csv
│   ├── 18_kazparc_test_en_tr.csv
│   └── 19_kazparc_test_ru_tr.csv
└── sync
    ├── 20_sync_all_entries.csv
    ├── 21_sync_train_kk_en.csv
    ├── 22_sync_train_kk_ru.csv
    ├── 23_sync_train_kk_tr.csv
    ├── 24_sync_train_en_ru.csv
    ├── 25_sync_train_en_tr.csv
    ├── 26_sync_train_ru_tr.csv
    ├── 27_sync_valid_kk_en.csv
    ├── 28_sync_valid_kk_ru.csv
    ├── 29_sync_valid_kk_tr.csv
    ├── 30_sync_valid_en_ru.csv
    ├── 31_sync_valid_en_tr.csv
    └── 32_sync_valid_ru_tr.csv
```
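<p align = "justify">Because every file is a plain CSV, individual files can also be read directly with pandas. The URL below follows the standard Hugging Face "resolve" pattern for this repository and is shown for illustration:</p>

```python
import pandas as pd

# Read one pre-processed training file straight from the Hub
# (URL assumes the standard resolve/main pattern for the issai/kazparc repo).
url = "https://huggingface.co/datasets/issai/kazparc/resolve/main/kazparc/02_kazparc_train_kk_en.csv"
df = pd.read_csv(url)
print(df.head())
```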
<h3 align = "center">KazParC files</h3>
<ul>
<li>File "01" contains the original, unprocessed text data for the four languages considered within KazParC.
<li>Files "02" through "19" represent pre-processed texts divided into language pairs for training (Files "02" to "07"), validation (Files "08" to "13"), and testing (Files "14" to "19"). Language pairs are indicated within the filenames using two-letter language codes (e.g., kk_en).
</ul>
<h3 align = "center">SynC files</h3>
<ul>
<li>File "20" contains raw, unprocessed text data for the four languages.</li>
<li>Files "21" to "32" contain pre-processed text divided into language pairs for training (Files "21" to "26") and validation (Files "27" to "32") purposes.</li>
</ul>
<h3 align = "center">Data Fields</h3>
<p align = "justify">In both "01" and "20", each line consists of specific components:</p>
- `id`: the unique line identifier
- `kk`: the sentence in Kazakh
- `en`: the sentence in English
- `ru`: the sentence in Russian
- `tr`: the sentence in Turkish
- `domain`: the domain of the sentence
<p align = "justify">For the other files, the fields are:</p>
- `id`: the unique line identifier
- `source_lang`: the sentence in the source language
- `target_lang`: the sentence in the target language
- `domain`: the domain of the sentence
- `pair`: the language pair
<h2 align = "center">How to Use</h2>
To load the subsets of KazParC separately:
```python
from datasets import load_dataset

# Raw (unprocessed) and pre-processed KazParC subsets
kazparc_raw = load_dataset("issai/kazparc", "kazparc_raw")
kazparc = load_dataset("issai/kazparc", "kazparc")

# Raw and pre-processed synthetic (SynC) subsets
sync_raw = load_dataset("issai/kazparc", "sync_raw")
sync = load_dataset("issai/kazparc", "sync")
```
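Once loaded, the pre-processed configs can be filtered by language pair or domain, or converted to pandas for inspection. A brief sketch (the exact encoding of the `pair` values, e.g. "kk_en", is assumed from the filenames):

```python
# Filter the training split down to Kazakh-English rows
# (the "kk_en" value for `pair` is assumed from the filenames).
kk_en = kazparc["train"].filter(lambda row: row["pair"] == "kk_en")

# Inspect the first record and its fields.
print(kk_en[0])

# Convert a split to a pandas DataFrame for further analysis.
df = kazparc["validation"].to_pandas()
print(df["domain"].value_counts())
```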