jsaizant committed on
Commit cbdd95a · verified · 1 Parent(s): d71435e

Update README.md

Files changed (1)
  1. README.md +20 -15
README.md CHANGED
@@ -278,18 +278,19 @@ for output in outputs:
 
 ### Pretraining Data
 
- The training corpus consists of 2.4 trillion tokens, including 35 European languages and 92 programming languages. It amounts to a total of 33TB of pre-processed text.
- Languages were sampled manually by giving x2 oversampling to Spain's co-official languages (Spanish, Catalan, Galician and Basque), code was undersampled by half,
- and the rest of the languages were kept as is, resulting in the following distribution:
+ The pre-training corpus comprises data from 35 European languages and 92 programming languages, with detailed data sources provided below.
+ The initial three training epochs each used 2.4 trillion tokens, obtained by manually adjusting the data proportions to balance the representation
+ and give more weight to Spain's co-official languages (Spanish, Catalan, Galician, and Basque). To this end, code and English data were downsampled to half,
+ the co-official languages were oversampled by 2x, and the remaining languages were kept in their original proportions.
+ We then trained two additional epochs, during which the English portion of the Colossal OSCAR dataset was replaced with the FineWebEdu dataset.
+ This adjustment resulted in a total of 2.08 trillion tokens per epoch, distributed as outlined below:
 
 ![lang distrib](./images/corpus_languages.png)
 
- This highly multilingual corpus is predominantly composed of data from Colossal OSCAR,
- which contributes a significant 66.06% of the total tokens.
- Following this, Starcoder provides 11.91%, and Spanish Crawling adds 3.34%.
- The next largest sources are French PD at 3.12% and Proof Pile at 1.98%.
- Other notable contributions include Macocu, Pile of Law, and Eurlex, each contributing around 1.5% to 1.3%.
- These major sources collectively form the bulk of the corpus, ensuring a rich and diverse dataset for training the language model.
+ The pre-training corpus is predominantly composed of data from Colossal OSCAR, which contributes a significant 53.05% of the total tokens.
+ Following this, Starcoder provides 13.67%, and FineWebEdu (a 350B-token subset) adds 10.24%. The next largest sources are HPLT at 4.21% and French-PD at 3.59%.
+ Other notable contributions include MaCoCu, Legal-ES, and EurLex, each contributing between 1.41% and 1.72%.
+ These major sources collectively form the bulk of the corpus, ensuring a rich and diverse dataset for training the language model.
 The remaining 10% comes from smaller sources in various languages.
 
 Feel free to click the expand button below to see the full list of sources.
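
The added paragraph above describes a simple re-weighting scheme: 2x oversampling for Spain's co-official languages, 0.5x for English and code, and 1x for everything else. The sketch below only illustrates that arithmetic; it is not the project's actual data pipeline, and the raw token counts are hypothetical placeholders (only the multipliers come from the README text).

```python
# Illustrative sketch of the sampling scheme described in the diff above.
# The raw token counts below are hypothetical placeholders; only the multipliers
# (2x for Spain's co-official languages, 0.5x for English and code, 1x otherwise)
# come from the README text.

CO_OFFICIAL = {"es", "ca", "gl", "eu"}   # Spanish, Catalan, Galician, Basque
DOWNSAMPLED = {"en", "code"}             # English and programming-language data

raw_tokens_billions = {   # hypothetical raw counts, in billions of tokens
    "en": 1500.0, "code": 250.0, "es": 150.0, "ca": 18.0,
    "gl": 3.0, "eu": 2.5, "fr": 120.0, "de": 90.0,
}

def sampling_factor(lang: str) -> float:
    """Return the over-/undersampling multiplier applied to a language."""
    if lang in CO_OFFICIAL:
        return 2.0
    if lang in DOWNSAMPLED:
        return 0.5
    return 1.0

effective = {lang: n * sampling_factor(lang) for lang, n in raw_tokens_billions.items()}
total = sum(effective.values())

for lang, n in sorted(effective.items(), key=lambda kv: -kv[1]):
    print(f"{lang:>5}: {n:8.1f}B tokens ({100 * n / total:5.2f}% of one epoch)")
```
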
@@ -428,8 +429,9 @@ To consult the data summary document with the respective licences, please send a
 
 </details>
 
- The model was trained for 3 epochs, with two final rounds of 0.3B higher-quality tokens each,
- meaning that the total number of tokens seen during pre-training amounts to roughly 7.8 trillion tokens.
+ The model was trained for 3 pre-training epochs with 2.4T tokens per epoch, followed by 2 additional pre-training epochs in which the English portion
+ of the Colossal OSCAR dataset was replaced with FineWebEdu (a 350B-token subset), resulting in 2.08T tokens per epoch,
+ and 1 final round of 0.315T higher-quality tokens, meaning that the total number of tokens seen during pre-training is approximately 11.675 trillion.
 
 We provide an extensive Datasheet section following the best practices defined by [(Gebru et al., 2021)](https://arxiv.org/pdf/1803.09010).
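
As a quick sanity check on the token accounting in the hunk above, the stated total follows directly from the per-phase figures (3 epochs at 2.4T, 2 epochs at 2.08T, and a final 0.315T round). The snippet below simply reproduces that arithmetic; every number is taken from the diff itself.

```python
# Reproduce the pre-training token accounting stated in the diff above.
# All figures are in trillions of tokens and are taken directly from the diff.

phases = [
    ("initial epochs (with Colossal OSCAR)", 3, 2.4),    # 3 epochs x 2.4T tokens
    ("epochs with the FineWebEdu swap",      2, 2.08),   # 2 epochs x 2.08T tokens
    ("final higher-quality round",           1, 0.315),  # 1 round  x 0.315T tokens
]

total = 0.0
for name, rounds, tokens_per_round in phases:
    subtotal = rounds * tokens_per_round
    total += subtotal
    print(f"{name:<38} {rounds} x {tokens_per_round:>5}T = {subtotal:6.3f}T")

print(f"{'total tokens seen during pre-training':<38} {total:6.3f}T")  # ~= 11.675T
```
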
@@ -463,6 +465,9 @@ and public institutions, which can be found in detail in the acknowledgements.
 
 This work/research has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/).
 
+ This work is also funded by the _Ministerio para la Transformación Digital y de la Función Pública_ (funded by the EU through NextGenerationEU)
+ within the framework of the [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337.
+
 #### Composition
 
 **What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.**
@@ -486,10 +491,10 @@ We provide a complete list of dataset sources at the end of this section.
 **How many instances are there in total (of each type, if appropriate)?**
 
 The dataset contains a diverse range of instances across multiple languages, with notable adjustments for certain languages. English
- represents the largest portion, accounting for 39.08% of the total data. Spanish was upsampled by a factor of 2, bringing its share to 16.59%,
- while Catalan (1.84%), Basque (0.26%), and Galician (0.36%) were also upsampled by 2. On the other hand, code-related data was downsampled
- by half, making up 6.42% of the total. Other prominent languages include French (6.59%), Russian (5.39%), German (4.25%), and Hungarian
- (3.93%), with several additional languages contributing between 1% and 2%, and smaller portions represented by a variety of others.
+ represents the largest portion, accounting for 39.31% of the total data. Spanish was upsampled by a factor of 2, bringing its share to 16.12%,
+ while Catalan (1.97%), Basque (0.24%), and Galician (0.31%) were also upsampled by 2. On the other hand, code-related data was downsampled
+ by half, making up 5.78% of the total. Other prominent languages include French (6.6%), Russian (5.56%), German (4.79%), and Hungarian
+ (4.59%), with several additional languages contributing between 1% and 2%, and smaller portions represented by a variety of others.
 
 **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).**