datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
AI-MO/NuminaMath-TIR | AI-MO | "2024-11-25T05:32:53Z" | 15,477 | 117 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"math",
"aimo"
] | [
"text-generation"
] | "2024-07-16T07:44:24Z" | ---
language:
- en
license: apache-2.0
task_categories:
- text-generation
pretty_name: NuminaMath TIR
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 327147067
num_examples: 72441
- name: test
num_bytes: 461331
num_examples: 99
download_size: 147557990
dataset_size: 327608398
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- math
- aimo
---
# Dataset Card for NuminaMath TIR
## Dataset Description
- **Homepage:** https://projectnumina.ai
- **Repository:** https://github.com/project-numina/aimo-progress-prize
- **Paper:** https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf
- **Leaderboard:**
- **Point of Contact:** [Jia Li]([email protected])
### Dataset Summary
Tool-integrated reasoning (TIR) plays a crucial role in this competition. However, collecting and annotating such data is both costly and time-consuming. To address this, we selected approximately 70k problems from the NuminaMath-CoT dataset, focusing on those with numerical outputs, most of which are integers. We then utilized a pipeline leveraging GPT-4 to generate TORA-like reasoning paths, executing the code and producing results until the solution was complete. We filtered out solutions where the final answer did not match the reference and repeated this process three times to ensure accuracy and consistency. This iterative approach allowed us to generate high-quality TORA data efficiently.
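A minimal sketch of loading and inspecting the resulting data, assuming the Hugging Face `datasets` library and the default config and fields (`problem`, `solution`, `messages`) listed in the metadata above:
```python
# Sketch: load the train split of AI-MO/NuminaMath-TIR and inspect one example.
from datasets import load_dataset

ds = load_dataset("AI-MO/NuminaMath-TIR", split="train")  # ~72k problems
example = ds[0]
print(example["problem"])               # problem statement
print(example["solution"])              # tool-integrated reasoning path with executed code
print(example["messages"][0]["role"])   # the same pair in chat-message format
```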
### Licensing Information
The dataset is available under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@misc{numina_math_datasets,
author = {Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu},
title = {NuminaMath TIR},
year = {2024},
publisher = {Numina},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-TIR}}
}
``` |
allenai/sciq | allenai | "2024-01-04T16:23:51Z" | 15,458 | 103 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: sciq
pretty_name: SciQ
dataset_info:
features:
- name: question
dtype: string
- name: distractor3
dtype: string
- name: distractor1
dtype: string
- name: distractor2
dtype: string
- name: correct_answer
dtype: string
- name: support
dtype: string
splits:
- name: train
num_bytes: 6546183
num_examples: 11679
- name: validation
num_bytes: 554120
num_examples: 1000
- name: test
num_bytes: 563927
num_examples: 1000
download_size: 4674410
dataset_size: 7664230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "sciq"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/sciq](https://allenai.org/data/sciq)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
### Dataset Summary
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"correct_answer": "coriolis effect",
"distractor1": "muon effect",
"distractor2": "centrifugal effect",
"distractor3": "tropical effect",
"question": "What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?",
"support": "\"Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `question`: a `string` feature.
- `distractor3`: a `string` feature.
- `distractor1`: a `string` feature.
- `distractor2`: a `string` feature.
- `correct_answer`: a `string` feature.
- `support`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|11679| 1000|1000|
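A minimal sketch of loading the dataset and rebuilding the four answer options per question, assuming the Hugging Face `datasets` library and the fields listed above:
```python
# Sketch: load allenai/sciq and shuffle the correct answer among the distractors.
import random
from datasets import load_dataset

sciq = load_dataset("allenai/sciq")  # splits: train / validation / test
ex = sciq["train"][0]
options = [ex["correct_answer"], ex["distractor1"], ex["distractor2"], ex["distractor3"]]
random.shuffle(options)
print(ex["question"])
print(options)
print("support:", ex["support"][:200])  # supporting paragraph (may be empty)
```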
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the [Creative Commons Attribution-NonCommercial 3.0 Unported License](http://creativecommons.org/licenses/by-nc/3.0/).
### Citation Information
```
@inproceedings{SciQ,
title={Crowdsourcing Multiple Choice Science Questions},
 author={Johannes Welbl and Nelson F. Liu and Matt Gardner},
year={2017},
journal={arXiv:1707.06209v1}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
lmms-lab/GQA | lmms-lab | "2024-03-08T05:02:22Z" | 15,443 | 13 | [
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-12-26T13:11:16Z" | ---
license: mit
dataset_info:
- config_name: challenge_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: challenge
num_bytes: 261636425.25
num_examples: 1590
download_size: 261271928
dataset_size: 261636425.25
- config_name: challenge_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: challenge
num_bytes: 50797705
num_examples: 713449
download_size: 19869828
dataset_size: 50797705
- config_name: challenge_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: challenge
num_bytes: 261636425.25
num_examples: 1590
download_size: 261333538
dataset_size: 261636425.25
- config_name: challenge_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: challenge
num_bytes: 3523973
num_examples: 50726
download_size: 1787024
dataset_size: 3523973
- config_name: submission_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: submission
num_bytes: 2314978438.875
num_examples: 15545
download_size: 2309217874
dataset_size: 2314978438.875
- config_name: submission_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: submission
num_bytes: 298875520
num_examples: 4237524
download_size: 121458425
dataset_size: 298875520
- config_name: test_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: test
num_bytes: 492571840.875
num_examples: 2993
download_size: 491611526
dataset_size: 492571840.875
- config_name: test_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: test
num_bytes: 95588974
num_examples: 1340048
download_size: 39561711
dataset_size: 95588974
- config_name: test_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: test
num_bytes: 491210370.625
num_examples: 2987
download_size: 490293506
dataset_size: 491210370.625
- config_name: test_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: isBalanced
dtype: bool
splits:
- name: test
num_bytes: 6622775
num_examples: 95336
download_size: 3401070
dataset_size: 6622775
- config_name: testdev_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: testdev
num_bytes: 65779269.0
num_examples: 398
download_size: 65670255
dataset_size: 65779269.0
- config_name: testdev_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: testdev
num_bytes: 86970760
num_examples: 172174
download_size: 23385535
dataset_size: 86970760
- config_name: testdev_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: testdev
num_bytes: 65779269.0
num_examples: 398
download_size: 65647745
dataset_size: 65779269.0
- config_name: testdev_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: testdev
num_bytes: 6113469
num_examples: 12578
download_size: 2090335
dataset_size: 6113469
- config_name: train_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 10509758457.0
num_examples: 74256
download_size: 10480239090
dataset_size: 10509758457.0
- config_name: train_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: train
num_bytes: 6891129609
num_examples: 14305356
download_size: 1874173198
dataset_size: 6891129609
- config_name: train_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 10200292415.5
num_examples: 72140
download_size: 10171627271
dataset_size: 10200292415.5
- config_name: train_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: train
num_bytes: 460429581
num_examples: 943000
download_size: 183979778
dataset_size: 460429581
- config_name: val_all_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: val
num_bytes: 1494990904.5
num_examples: 10564
download_size: 1490744689
dataset_size: 1494990904.5
- config_name: val_all_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: val
num_bytes: 967338322
num_examples: 2011853
download_size: 266476025
dataset_size: 967338322
- config_name: val_balanced_images
features:
- name: id
dtype: string
- name: image
dtype: image
splits:
- name: val
num_bytes: 1447074448.75
num_examples: 10234
download_size: 1443033919
dataset_size: 1447074448.75
- config_name: val_balanced_instructions
features:
- name: id
dtype: string
- name: imageId
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: isBalanced
dtype: bool
- name: groups
struct:
- name: global
dtype: string
- name: local
dtype: string
- name: entailed
dtype: string
- name: equivalent
dtype: string
- name: types
struct:
- name: structural
dtype: string
- name: semantic
dtype: string
- name: detailed
dtype: string
- name: annotations
sequence:
- name: question
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: answer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: fullAnswer
struct:
- name: objectId
dtype: string
- name: value
dtype: string
- name: semantic
list:
- name: operation
dtype: string
- name: argument
dtype: string
- name: dependencies
sequence: int32
- name: semanticStr
dtype: string
splits:
- name: val
num_bytes: 64498952
num_examples: 132062
download_size: 25794272
dataset_size: 64498952
configs:
- config_name: challenge_all_images
data_files:
- split: challenge
path: challenge_all_images/challenge-*
- config_name: challenge_all_instructions
data_files:
- split: challenge
path: challenge_all_instructions/challenge-*
- config_name: challenge_balanced_images
data_files:
- split: challenge
path: challenge_balanced_images/challenge-*
- config_name: challenge_balanced_instructions
data_files:
- split: challenge
path: challenge_balanced_instructions/challenge-*
- config_name: submission_all_images
data_files:
- split: submission
path: submission_all_images/submission-*
- config_name: submission_all_instructions
data_files:
- split: submission
path: submission_all_instructions/submission-*
- config_name: test_all_images
data_files:
- split: test
path: test_all_images/test-*
- config_name: test_all_instructions
data_files:
- split: test
path: test_all_instructions/test-*
- config_name: test_balanced_images
data_files:
- split: test
path: test_balanced_images/test-*
- config_name: test_balanced_instructions
data_files:
- split: test
path: test_balanced_instructions/test-*
- config_name: testdev_all_images
data_files:
- split: testdev
path: testdev_all_images/testdev-*
- config_name: testdev_all_instructions
data_files:
- split: testdev
path: testdev_all_instructions/testdev-*
- config_name: testdev_balanced_images
data_files:
- split: testdev
path: testdev_balanced_images/testdev-*
- config_name: testdev_balanced_instructions
data_files:
- split: testdev
path: testdev_balanced_instructions/testdev-*
- config_name: train_all_images
data_files:
- split: train
path: train_all_images/train-*
- config_name: train_all_instructions
data_files:
- split: train
path: train_all_instructions/train-*
- config_name: train_balanced_images
data_files:
- split: train
path: train_balanced_images/train-*
- config_name: train_balanced_instructions
data_files:
- split: train
path: train_balanced_instructions/train-*
- config_name: val_all_images
data_files:
- split: val
path: val_all_images/val-*
- config_name: val_all_instructions
data_files:
- split: val
path: val_all_instructions/val-*
- config_name: val_balanced_images
data_files:
- split: val
path: val_balanced_images/val-*
- config_name: val_balanced_instructions
data_files:
- split: val
path: val_balanced_instructions/val-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [GQA](https://cs.stanford.edu/people/dorarad/gqa/about.html). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
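As a minimal sketch (assuming the Hugging Face `datasets` library), one of the configs listed in the metadata above can be loaded and its questions joined to the matching images via `imageId`:
```python
# Sketch: pair GQA testdev questions with their images from lmms-lab/GQA.
from datasets import load_dataset

questions = load_dataset("lmms-lab/GQA", "testdev_balanced_instructions", split="testdev")
images = load_dataset("lmms-lab/GQA", "testdev_balanced_images", split="testdev")

id_to_image = {row["id"]: row["image"] for row in images}  # 398 PIL images
q = questions[0]
print(q["question"], "->", q["answer"])
print(id_to_image[q["imageId"]].size)
```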
```
@inproceedings{hudson2019gqa,
title={Gqa: A new dataset for real-world visual reasoning and compositional question answering},
author={Hudson, Drew A and Manning, Christopher D},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={6700--6709},
year={2019}
}
``` |
allenai/s2-naip | allenai | "2024-05-31T21:06:47Z" | 15,427 | 18 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-03-06T03:10:43Z" | ---
license: apache-2.0
---
AI2-S2-NAIP is a remote sensing dataset consisting of aligned NAIP, Sentinel-2, Sentinel-1, and Landsat images spanning the entire continental US.
Data is divided into tiles.
Each tile spans 512x512 pixels at 1.25 m/pixel in one of the 10 UTM projections covering the continental US.
At each tile, the following data is available:
- [National Agriculture Imagery Program (NAIP)](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-aerial-photography-national-agriculture-imagery-program-naip): an image from 2019-2021 at 1.25 m/pixel (512x512).
- [Sentinel-2 (L1C)](https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-2): between 16 and 32 images captured within a few months of the NAIP image at 10 m/pixel (64x64).
- [Sentinel-1](https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1): between 2 and 8 images captured within a few months of the NAIP image at 10 m/pixel (64x64).
- [Landsat-8/9](https://www.usgs.gov/landsat-missions/landsat-8): 4 images captured in the same year as the NAIP image at 10 m/pixel (64x64).
- [OpenStreetMap](https://www.openstreetmap.org): a GeoJSON containing buildings, roads, and 30 other categories. It uses pixel coordinates relative to the 512x512 NAIP image.
- [WorldCover](https://worldcover2021.esa.int/): the 2021 land cover image at 10 m/pixel (64x64).
AI2-S2-NAIP is applicable to several supervised and unsupervised tasks in remote sensing, including super-resolution (e.g. NAIP -> Sentinel-2), segmentation and detection (e.g. NAIP or Sentinel-2 -> OpenStreetMap or WorldCover), and multi-modal masked autoencoder pre-training.
For questions or feedback about AI2-S2-NAIP, please open an issue on Github at https://github.com/allenai/satlas.

Structure
---------
Once extracted, the dataset contains the different data types in different folders.
Each folder contains files named by a tile ID, which consists of the UTM projection, column, and row.
The column and row are based on tiles that are 512x512 pixels with pixel coordinates at 1.25 m/pixel, e.g. `32612_960_-6049.png` spans (614400, -3871360) to (615040, -3870720) in EPSG:32612 projection units.
Here is an example of NAIP data:
```
naip/
32612_960_-6049.png
32612_960_-6050.png
32612_960_-6051.png
...
```
And an example of Sentinel-2 data:
```
sentinel2/
32612_960_-6049_16.tif
32612_960_-6049_32.tif
32612_960_-6049_8.tif
32612_960_-6050_16.tif
...
```
The Sentinel-2, Sentinel-1, and Landsat images are GeoTIFFS so they contain georeference metadata.
Other data does not have georeference metadata, but data at each tile is aligned, so the georeference metadata from the above images is applicable to the other data as well with only a resolution shift.
Mapping Longitude and Latitude to Tile
--------------------------------------
Here is an example of mapping longitude and latitude to a tile.
First install the required packages:
```
pip install rasterio shapely utm
```
Then launch a Python shell:
```
from rasterio.crs import CRS
from rasterio.warp import transform_geom
import shapely
import utm
# Define source location.
src_crs = CRS.from_epsg(4326)
src_point = shapely.Point(-122.331711, 47.648450)
# Get UTM zone.
_, _, zone_suffix, _ = utm.from_latlon(src_point.y, src_point.x)
epsg_code = 32600 + zone_suffix
dst_crs = CRS.from_epsg(epsg_code)
# Transform to UTM CRS.
dst_point = transform_geom(src_crs, dst_crs, src_point)
dst_point = shapely.geometry.shape(dst_point)
# dst_point is in projection coordinates (meters).
# Now convert to pixel coordinates at 1.25 m/pixel.
col = int(dst_point.x / 1.25)
row = int(dst_point.y / -1.25)
# Print the prefix for the image filenames.
print(f"{epsg_code}_{col//512}_{row//512}")
# Print the prefix for the tar filenames to know which one to download.
# These group together many 1.25 m/pixel 512x512 tiles into one tar file.
print(f"{epsg_code}_{col//512//32}_{row//512//32}")
```
You would then download the tar file matching the second prefix, extract it, and look at the file whose name matches the first prefix.
See visualize_tile.py for an example of visualizing the data at a particular tile.
Sentinel-2
----------
The 10 m/pixel (`_8.tif`), 20 m/pixel (`_16.tif`), and 60 m/pixel (`_32.tif`) bands are stored separately.
Pixel values are the L1C 16-bit values.
The band order is as follows:
- _8.tif (64x64): B02, B03, B04, B08
- _16.tif (32x32): B05, B06, B07, B8A, B11, B12
- _32.tif (16x16): B01, B09, B10
The GeoTIFFs contain multiple images concatenated along the channel axis.
The CSV shows the original Sentinel-2 scene ID of each image.
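A minimal sketch of reading one of these stacked GeoTIFFs with rasterio, using the example tile listed above; the only assumption is that rasterio is installed (it is already included in the pip command above):
```python
# Sketch: read a 10 m/pixel Sentinel-2 GeoTIFF and split the channel axis back
# into individual images of shape (4, 64, 64) with bands B02, B03, B04, B08.
import rasterio

with rasterio.open("sentinel2/32612_960_-6049_8.tif") as src:
    stack = src.read()                      # (num_images * 4, 64, 64), 16-bit L1C values
num_images = stack.shape[0] // 4
images = stack.reshape(num_images, 4, 64, 64)
print(num_images, images.shape, images.dtype)
```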
Sentinel-1
----------
The Sentinel-1 bands are 10 m/pixel and ordered VV then VH.
Only IW VV+VH scenes are used.
The pixel values are 32-bit floating point values representing decibels 10*log10(x).
We obtain the radiometrically calibrated and terrain-corrected images from Google Earth Engine; see
https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S1_GRD for details.
The GeoTIFFs contain multiple images concatenated along the channel axis.
The CSV shows the original Sentinel-1 scene ID of each image.
NAIP
----
The NAIP image is 512x512 with four 8-bit bands: R, G, B, IR.
It is encoded as a PNG, but the IR band is stored in the alpha channel, so the image cannot be visualized correctly in an image viewer without removing the alpha mask.
There are two NAIP images available, one under "naip" (2019-2022) and one under "oldnaip" (2015-2018).
The CSV shows the original NAIP scene ID of each image.
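A minimal sketch of separating the bands, using the example NAIP file listed earlier and assuming Pillow and NumPy are available:
```python
# Sketch: split a 512x512 NAIP PNG into RGB and the IR band stored in the alpha channel.
import numpy as np
from PIL import Image

arr = np.array(Image.open("naip/32612_960_-6049.png"))  # (512, 512, 4), uint8
rgb = arr[:, :, :3]   # R, G, B
ir = arr[:, :, 3]     # infrared band (stored as the alpha mask)
Image.fromarray(rgb).save("naip_rgb.png")  # now viewable without the alpha mask
```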
Landsat
-------
We include OLI-TIRS images from Landsat-8 and Landsat-9.
As with Sentinel-2, we select Landsat images that were captured within a few months of the NAIP image.
We store the 15 m/pixel bands (i.e. B8) at 10 m/pixel, and the 30 m/pixel bands (all the others) at 20 m/pixel.
There are separate GeoTIFFs for the 10 m/pixel (`_8.tif`) and 20 m/pixel (`_16.tif`).
All pixel values are 16-bit.
The band order is as follows:
- _8.tif (64x64): B8
- _16.tif (32x32): B1, B2, B3, B4, B5, B6, B7, B9, B10, B11
The GeoTIFFs contain multiple images concatenated along the channel axis.
The CSV shows the original Landsat scene ID of each image.
|
TrevorDohm/Stack_Tokenized | TrevorDohm | "2024-04-16T00:19:53Z" | 15,418 | 0 | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:other",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-03-10T05:49:00Z" | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: The-Stack-Tokenized
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
---
|
m-a-p/Matrix | m-a-p | "2025-02-25T14:39:17Z" | 15,402 | 160 | [
"task_categories:text-generation",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1B<n<10B",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"language model"
] | [
"text-generation"
] | "2024-05-08T12:49:11Z" | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
- zh
tags:
- language model
pretty_name: Matrix
size_categories:
- n>1T
---
# Matrix
Matrix is an open-source pretraining dataset containing 4,690 billion tokens. This bilingual dataset, with both English and Chinese texts, is used for training the neo models.
## Dataset Composition
The dataset consists of several components, each originating from different sources and serving various purposes in language modeling and processing. Below is a brief overview of each component:
<p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/654907a4a1faff97850c4eff/1FWMF_t_Mhy0UQmu65Bb1.png" style="float: right; width: 400px; margin-left: 10px;">
<strong>Common Crawl</strong><br>
Extracts from the Common Crawl project, featuring a rich diversity of internet text including websites, blogs, news articles, and more.<br>
<strong>Code</strong><br>
A collection of coding-related data.<br>
<strong>Paper</strong><br>
Consists of academic and research papers covering a broad spectrum of disciplines, offering technical and domain-specific language.<br>
<strong>Book</strong><br>
Comprises texts from a range of published books, encompassing literature, non-fiction, textbooks, and more.<br>
<strong>Instruction</strong><br>
Features a collection of texts primarily in a Q&A format.<br>
<strong>Exam</strong><br>
Contains various educational materials and assessments used in academic examinations.<br>
<strong>News</strong><br>
A collection of texts from various journalistic sources, reporting on current events and news stories.<br>
<strong>Wiki</strong><br>
Articles from various encyclopedic sources, not limited to Wikipedia, covering a wide array of topics and information.<br>
<strong>Patent</strong><br>
Includes texts from patent documents, providing detailed descriptions of inventions and their applications.<br>
</p>
## Citation
```
@article{zhang2024mapneo,
title = {MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series},
author = {Ge Zhang and Scott Qu and Jiaheng Liu and Chenchen Zhang and Chenghua Lin and Chou Leuang Yu and Danny Pan and Esther Cheng and Jie Liu and Qunshu Lin and Raven Yuan and Tuney Zheng and Wei Pang and Xinrun Du and Yiming Liang and Yinghao Ma and Yizhi Li and Ziyang Ma and Bill Lin and Emmanouil Benetos and Huan Yang and Junting Zhou and Kaijing Ma and Minghao Liu and Morry Niu and Noah Wang and Quehry Que and Ruibo Liu and Sine Liu and Shawn Guo and Soren Gao and Wangchunshu Zhou and Xinyue Zhang and Yizhi Zhou and Yubo Wang and Yuelin Bai and Yuhan Zhang and Yuxiang Zhang and Zenith Wang and Zhenzhu Yang and Zijian Zhao and Jiajun Zhang and Wanli Ouyang and Wenhao Huang and Wenhu Chen},
year = {2024},
journal = {arXiv preprint arXiv: 2405.19327}
}
``` |
rayliuca/WikidataLabels | rayliuca | "2024-01-11T04:17:57Z" | 15,338 | 1 | [
"task_categories:translation",
"task_categories:text2text-generation",
"language:en",
"language:fr",
"language:de",
"language:ja",
"language:zh",
"language:hi",
"language:ar",
"language:bn",
"language:ru",
"language:es",
"license:cc0-1.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation",
"text2text-generation"
] | "2024-01-01T00:23:08Z" | ---
license: cc0-1.0
dataset_info:
- config_name: aa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13986211
num_examples: 436895
download_size: 9821312
dataset_size: 13986211
- config_name: ab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5012532
num_examples: 159908
download_size: 3013706
dataset_size: 5012532
- config_name: abs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252728
num_examples: 143986
download_size: 2567450
dataset_size: 4252728
- config_name: ace
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19105673
num_examples: 574712
download_size: 13573374
dataset_size: 19105673
- config_name: ady
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4444259
num_examples: 148627
download_size: 2705754
dataset_size: 4444259
- config_name: ady-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4412556
num_examples: 147884
download_size: 2682170
dataset_size: 4412556
- config_name: aeb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4305734
num_examples: 145198
download_size: 2606368
dataset_size: 4305734
- config_name: aeb-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4467930
num_examples: 148796
download_size: 2722169
dataset_size: 4467930
- config_name: aeb-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12770359
num_examples: 404946
download_size: 8886489
dataset_size: 12770359
- config_name: af
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58561042
num_examples: 1643153
download_size: 42539052
dataset_size: 58561042
- config_name: agq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1317
num_examples: 33
download_size: 2906
dataset_size: 1317
- config_name: ak
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14198715
num_examples: 443037
download_size: 9991525
dataset_size: 14198715
- config_name: aln
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13811116
num_examples: 432089
download_size: 9673418
dataset_size: 13811116
- config_name: als
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20691
num_examples: 543
download_size: 17540
dataset_size: 20691
- config_name: alt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 108390
num_examples: 1814
download_size: 59046
dataset_size: 108390
- config_name: am
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5231176
num_examples: 163038
download_size: 3187164
dataset_size: 5231176
- config_name: ami
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21519
num_examples: 686
download_size: 16640
dataset_size: 21519
- config_name: an
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 240345072
num_examples: 5921087
download_size: 164895205
dataset_size: 240345072
- config_name: ang
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14275715
num_examples: 443461
download_size: 10063758
dataset_size: 14275715
- config_name: anp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8558258
num_examples: 241612
download_size: 4381360
dataset_size: 8558258
- config_name: ar
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 291173732
num_examples: 5724064
download_size: 159369497
dataset_size: 291173732
- config_name: arc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4473283
num_examples: 150006
download_size: 2722619
dataset_size: 4473283
- config_name: arn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13879729
num_examples: 433912
download_size: 9715431
dataset_size: 13879729
- config_name: arq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4346991
num_examples: 146004
download_size: 2636972
dataset_size: 4346991
- config_name: ary
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5358568
num_examples: 171568
download_size: 3313402
dataset_size: 5358568
- config_name: arz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 81806333
num_examples: 1669699
download_size: 49423508
dataset_size: 81806333
- config_name: as
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21658610
num_examples: 450074
download_size: 9641626
dataset_size: 21658610
- config_name: ase
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252943
num_examples: 143986
download_size: 2568106
dataset_size: 4252943
- config_name: ast
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1385628786
num_examples: 20696237
download_size: 955908362
dataset_size: 1385628786
- config_name: atj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12996229
num_examples: 411639
download_size: 9057557
dataset_size: 12996229
- config_name: av
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4722934
num_examples: 153781
download_size: 2880103
dataset_size: 4722934
- config_name: avk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13194485
num_examples: 414598
download_size: 9200917
dataset_size: 13194485
- config_name: awa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8599312
num_examples: 242320
download_size: 4411751
dataset_size: 8599312
- config_name: ay
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14269432
num_examples: 443521
download_size: 10029939
dataset_size: 14269432
- config_name: az
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21049248
num_examples: 516732
download_size: 14117527
dataset_size: 21049248
- config_name: azb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30781587
num_examples: 607562
download_size: 16028687
dataset_size: 30781587
- config_name: ba
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11525351
num_examples: 261509
download_size: 6733777
dataset_size: 11525351
- config_name: ban
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13674052
num_examples: 426706
download_size: 9513747
dataset_size: 13674052
- config_name: ban-bali
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 50961
num_examples: 748
download_size: 25817
dataset_size: 50961
- config_name: bar
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 54783034
num_examples: 1566120
download_size: 40389830
dataset_size: 54783034
- config_name: bbc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12820895
num_examples: 406960
download_size: 8917054
dataset_size: 12820895
- config_name: bcc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8017228
num_examples: 241977
download_size: 4344579
dataset_size: 8017228
- config_name: be
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30978832
num_examples: 564184
download_size: 17461174
dataset_size: 30978832
- config_name: be-tarask
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18931909
num_examples: 374396
download_size: 10871239
dataset_size: 18931909
- config_name: bg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 200628708
num_examples: 4383953
download_size: 137745533
dataset_size: 200628708
- config_name: bgn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7999280
num_examples: 241566
download_size: 4331249
dataset_size: 7999280
- config_name: bi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14040026
num_examples: 438382
download_size: 9867032
dataset_size: 14040026
- config_name: bjn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8375348
num_examples: 254558
download_size: 5722334
dataset_size: 8375348
- config_name: bm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18145787
num_examples: 549694
download_size: 13129193
dataset_size: 18145787
- config_name: bn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 815803977
num_examples: 9767284
download_size: 261147329
dataset_size: 815803977
- config_name: bo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11671330
num_examples: 278307
download_size: 5669602
dataset_size: 11671330
- config_name: bpy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15497749
num_examples: 347458
download_size: 6991190
dataset_size: 15497749
- config_name: bqi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8017455
num_examples: 241984
download_size: 4345123
dataset_size: 8017455
- config_name: br
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58304963
num_examples: 1653800
download_size: 42722031
dataset_size: 58304963
- config_name: brh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5328437
num_examples: 171504
download_size: 3376189
dataset_size: 5328437
- config_name: bs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30441466
num_examples: 858190
download_size: 21606575
dataset_size: 30441466
- config_name: btm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252525
num_examples: 143980
download_size: 2567218
dataset_size: 4252525
- config_name: bto
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12841721
num_examples: 407470
download_size: 8934218
dataset_size: 12841721
- config_name: bug
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7595464
num_examples: 235268
download_size: 5129941
dataset_size: 7595464
- config_name: bxr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4713699
num_examples: 153707
download_size: 2869313
dataset_size: 4713699
- config_name: ca
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 408509932
num_examples: 9936886
download_size: 288474980
dataset_size: 408509932
- config_name: cbk-zam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14108232
num_examples: 440345
download_size: 9920793
dataset_size: 14108232
- config_name: cdo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6503254
num_examples: 201362
download_size: 4137841
dataset_size: 6503254
- config_name: ce
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 28093148
num_examples: 607767
download_size: 16367596
dataset_size: 28093148
- config_name: ceb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 332947091
num_examples: 7769402
download_size: 219525737
dataset_size: 332947091
- config_name: ch
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13983906
num_examples: 436785
download_size: 9817385
dataset_size: 13983906
- config_name: cho
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13950786
num_examples: 435869
download_size: 9791296
dataset_size: 13950786
- config_name: chr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5386793
num_examples: 172855
download_size: 3419676
dataset_size: 5386793
- config_name: chy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13994916
num_examples: 437007
download_size: 9830465
dataset_size: 13994916
- config_name: ckb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23343034
num_examples: 511183
download_size: 11459344
dataset_size: 23343034
- config_name: co
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47080480
num_examples: 1346929
download_size: 34551346
dataset_size: 47080480
- config_name: cps
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12849864
num_examples: 407695
download_size: 8941921
dataset_size: 12849864
- config_name: cr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5516556
num_examples: 176667
download_size: 3532952
dataset_size: 5516556
- config_name: crh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10864382
num_examples: 336709
download_size: 7542853
dataset_size: 10864382
- config_name: crh-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4419064
num_examples: 148046
download_size: 2688683
dataset_size: 4419064
- config_name: crh-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14201429
num_examples: 442905
download_size: 9986290
dataset_size: 14201429
- config_name: cs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 140189244
num_examples: 3384048
download_size: 97516751
dataset_size: 140189244
- config_name: csb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20177120
num_examples: 619275
download_size: 14528772
dataset_size: 20177120
- config_name: cv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8047221
num_examples: 215611
download_size: 4857718
dataset_size: 8047221
- config_name: cy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 89241808
num_examples: 2244550
download_size: 62686006
dataset_size: 89241808
- config_name: da
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 130931077
num_examples: 3448894
download_size: 98202417
dataset_size: 130931077
- config_name: dag
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2664957
num_examples: 78534
download_size: 2052615
dataset_size: 2664957
- config_name: de
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 765398522
num_examples: 17531361
download_size: 527642124
dataset_size: 765398522
- config_name: de-at
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53043722
num_examples: 1515373
download_size: 38761571
dataset_size: 53043722
- config_name: de-ch
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53480908
num_examples: 1528137
download_size: 39349412
dataset_size: 53480908
- config_name: de-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4256391
num_examples: 144061
download_size: 2571862
dataset_size: 4256391
- config_name: din
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12819746
num_examples: 406591
download_size: 8922303
dataset_size: 12819746
- config_name: diq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7570161
num_examples: 232674
download_size: 5057742
dataset_size: 7570161
- config_name: dsb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16135830
num_examples: 491423
download_size: 11412316
dataset_size: 16135830
- config_name: dtp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13867373
num_examples: 433733
download_size: 9720699
dataset_size: 13867373
- config_name: dty
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8839082
num_examples: 246026
download_size: 4551845
dataset_size: 8839082
- config_name: dua
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2631
num_examples: 87
download_size: 3877
dataset_size: 2631
- config_name: dv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 81396462
num_examples: 2103276
download_size: 45332104
dataset_size: 81396462
- config_name: dz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8590239
num_examples: 242196
download_size: 4406353
dataset_size: 8590239
- config_name: ee
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14377017
num_examples: 447208
download_size: 10136064
dataset_size: 14377017
- config_name: egl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13068224
num_examples: 413551
download_size: 9121776
dataset_size: 13068224
- config_name: el
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32978562
num_examples: 592016
download_size: 19577876
dataset_size: 32978562
- config_name: eml
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14768563
num_examples: 458847
download_size: 10453636
dataset_size: 14768563
- config_name: en
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6327454281
num_examples: 81801560
download_size: 4224231068
dataset_size: 6327454281
- config_name: en-ca
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 73305274
num_examples: 1909970
download_size: 53060194
dataset_size: 73305274
- config_name: en-gb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 115978412
num_examples: 2520405
download_size: 78924421
dataset_size: 115978412
- config_name: en-us
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14815
num_examples: 332
download_size: 9953
dataset_size: 14815
- config_name: eo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 256196064
num_examples: 6285304
download_size: 177219679
dataset_size: 256196064
- config_name: es
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 730214298
num_examples: 17233968
download_size: 514588069
dataset_size: 730214298
- config_name: es-419
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4355180
num_examples: 146476
download_size: 2659218
dataset_size: 4355180
- config_name: es-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4280933
num_examples: 144717
download_size: 2592085
dataset_size: 4280933
- config_name: et
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 65123623
num_examples: 1820762
download_size: 48197302
dataset_size: 65123623
- config_name: eu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 290282374
num_examples: 7109758
download_size: 197889378
dataset_size: 290282374
- config_name: ext
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 223257222
num_examples: 5359047
download_size: 147078789
dataset_size: 223257222
- config_name: fa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 123727757
num_examples: 2142642
download_size: 65952114
dataset_size: 123727757
- config_name: ff
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14116652
num_examples: 440614
download_size: 9920388
dataset_size: 14116652
- config_name: fi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 286539944
num_examples: 6905698
download_size: 209916638
dataset_size: 286539944
- config_name: fit
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20217258
num_examples: 620391
download_size: 14566702
dataset_size: 20217258
- config_name: fj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14159041
num_examples: 441745
download_size: 9956108
dataset_size: 14159041
- config_name: fkv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4328482
num_examples: 145988
download_size: 2619845
dataset_size: 4328482
- config_name: fo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24474476
num_examples: 731732
download_size: 17876981
dataset_size: 24474476
- config_name: fr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 774128723
num_examples: 17908351
download_size: 534489308
dataset_size: 774128723
- config_name: frc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17896106
num_examples: 547258
download_size: 12953740
dataset_size: 17896106
- config_name: frp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40902510
num_examples: 1191134
download_size: 29778105
dataset_size: 40902510
- config_name: frr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16979214
num_examples: 515350
download_size: 12069637
dataset_size: 16979214
- config_name: fur
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42077410
num_examples: 1221071
download_size: 30714082
dataset_size: 42077410
- config_name: ga
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 471527543
num_examples: 11524282
download_size: 320967189
dataset_size: 471527543
- config_name: gag
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14149375
num_examples: 440732
download_size: 9940551
dataset_size: 14149375
- config_name: gan
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31572161
num_examples: 905186
download_size: 18909564
dataset_size: 31572161
- config_name: gan-hans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31004794
num_examples: 889875
download_size: 18566811
dataset_size: 31004794
- config_name: gan-hant
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4374444
num_examples: 147098
download_size: 2657182
dataset_size: 4374444
- config_name: gcr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4311409
num_examples: 145829
download_size: 2618211
dataset_size: 4311409
- config_name: gd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 49316935
num_examples: 1429457
download_size: 36220978
dataset_size: 49316935
- config_name: gl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 289484839
num_examples: 7052226
download_size: 197315151
dataset_size: 289484839
- config_name: glk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8327018
num_examples: 249115
download_size: 4538325
dataset_size: 8327018
- config_name: gn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14212974
num_examples: 442765
download_size: 10004863
dataset_size: 14212974
- config_name: gom
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4584575
num_examples: 150273
download_size: 2780570
dataset_size: 4584575
- config_name: gom-deva
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8585678
num_examples: 242131
download_size: 4400578
dataset_size: 8585678
- config_name: gom-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12783006
num_examples: 405302
download_size: 8897342
dataset_size: 12783006
- config_name: gor
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14667616
num_examples: 454512
download_size: 10319196
dataset_size: 14667616
- config_name: got
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5432139
num_examples: 172951
download_size: 3435531
dataset_size: 5432139
- config_name: grc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4494817
num_examples: 149631
download_size: 2746170
dataset_size: 4494817
- config_name: gu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23788894
num_examples: 486140
download_size: 10779200
dataset_size: 23788894
- config_name: guc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1419
num_examples: 38
download_size: 3054
dataset_size: 1419
- config_name: guw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 118
num_examples: 4
download_size: 1864
dataset_size: 118
- config_name: gv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20683485
num_examples: 631005
download_size: 14894590
dataset_size: 20683485
- config_name: ha
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14716168
num_examples: 455836
download_size: 10421790
dataset_size: 14716168
- config_name: hak
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6128644
num_examples: 193036
download_size: 3991729
dataset_size: 6128644
- config_name: haw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14158084
num_examples: 441511
download_size: 9952975
dataset_size: 14158084
- config_name: he
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43629050
num_examples: 884809
download_size: 27221301
dataset_size: 43629050
- config_name: hi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 37237187
num_examples: 668964
download_size: 17804873
dataset_size: 37237187
- config_name: hif
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14457954
num_examples: 449009
download_size: 10166264
dataset_size: 14457954
- config_name: hif-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14519845
num_examples: 454037
download_size: 10240704
dataset_size: 14519845
- config_name: hil
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12928914
num_examples: 409962
download_size: 9009705
dataset_size: 12928914
- config_name: ho
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13950504
num_examples: 435857
download_size: 9790849
dataset_size: 13950504
- config_name: hr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 61272623
num_examples: 1720527
download_size: 45307411
dataset_size: 61272623
- config_name: hrx
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12869295
num_examples: 407823
download_size: 8964114
dataset_size: 12869295
- config_name: hsb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23720349
num_examples: 707100
download_size: 17145693
dataset_size: 23720349
- config_name: ht
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16835529
num_examples: 509955
download_size: 11880404
dataset_size: 16835529
- config_name: hu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 85054175
num_examples: 2200589
download_size: 64143342
dataset_size: 85054175
- config_name: hu-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252810
num_examples: 143986
download_size: 2567582
dataset_size: 4252810
- config_name: hy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 39339286
num_examples: 773925
download_size: 22108994
dataset_size: 39339286
- config_name: hyw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5443608
num_examples: 166902
download_size: 3238370
dataset_size: 5443608
- config_name: hz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13948574
num_examples: 435804
download_size: 9788697
dataset_size: 13948574
- config_name: ia
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 229143237
num_examples: 5616433
download_size: 155877454
dataset_size: 229143237
- config_name: id
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 95220928
num_examples: 2512331
download_size: 69525046
dataset_size: 95220928
- config_name: ie
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 225725262
num_examples: 5533032
download_size: 153371930
dataset_size: 225725262
- config_name: ig
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20109388
num_examples: 617044
download_size: 14475407
dataset_size: 20109388
- config_name: ii
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4310418
num_examples: 145332
download_size: 2609723
dataset_size: 4310418
- config_name: ik
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13989609
num_examples: 436958
download_size: 9823174
dataset_size: 13989609
- config_name: ike-cans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4352278
num_examples: 146355
download_size: 2645174
dataset_size: 4352278
- config_name: ike-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13851135
num_examples: 432932
download_size: 9714057
dataset_size: 13851135
- config_name: ilo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15955483
num_examples: 480555
download_size: 11141942
dataset_size: 15955483
- config_name: inh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4634360
num_examples: 152226
download_size: 2831580
dataset_size: 4634360
- config_name: io
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 233656822
num_examples: 5757440
download_size: 159720058
dataset_size: 233656822
- config_name: is
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 51679396
num_examples: 1483610
download_size: 37965494
dataset_size: 51679396
- config_name: it
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 536601426
num_examples: 12631487
download_size: 375025347
dataset_size: 536601426
- config_name: iu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5360588
num_examples: 172215
download_size: 3402239
dataset_size: 5360588
- config_name: ja
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 140641579
num_examples: 2917962
download_size: 92145329
dataset_size: 140641579
- config_name: jam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18849751
num_examples: 571777
download_size: 13684422
dataset_size: 18849751
- config_name: jbo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14301985
num_examples: 446512
download_size: 9994516
dataset_size: 14301985
- config_name: jv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27232302
num_examples: 794181
download_size: 19651565
dataset_size: 27232302
- config_name: ka
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24073345
num_examples: 399546
download_size: 11679979
dataset_size: 24073345
- config_name: kaa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14082184
num_examples: 439411
download_size: 9902820
dataset_size: 14082184
- config_name: kab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18459676
num_examples: 557857
download_size: 13384218
dataset_size: 18459676
- config_name: kbd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4594409
num_examples: 149733
download_size: 2759503
dataset_size: 4594409
- config_name: kbd-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4417661
num_examples: 148017
download_size: 2687531
dataset_size: 4417661
- config_name: kbp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12873178
num_examples: 408039
download_size: 8965474
dataset_size: 12873178
- config_name: kea
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12793700
num_examples: 405901
download_size: 8896866
dataset_size: 12793700
- config_name: kg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40949149
num_examples: 1193499
download_size: 29766747
dataset_size: 40949149
- config_name: khw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4308653
num_examples: 145279
download_size: 2608581
dataset_size: 4308653
- config_name: ki
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14056900
num_examples: 439015
download_size: 9875534
dataset_size: 14056900
- config_name: kj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13881723
num_examples: 433861
download_size: 9733715
dataset_size: 13881723
- config_name: kjp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8504302
num_examples: 240339
download_size: 4341523
dataset_size: 8504302
- config_name: kk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19216115
num_examples: 428880
download_size: 11577682
dataset_size: 19216115
- config_name: kk-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7241749
num_examples: 211731
download_size: 4487032
dataset_size: 7241749
- config_name: kk-kz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4937945
num_examples: 160027
download_size: 3062906
dataset_size: 4937945
- config_name: kk-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22197825
num_examples: 677162
download_size: 16072332
dataset_size: 22197825
- config_name: kk-tr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20060635
num_examples: 616521
download_size: 14438929
dataset_size: 20060635
- config_name: ko
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 60335212
num_examples: 1364440
download_size: 39186630
dataset_size: 60335212
- config_name: ko-kp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4338717
num_examples: 146150
download_size: 2630925
dataset_size: 4338717
- config_name: koi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4737590
num_examples: 155082
download_size: 2894674
dataset_size: 4737590
- config_name: kr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13886057
num_examples: 433990
download_size: 9737602
dataset_size: 13886057
- config_name: krc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4646136
num_examples: 151026
download_size: 2785454
dataset_size: 4646136
- config_name: kri
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12798530
num_examples: 406032
download_size: 8902330
dataset_size: 12798530
- config_name: krj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13850324
num_examples: 433444
download_size: 9703460
dataset_size: 13850324
- config_name: krl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12788020
num_examples: 405729
download_size: 8893337
dataset_size: 12788020
- config_name: ks
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4390604
num_examples: 147033
download_size: 2671069
dataset_size: 4390604
- config_name: ks-deva
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8567518
num_examples: 241832
download_size: 4387687
dataset_size: 8567518
- config_name: ksh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20394712
num_examples: 624523
download_size: 14698860
dataset_size: 20394712
- config_name: ku
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8037777
num_examples: 239515
download_size: 5306097
dataset_size: 8037777
- config_name: ku-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4577826
num_examples: 151290
download_size: 2796159
dataset_size: 4577826
- config_name: ku-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14683841
num_examples: 458802
download_size: 10371977
dataset_size: 14683841
- config_name: kum
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252739
num_examples: 143985
download_size: 2567503
dataset_size: 4252739
- config_name: kv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4946978
num_examples: 158888
download_size: 2997865
dataset_size: 4946978
- config_name: kw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20245535
num_examples: 621432
download_size: 14581378
dataset_size: 20245535
- config_name: ky
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8909613
num_examples: 235165
download_size: 5462115
dataset_size: 8909613
- config_name: la
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 299766395
num_examples: 7085082
download_size: 201477460
dataset_size: 299766395
- config_name: lad
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20336417
num_examples: 622775
download_size: 14653199
dataset_size: 20336417
- config_name: lb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 56473066
num_examples: 1601093
download_size: 41410732
dataset_size: 56473066
- config_name: lbe
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4501470
num_examples: 149898
download_size: 2744786
dataset_size: 4501470
- config_name: lez
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4890798
num_examples: 155936
download_size: 2959653
dataset_size: 4890798
- config_name: lfn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14709210
num_examples: 456719
download_size: 10408539
dataset_size: 14709210
- config_name: lg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13979286
num_examples: 436009
download_size: 9802779
dataset_size: 13979286
- config_name: li
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43476868
num_examples: 1253970
download_size: 31750932
dataset_size: 43476868
- config_name: lij
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42327066
num_examples: 1227346
download_size: 30898971
dataset_size: 42327066
- config_name: liv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12781331
num_examples: 405236
download_size: 8895889
dataset_size: 12781331
- config_name: lki
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8039166
num_examples: 242526
download_size: 4363703
dataset_size: 8039166
- config_name: lld
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 90305
num_examples: 2634
download_size: 69672
dataset_size: 90305
- config_name: lmo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18287638
num_examples: 545398
download_size: 13130119
dataset_size: 18287638
- config_name: ln
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14123637
num_examples: 439731
download_size: 9915851
dataset_size: 14123637
- config_name: lo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9905189
num_examples: 271710
download_size: 5313218
dataset_size: 9905189
- config_name: loz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13695602
num_examples: 428723
download_size: 9581113
dataset_size: 13695602
- config_name: lt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 39902419
num_examples: 1096727
download_size: 29185765
dataset_size: 39902419
- config_name: ltg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13884707
num_examples: 433453
download_size: 9736637
dataset_size: 13884707
- config_name: lus
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13695197
num_examples: 428712
download_size: 9580538
dataset_size: 13695197
- config_name: luz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8459036
num_examples: 253454
download_size: 4687414
dataset_size: 8459036
- config_name: lv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27242119
num_examples: 764753
download_size: 19676667
dataset_size: 27242119
- config_name: lzh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 25067538
num_examples: 685152
download_size: 14998856
dataset_size: 25067538
- config_name: mdf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4634268
num_examples: 152141
download_size: 2820744
dataset_size: 4634268
- config_name: mg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43863002
num_examples: 1271074
download_size: 32016826
dataset_size: 43863002
- config_name: mh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13775721
num_examples: 431162
download_size: 9644397
dataset_size: 13775721
- config_name: mi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20857040
num_examples: 637118
download_size: 15060301
dataset_size: 20857040
- config_name: min
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53044258
num_examples: 1464128
download_size: 38587450
dataset_size: 53044258
- config_name: mk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24087229
num_examples: 449241
download_size: 12217912
dataset_size: 24087229
- config_name: ml
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 189266798
num_examples: 2664923
download_size: 71344031
dataset_size: 189266798
- config_name: mn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9311543
num_examples: 219695
download_size: 5272784
dataset_size: 9311543
- config_name: mni
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8696893
num_examples: 243616
download_size: 4470994
dataset_size: 8696893
- config_name: mnw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8861861
num_examples: 244906
download_size: 4517726
dataset_size: 8861861
- config_name: mo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5377009
num_examples: 172144
download_size: 3405661
dataset_size: 5377009
- config_name: mr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 26855182
num_examples: 526220
download_size: 12358679
dataset_size: 26855182
- config_name: mrh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 68
num_examples: 2
download_size: 1820
dataset_size: 68
- config_name: mrj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5007903
num_examples: 160889
download_size: 3073431
dataset_size: 5007903
- config_name: ms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 64674328
num_examples: 1803714
download_size: 47165217
dataset_size: 64674328
- config_name: ms-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 136496
num_examples: 2961
download_size: 92316
dataset_size: 136496
- config_name: mt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22632686
num_examples: 682867
download_size: 16352572
dataset_size: 22632686
- config_name: mus
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14013416
num_examples: 437688
download_size: 9835239
dataset_size: 14013416
- config_name: mwl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14493299
num_examples: 448926
download_size: 10225888
dataset_size: 14493299
- config_name: my
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16182182
num_examples: 345096
download_size: 7981905
dataset_size: 16182182
- config_name: mzn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17973941
num_examples: 447870
download_size: 9174617
dataset_size: 17973941
- config_name: na
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13992666
num_examples: 436956
download_size: 9823328
dataset_size: 13992666
- config_name: nah
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14490294
num_examples: 449748
download_size: 10192501
dataset_size: 14490294
- config_name: nan-hani
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 191
num_examples: 6
download_size: 1925
dataset_size: 191
- config_name: nap
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42362346
num_examples: 1229161
download_size: 30918265
dataset_size: 42362346
- config_name: nb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 142554768
num_examples: 3688026
download_size: 105549981
dataset_size: 142554768
- config_name: nds
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58766114
num_examples: 1666813
download_size: 43421948
dataset_size: 58766114
- config_name: nds-nl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 44121756
num_examples: 1273149
download_size: 32201410
dataset_size: 44121756
- config_name: ne
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11925386
num_examples: 295006
download_size: 6265232
dataset_size: 11925386
- config_name: new
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16906308
num_examples: 350362
download_size: 7680329
dataset_size: 16906308
- config_name: ng
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13870754
num_examples: 433582
download_size: 9723795
dataset_size: 13870754
- config_name: nia
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20649
num_examples: 515
download_size: 16535
dataset_size: 20649
- config_name: niu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12794247
num_examples: 405902
download_size: 8897260
dataset_size: 12794247
- config_name: nl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5016576732
num_examples: 61931959
download_size: 3380404239
dataset_size: 5016576732
- config_name: nn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 99997815
num_examples: 2708994
download_size: 74736304
dataset_size: 99997815
- config_name: 'no'
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2934
num_examples: 64
download_size: 4108
dataset_size: 2934
- config_name: nod
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4322068
num_examples: 145566
download_size: 2618106
dataset_size: 4322068
- config_name: nov
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14150434
num_examples: 440903
download_size: 9947798
dataset_size: 14150434
- config_name: nqo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8094271
num_examples: 243184
download_size: 4398836
dataset_size: 8094271
- config_name: nrm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 41330956
num_examples: 1203295
download_size: 30084065
dataset_size: 41330956
- config_name: nso
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14178321
num_examples: 443205
download_size: 9959708
dataset_size: 14178321
- config_name: nv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15351770
num_examples: 455188
download_size: 10472240
dataset_size: 15351770
- config_name: ny
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13989813
num_examples: 436764
download_size: 9821588
dataset_size: 13989813
- config_name: nys
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13092059
num_examples: 413241
download_size: 9153100
dataset_size: 13092059
- config_name: oc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 266612548
num_examples: 6569770
download_size: 180156462
dataset_size: 266612548
- config_name: olo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13200388
num_examples: 416935
download_size: 9214968
dataset_size: 13200388
- config_name: om
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5476389
num_examples: 175314
download_size: 3496637
dataset_size: 5476389
- config_name: or
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22798709
num_examples: 470237
download_size: 10322832
dataset_size: 22798709
- config_name: os
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5946062
num_examples: 177054
download_size: 3583703
dataset_size: 5946062
- config_name: ota
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8015024
num_examples: 241903
download_size: 4343478
dataset_size: 8015024
- config_name: pa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20505754
num_examples: 481522
download_size: 10552147
dataset_size: 20505754
- config_name: pam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14527964
num_examples: 451253
download_size: 10242443
dataset_size: 14527964
- config_name: pap
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 54505401
num_examples: 1449881
download_size: 40415776
dataset_size: 54505401
- config_name: pcd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42132826
num_examples: 1221362
download_size: 30766812
dataset_size: 42132826
- config_name: pdc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14435256
num_examples: 448055
download_size: 10178322
dataset_size: 14435256
- config_name: pdt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13994892
num_examples: 437200
download_size: 9819388
dataset_size: 13994892
- config_name: pfl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15461023
num_examples: 474198
download_size: 10893651
dataset_size: 15461023
- config_name: pi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8913354
num_examples: 250251
download_size: 4651392
dataset_size: 8913354
- config_name: pih
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13971081
num_examples: 436214
download_size: 9810653
dataset_size: 13971081
- config_name: pl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 426030491
num_examples: 10025139
download_size: 295767506
dataset_size: 426030491
- config_name: pms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 51268512
num_examples: 1477043
download_size: 37698831
dataset_size: 51268512
- config_name: pnb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16192682
num_examples: 409037
download_size: 9196626
dataset_size: 16192682
- config_name: pnt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4439173
num_examples: 148336
download_size: 2703117
dataset_size: 4439173
- config_name: prg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17940420
num_examples: 544030
download_size: 12958482
dataset_size: 17940420
- config_name: ps
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8860902
num_examples: 259186
download_size: 4916502
dataset_size: 8860902
- config_name: pt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 491184040
num_examples: 11574568
download_size: 340831923
dataset_size: 491184040
- config_name: pt-br
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 318857431
num_examples: 7782980
download_size: 223442911
dataset_size: 318857431
- config_name: pwn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8500
num_examples: 269
download_size: 8738
dataset_size: 8500
- config_name: qu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15254702
num_examples: 468823
download_size: 10750388
dataset_size: 15254702
- config_name: quc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32
num_examples: 1
download_size: 1772
dataset_size: 32
- config_name: qug
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13798264
num_examples: 431733
download_size: 9661685
dataset_size: 13798264
- config_name: rgn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17001688
num_examples: 519871
download_size: 12258201
dataset_size: 17001688
- config_name: rif
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13792951
num_examples: 431588
download_size: 9657698
dataset_size: 13792951
- config_name: rm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 44450577
num_examples: 1284908
download_size: 32519630
dataset_size: 44450577
- config_name: rmc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 159
num_examples: 4
download_size: 1963
dataset_size: 159
- config_name: rmy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5610156
num_examples: 179191
download_size: 3608283
dataset_size: 5610156
- config_name: rn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13935534
num_examples: 435271
download_size: 9779486
dataset_size: 13935534
- config_name: ro
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 247469452
num_examples: 5878366
download_size: 177525205
dataset_size: 247469452
- config_name: roa-tara
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14425120
num_examples: 448972
download_size: 10152875
dataset_size: 14425120
- config_name: ru
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 405103215
num_examples: 7485811
download_size: 257215625
dataset_size: 405103215
- config_name: rue
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4953403
num_examples: 159530
download_size: 3037824
dataset_size: 4953403
- config_name: rup
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14459686
num_examples: 450345
download_size: 10198398
dataset_size: 14459686
- config_name: ruq-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4434290
num_examples: 148404
download_size: 2700920
dataset_size: 4434290
- config_name: ruq-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13783683
num_examples: 430978
download_size: 9656941
dataset_size: 13783683
- config_name: rw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14090196
num_examples: 439172
download_size: 9901257
dataset_size: 14090196
- config_name: rwr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8568706
num_examples: 241841
download_size: 4388475
dataset_size: 8568706
- config_name: ryu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2852
num_examples: 82
download_size: 4237
dataset_size: 2852
- config_name: sa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21404327
num_examples: 455674
download_size: 9692464
dataset_size: 21404327
- config_name: sat
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10810040
num_examples: 284911
download_size: 5750917
dataset_size: 10810040
- config_name: sc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47195572
num_examples: 1348137
download_size: 34521764
dataset_size: 47195572
- config_name: scn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43458983
num_examples: 1259067
download_size: 31775157
dataset_size: 43458983
- config_name: sco
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 56960413
num_examples: 1611092
download_size: 41724559
dataset_size: 56960413
- config_name: sd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14257513
num_examples: 363318
download_size: 7844047
dataset_size: 14257513
- config_name: sdc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13975497
num_examples: 436913
download_size: 9800517
dataset_size: 13975497
- config_name: se
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23962268
num_examples: 711439
download_size: 17409387
dataset_size: 23962268
- config_name: sei
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13827581
num_examples: 432520
download_size: 9684192
dataset_size: 13827581
- config_name: sg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13913524
num_examples: 434751
download_size: 9761739
dataset_size: 13913524
- config_name: sh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30173635
num_examples: 746207
download_size: 20133594
dataset_size: 30173635
- config_name: shi-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13783218
num_examples: 430968
download_size: 9656828
dataset_size: 13783218
- config_name: shi-tfng
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4308577
num_examples: 145279
download_size: 2608525
dataset_size: 4308577
- config_name: shn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10139002
num_examples: 260808
download_size: 4952168
dataset_size: 10139002
- config_name: shy-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4255322
num_examples: 144058
download_size: 2570625
dataset_size: 4255322
- config_name: si
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7405400
num_examples: 189718
download_size: 4270591
dataset_size: 7405400
- config_name: sjd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4300688
num_examples: 145047
download_size: 2604357
dataset_size: 4300688
- config_name: sje
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20970223
num_examples: 637639
download_size: 15120381
dataset_size: 20970223
- config_name: sju
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4315103
num_examples: 145655
download_size: 2620763
dataset_size: 4315103
- config_name: sk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 75586366
num_examples: 2050873
download_size: 54951330
dataset_size: 75586366
- config_name: skr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4274062
num_examples: 144443
download_size: 2585286
dataset_size: 4274062
- config_name: sl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 157883240
num_examples: 4112048
download_size: 118047353
dataset_size: 157883240
- config_name: sli
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13909208
num_examples: 434986
download_size: 9745964
dataset_size: 13909208
- config_name: sm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13984823
num_examples: 436830
download_size: 9817472
dataset_size: 13984823
- config_name: sma
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20653595
num_examples: 630437
download_size: 14902319
dataset_size: 20653595
- config_name: smj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19640206
num_examples: 604326
download_size: 14133964
dataset_size: 19640206
- config_name: smn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10902411
num_examples: 337543
download_size: 7576850
dataset_size: 10902411
- config_name: sms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4462345
num_examples: 149355
download_size: 2741038
dataset_size: 4462345
- config_name: sn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20116601
num_examples: 618231
download_size: 14463728
dataset_size: 20116601
- config_name: sq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 304708913
num_examples: 7311820
download_size: 225592169
dataset_size: 304708913
- config_name: sr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 52787253
num_examples: 1018361
download_size: 31364006
dataset_size: 52787253
- config_name: sr-ec
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9237541
num_examples: 248556
download_size: 5875548
dataset_size: 9237541
- config_name: sr-el
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 48848162
num_examples: 1418824
download_size: 35859120
dataset_size: 48848162
- config_name: srq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12796525
num_examples: 405957
download_size: 8899493
dataset_size: 12796525
- config_name: ss
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13823630
num_examples: 432423
download_size: 9682165
dataset_size: 13823630
- config_name: st
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13938937
num_examples: 435419
download_size: 9785161
dataset_size: 13938937
- config_name: stq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14484394
num_examples: 449885
download_size: 10228446
dataset_size: 14484394
- config_name: su
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20025826
num_examples: 583096
download_size: 14042822
dataset_size: 20025826
- config_name: sv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 339074900
num_examples: 8115455
download_size: 236022796
dataset_size: 339074900
- config_name: sw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 50612064
num_examples: 1465385
download_size: 37096369
dataset_size: 50612064
- config_name: szl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16772062
num_examples: 500107
download_size: 11868254
dataset_size: 16772062
- config_name: szy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4332021
num_examples: 146136
download_size: 2633271
dataset_size: 4332021
- config_name: ta
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31251824
num_examples: 546558
download_size: 15157673
dataset_size: 31251824
- config_name: tay
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4345269
num_examples: 146938
download_size: 2632535
dataset_size: 4345269
- config_name: tcy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8723594
num_examples: 244350
download_size: 4487471
dataset_size: 8723594
- config_name: te
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27587665
num_examples: 569615
download_size: 13669398
dataset_size: 27587665
- config_name: tet
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15092299
num_examples: 466244
download_size: 10702917
dataset_size: 15092299
- config_name: tg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12643125
num_examples: 304625
download_size: 7622522
dataset_size: 12643125
- config_name: tg-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4504034
num_examples: 149533
download_size: 2755000
dataset_size: 4504034
- config_name: tg-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19845835
num_examples: 610020
download_size: 14264492
dataset_size: 19845835
- config_name: th
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32693750
num_examples: 537447
download_size: 15849247
dataset_size: 32693750
- config_name: ti
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4366995
num_examples: 146479
download_size: 2648869
dataset_size: 4366995
- config_name: tk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5797050
num_examples: 184302
download_size: 3728802
dataset_size: 5797050
- config_name: tl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13661554
num_examples: 387377
download_size: 9456413
dataset_size: 13661554
- config_name: tly
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4309748
num_examples: 145312
download_size: 2609307
dataset_size: 4309748
- config_name: tly-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 35
num_examples: 1
download_size: 1793
dataset_size: 35
- config_name: tn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13936132
num_examples: 435219
download_size: 9780279
dataset_size: 13936132
- config_name: to
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13980327
num_examples: 436460
download_size: 9810650
dataset_size: 13980327
- config_name: tpi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14169019
num_examples: 442133
download_size: 9961827
dataset_size: 14169019
- config_name: tr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 72134544
num_examples: 1770267
download_size: 51032484
dataset_size: 72134544
- config_name: tru
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5322844
num_examples: 171327
download_size: 3371105
dataset_size: 5322844
- config_name: trv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 94285
num_examples: 3109
download_size: 65138
dataset_size: 94285
- config_name: ts
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13943481
num_examples: 435408
download_size: 9783789
dataset_size: 13943481
- config_name: tt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24182976
num_examples: 548502
download_size: 14868166
dataset_size: 24182976
- config_name: tt-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4943914
num_examples: 158198
download_size: 3048932
dataset_size: 4943914
- config_name: tt-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13842972
num_examples: 432513
download_size: 9702714
dataset_size: 13842972
- config_name: tum
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13924159
num_examples: 435110
download_size: 9770501
dataset_size: 13924159
- config_name: tw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13830508
num_examples: 432669
download_size: 9688164
dataset_size: 13830508
- config_name: ty
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16816401
num_examples: 507332
download_size: 12098154
dataset_size: 16816401
- config_name: tyv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4583082
num_examples: 149929
download_size: 2779632
dataset_size: 4583082
- config_name: tzm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4253588
num_examples: 144002
download_size: 2569067
dataset_size: 4253588
- config_name: udm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4854947
num_examples: 156300
download_size: 2958444
dataset_size: 4854947
- config_name: ug-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4316690
num_examples: 145443
download_size: 2614962
dataset_size: 4316690
- config_name: ug-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13786474
num_examples: 431056
download_size: 9659723
dataset_size: 13786474
- config_name: uk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 251058352
num_examples: 5108733
download_size: 168140976
dataset_size: 251058352
- config_name: ur
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 57063750
num_examples: 987011
download_size: 28328459
dataset_size: 57063750
- config_name: uz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11731793
num_examples: 344615
download_size: 8102734
dataset_size: 11731793
- config_name: uz-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252574
num_examples: 143981
download_size: 2567325
dataset_size: 4252574
- config_name: ve
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13932174
num_examples: 435216
download_size: 9777266
dataset_size: 13932174
- config_name: vec
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 52081230
num_examples: 1466867
download_size: 37307805
dataset_size: 52081230
- config_name: vep
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6174898
num_examples: 192298
download_size: 3994582
dataset_size: 6174898
- config_name: vi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 246835524
num_examples: 5743737
download_size: 172949263
dataset_size: 246835524
- config_name: vls
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42789297
num_examples: 1239359
download_size: 31228294
dataset_size: 42789297
- config_name: vmf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18352990
num_examples: 555205
download_size: 13289296
dataset_size: 18352990
- config_name: vo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 228352533
num_examples: 5610875
download_size: 155496988
dataset_size: 228352533
- config_name: vot
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5406190
num_examples: 173486
download_size: 3439433
dataset_size: 5406190
- config_name: wa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 49235347
num_examples: 1426584
download_size: 36167816
dataset_size: 49235347
- config_name: war
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 190306474
num_examples: 4449062
download_size: 133786270
dataset_size: 190306474
- config_name: wls
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4033
num_examples: 104
download_size: 5150
dataset_size: 4033
- config_name: wo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40961626
num_examples: 1193626
download_size: 29778666
dataset_size: 40961626
- config_name: wuu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40570130
num_examples: 1127741
download_size: 24209117
dataset_size: 40570130
- config_name: wya
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 28
num_examples: 1
download_size: 1740
dataset_size: 28
- config_name: xal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4475344
num_examples: 149984
download_size: 2722459
dataset_size: 4475344
- config_name: xh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20036194
num_examples: 615514
download_size: 14405310
dataset_size: 20036194
- config_name: xmf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5943645
num_examples: 169507
download_size: 3418593
dataset_size: 5943645
- config_name: xsy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4262789
num_examples: 144305
download_size: 2573349
dataset_size: 4262789
- config_name: yav
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4070
num_examples: 102
download_size: 4718
dataset_size: 4070
- config_name: yi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5495313
num_examples: 170277
download_size: 3373820
dataset_size: 5495313
- config_name: yo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 25424749
num_examples: 724345
download_size: 18086773
dataset_size: 25424749
- config_name: za
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15159230
num_examples: 365892
download_size: 7774767
dataset_size: 15159230
- config_name: zea
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14538518
num_examples: 451577
download_size: 10262897
dataset_size: 14538518
- config_name: zgh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4253917
num_examples: 144006
download_size: 2569373
dataset_size: 4253917
- config_name: zh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 264353677
num_examples: 5424320
download_size: 174420118
dataset_size: 264353677
- config_name: zh-cn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42868611
num_examples: 1158755
download_size: 27243799
dataset_size: 42868611
- config_name: zh-hans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 57233156
num_examples: 1483225
download_size: 36583522
dataset_size: 57233156
- config_name: zh-hant
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53502814
num_examples: 1356560
download_size: 36755083
dataset_size: 53502814
- config_name: zh-hk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15325323
num_examples: 408391
download_size: 10455809
dataset_size: 15325323
- config_name: zh-mo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6568267
num_examples: 180950
download_size: 3547260
dataset_size: 6568267
- config_name: zh-my
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32637498
num_examples: 916876
download_size: 19289581
dataset_size: 32637498
- config_name: zh-sg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 35325327
num_examples: 979652
download_size: 21150070
dataset_size: 35325327
- config_name: zh-tw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17500668
num_examples: 443057
download_size: 11121104
dataset_size: 17500668
- config_name: zh-yue
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1352
num_examples: 30
download_size: 2963
dataset_size: 1352
- config_name: zu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47349379
num_examples: 1380550
download_size: 34649660
dataset_size: 47349379
configs:
- config_name: aa
data_files:
- split: label
path: aa/label-*
- config_name: ab
data_files:
- split: label
path: ab/label-*
- config_name: abs
data_files:
- split: label
path: abs/label-*
- config_name: ace
data_files:
- split: label
path: ace/label-*
- config_name: ady
data_files:
- split: label
path: ady/label-*
- config_name: ady-cyrl
data_files:
- split: label
path: ady-cyrl/label-*
- config_name: aeb
data_files:
- split: label
path: aeb/label-*
- config_name: aeb-arab
data_files:
- split: label
path: aeb-arab/label-*
- config_name: aeb-latn
data_files:
- split: label
path: aeb-latn/label-*
- config_name: af
data_files:
- split: label
path: af/label-*
- config_name: agq
data_files:
- split: label
path: agq/label-*
- config_name: ak
data_files:
- split: label
path: ak/label-*
- config_name: aln
data_files:
- split: label
path: aln/label-*
- config_name: als
data_files:
- split: label
path: als/label-*
- config_name: alt
data_files:
- split: label
path: alt/label-*
- config_name: am
data_files:
- split: label
path: am/label-*
- config_name: ami
data_files:
- split: label
path: ami/label-*
- config_name: an
data_files:
- split: label
path: an/label-*
- config_name: ang
data_files:
- split: label
path: ang/label-*
- config_name: anp
data_files:
- split: label
path: anp/label-*
- config_name: ar
data_files:
- split: label
path: ar/label-*
- config_name: arc
data_files:
- split: label
path: arc/label-*
- config_name: arn
data_files:
- split: label
path: arn/label-*
- config_name: arq
data_files:
- split: label
path: arq/label-*
- config_name: ary
data_files:
- split: label
path: ary/label-*
- config_name: arz
data_files:
- split: label
path: arz/label-*
- config_name: as
data_files:
- split: label
path: as/label-*
- config_name: ase
data_files:
- split: label
path: ase/label-*
- config_name: ast
data_files:
- split: label
path: ast/label-*
- config_name: atj
data_files:
- split: label
path: atj/label-*
- config_name: av
data_files:
- split: label
path: av/label-*
- config_name: avk
data_files:
- split: label
path: avk/label-*
- config_name: awa
data_files:
- split: label
path: awa/label-*
- config_name: ay
data_files:
- split: label
path: ay/label-*
- config_name: az
data_files:
- split: label
path: az/label-*
- config_name: azb
data_files:
- split: label
path: azb/label-*
- config_name: ba
data_files:
- split: label
path: ba/label-*
- config_name: ban
data_files:
- split: label
path: ban/label-*
- config_name: ban-bali
data_files:
- split: label
path: ban-bali/label-*
- config_name: bar
data_files:
- split: label
path: bar/label-*
- config_name: bbc
data_files:
- split: label
path: bbc/label-*
- config_name: bcc
data_files:
- split: label
path: bcc/label-*
- config_name: be
data_files:
- split: label
path: be/label-*
- config_name: be-tarask
data_files:
- split: label
path: be-tarask/label-*
- config_name: bg
data_files:
- split: label
path: bg/label-*
- config_name: bgn
data_files:
- split: label
path: bgn/label-*
- config_name: bi
data_files:
- split: label
path: bi/label-*
- config_name: bjn
data_files:
- split: label
path: bjn/label-*
- config_name: bm
data_files:
- split: label
path: bm/label-*
- config_name: bn
data_files:
- split: label
path: bn/label-*
- config_name: bo
data_files:
- split: label
path: bo/label-*
- config_name: bpy
data_files:
- split: label
path: bpy/label-*
- config_name: bqi
data_files:
- split: label
path: bqi/label-*
- config_name: br
data_files:
- split: label
path: br/label-*
- config_name: brh
data_files:
- split: label
path: brh/label-*
- config_name: bs
data_files:
- split: label
path: bs/label-*
- config_name: btm
data_files:
- split: label
path: btm/label-*
- config_name: bto
data_files:
- split: label
path: bto/label-*
- config_name: bug
data_files:
- split: label
path: bug/label-*
- config_name: bxr
data_files:
- split: label
path: bxr/label-*
- config_name: ca
data_files:
- split: label
path: ca/label-*
- config_name: cbk-zam
data_files:
- split: label
path: cbk-zam/label-*
- config_name: cdo
data_files:
- split: label
path: cdo/label-*
- config_name: ce
data_files:
- split: label
path: ce/label-*
- config_name: ceb
data_files:
- split: label
path: ceb/label-*
- config_name: ch
data_files:
- split: label
path: ch/label-*
- config_name: cho
data_files:
- split: label
path: cho/label-*
- config_name: chr
data_files:
- split: label
path: chr/label-*
- config_name: chy
data_files:
- split: label
path: chy/label-*
- config_name: ckb
data_files:
- split: label
path: ckb/label-*
- config_name: co
data_files:
- split: label
path: co/label-*
- config_name: cps
data_files:
- split: label
path: cps/label-*
- config_name: cr
data_files:
- split: label
path: cr/label-*
- config_name: crh
data_files:
- split: label
path: crh/label-*
- config_name: crh-cyrl
data_files:
- split: label
path: crh-cyrl/label-*
- config_name: crh-latn
data_files:
- split: label
path: crh-latn/label-*
- config_name: cs
data_files:
- split: label
path: cs/label-*
- config_name: csb
data_files:
- split: label
path: csb/label-*
- config_name: cv
data_files:
- split: label
path: cv/label-*
- config_name: cy
data_files:
- split: label
path: cy/label-*
- config_name: da
data_files:
- split: label
path: da/label-*
- config_name: dag
data_files:
- split: label
path: dag/label-*
- config_name: de
data_files:
- split: label
path: de/label-*
- config_name: de-at
data_files:
- split: label
path: de-at/label-*
- config_name: de-ch
data_files:
- split: label
path: de-ch/label-*
- config_name: de-formal
data_files:
- split: label
path: de-formal/label-*
- config_name: din
data_files:
- split: label
path: din/label-*
- config_name: diq
data_files:
- split: label
path: diq/label-*
- config_name: dsb
data_files:
- split: label
path: dsb/label-*
- config_name: dtp
data_files:
- split: label
path: dtp/label-*
- config_name: dty
data_files:
- split: label
path: dty/label-*
- config_name: dua
data_files:
- split: label
path: dua/label-*
- config_name: dv
data_files:
- split: label
path: dv/label-*
- config_name: dz
data_files:
- split: label
path: dz/label-*
- config_name: ee
data_files:
- split: label
path: ee/label-*
- config_name: egl
data_files:
- split: label
path: egl/label-*
- config_name: el
data_files:
- split: label
path: el/label-*
- config_name: eml
data_files:
- split: label
path: eml/label-*
- config_name: en
data_files:
- split: label
path: en/label-*
default: true
- config_name: en-ca
data_files:
- split: label
path: en-ca/label-*
- config_name: en-gb
data_files:
- split: label
path: en-gb/label-*
- config_name: en-us
data_files:
- split: label
path: en-us/label-*
- config_name: eo
data_files:
- split: label
path: eo/label-*
- config_name: es
data_files:
- split: label
path: es/label-*
- config_name: es-419
data_files:
- split: label
path: es-419/label-*
- config_name: es-formal
data_files:
- split: label
path: es-formal/label-*
- config_name: et
data_files:
- split: label
path: et/label-*
- config_name: eu
data_files:
- split: label
path: eu/label-*
- config_name: ext
data_files:
- split: label
path: ext/label-*
- config_name: fa
data_files:
- split: label
path: fa/label-*
- config_name: ff
data_files:
- split: label
path: ff/label-*
- config_name: fi
data_files:
- split: label
path: fi/label-*
- config_name: fit
data_files:
- split: label
path: fit/label-*
- config_name: fj
data_files:
- split: label
path: fj/label-*
- config_name: fkv
data_files:
- split: label
path: fkv/label-*
- config_name: fo
data_files:
- split: label
path: fo/label-*
- config_name: fr
data_files:
- split: label
path: fr/label-*
- config_name: frc
data_files:
- split: label
path: frc/label-*
- config_name: frp
data_files:
- split: label
path: frp/label-*
- config_name: frr
data_files:
- split: label
path: frr/label-*
- config_name: fur
data_files:
- split: label
path: fur/label-*
- config_name: ga
data_files:
- split: label
path: ga/label-*
- config_name: gag
data_files:
- split: label
path: gag/label-*
- config_name: gan
data_files:
- split: label
path: gan/label-*
- config_name: gan-hans
data_files:
- split: label
path: gan-hans/label-*
- config_name: gan-hant
data_files:
- split: label
path: gan-hant/label-*
- config_name: gcr
data_files:
- split: label
path: gcr/label-*
- config_name: gd
data_files:
- split: label
path: gd/label-*
- config_name: gl
data_files:
- split: label
path: gl/label-*
- config_name: glk
data_files:
- split: label
path: glk/label-*
- config_name: gn
data_files:
- split: label
path: gn/label-*
- config_name: gom
data_files:
- split: label
path: gom/label-*
- config_name: gom-deva
data_files:
- split: label
path: gom-deva/label-*
- config_name: gom-latn
data_files:
- split: label
path: gom-latn/label-*
- config_name: gor
data_files:
- split: label
path: gor/label-*
- config_name: got
data_files:
- split: label
path: got/label-*
- config_name: grc
data_files:
- split: label
path: grc/label-*
- config_name: gu
data_files:
- split: label
path: gu/label-*
- config_name: guc
data_files:
- split: label
path: guc/label-*
- config_name: guw
data_files:
- split: label
path: guw/label-*
- config_name: gv
data_files:
- split: label
path: gv/label-*
- config_name: ha
data_files:
- split: label
path: ha/label-*
- config_name: hak
data_files:
- split: label
path: hak/label-*
- config_name: haw
data_files:
- split: label
path: haw/label-*
- config_name: he
data_files:
- split: label
path: he/label-*
- config_name: hi
data_files:
- split: label
path: hi/label-*
- config_name: hif
data_files:
- split: label
path: hif/label-*
- config_name: hif-latn
data_files:
- split: label
path: hif-latn/label-*
- config_name: hil
data_files:
- split: label
path: hil/label-*
- config_name: ho
data_files:
- split: label
path: ho/label-*
- config_name: hr
data_files:
- split: label
path: hr/label-*
- config_name: hrx
data_files:
- split: label
path: hrx/label-*
- config_name: hsb
data_files:
- split: label
path: hsb/label-*
- config_name: ht
data_files:
- split: label
path: ht/label-*
- config_name: hu
data_files:
- split: label
path: hu/label-*
- config_name: hu-formal
data_files:
- split: label
path: hu-formal/label-*
- config_name: hy
data_files:
- split: label
path: hy/label-*
- config_name: hyw
data_files:
- split: label
path: hyw/label-*
- config_name: hz
data_files:
- split: label
path: hz/label-*
- config_name: ia
data_files:
- split: label
path: ia/label-*
- config_name: id
data_files:
- split: label
path: id/label-*
- config_name: ie
data_files:
- split: label
path: ie/label-*
- config_name: ig
data_files:
- split: label
path: ig/label-*
- config_name: ii
data_files:
- split: label
path: ii/label-*
- config_name: ik
data_files:
- split: label
path: ik/label-*
- config_name: ike-cans
data_files:
- split: label
path: ike-cans/label-*
- config_name: ike-latn
data_files:
- split: label
path: ike-latn/label-*
- config_name: ilo
data_files:
- split: label
path: ilo/label-*
- config_name: inh
data_files:
- split: label
path: inh/label-*
- config_name: io
data_files:
- split: label
path: io/label-*
- config_name: is
data_files:
- split: label
path: is/label-*
- config_name: it
data_files:
- split: label
path: it/label-*
- config_name: iu
data_files:
- split: label
path: iu/label-*
- config_name: ja
data_files:
- split: label
path: ja/label-*
- config_name: jam
data_files:
- split: label
path: jam/label-*
- config_name: jbo
data_files:
- split: label
path: jbo/label-*
- config_name: jv
data_files:
- split: label
path: jv/label-*
- config_name: ka
data_files:
- split: label
path: ka/label-*
- config_name: kaa
data_files:
- split: label
path: kaa/label-*
- config_name: kab
data_files:
- split: label
path: kab/label-*
- config_name: kbd
data_files:
- split: label
path: kbd/label-*
- config_name: kbd-cyrl
data_files:
- split: label
path: kbd-cyrl/label-*
- config_name: kbp
data_files:
- split: label
path: kbp/label-*
- config_name: kea
data_files:
- split: label
path: kea/label-*
- config_name: kg
data_files:
- split: label
path: kg/label-*
- config_name: khw
data_files:
- split: label
path: khw/label-*
- config_name: ki
data_files:
- split: label
path: ki/label-*
- config_name: kj
data_files:
- split: label
path: kj/label-*
- config_name: kjp
data_files:
- split: label
path: kjp/label-*
- config_name: kk
data_files:
- split: label
path: kk/label-*
- config_name: kk-arab
data_files:
- split: label
path: kk-arab/label-*
- config_name: kk-kz
data_files:
- split: label
path: kk-kz/label-*
- config_name: kk-latn
data_files:
- split: label
path: kk-latn/label-*
- config_name: kk-tr
data_files:
- split: label
path: kk-tr/label-*
- config_name: ko
data_files:
- split: label
path: ko/label-*
- config_name: ko-kp
data_files:
- split: label
path: ko-kp/label-*
- config_name: koi
data_files:
- split: label
path: koi/label-*
- config_name: kr
data_files:
- split: label
path: kr/label-*
- config_name: krc
data_files:
- split: label
path: krc/label-*
- config_name: kri
data_files:
- split: label
path: kri/label-*
- config_name: krj
data_files:
- split: label
path: krj/label-*
- config_name: krl
data_files:
- split: label
path: krl/label-*
- config_name: ks
data_files:
- split: label
path: ks/label-*
- config_name: ks-deva
data_files:
- split: label
path: ks-deva/label-*
- config_name: ksh
data_files:
- split: label
path: ksh/label-*
- config_name: ku
data_files:
- split: label
path: ku/label-*
- config_name: ku-arab
data_files:
- split: label
path: ku-arab/label-*
- config_name: ku-latn
data_files:
- split: label
path: ku-latn/label-*
- config_name: kum
data_files:
- split: label
path: kum/label-*
- config_name: kv
data_files:
- split: label
path: kv/label-*
- config_name: kw
data_files:
- split: label
path: kw/label-*
- config_name: ky
data_files:
- split: label
path: ky/label-*
- config_name: la
data_files:
- split: label
path: la/label-*
- config_name: lad
data_files:
- split: label
path: lad/label-*
- config_name: lb
data_files:
- split: label
path: lb/label-*
- config_name: lbe
data_files:
- split: label
path: lbe/label-*
- config_name: lez
data_files:
- split: label
path: lez/label-*
- config_name: lfn
data_files:
- split: label
path: lfn/label-*
- config_name: lg
data_files:
- split: label
path: lg/label-*
- config_name: li
data_files:
- split: label
path: li/label-*
- config_name: lij
data_files:
- split: label
path: lij/label-*
- config_name: liv
data_files:
- split: label
path: liv/label-*
- config_name: lki
data_files:
- split: label
path: lki/label-*
- config_name: lld
data_files:
- split: label
path: lld/label-*
- config_name: lmo
data_files:
- split: label
path: lmo/label-*
- config_name: ln
data_files:
- split: label
path: ln/label-*
- config_name: lo
data_files:
- split: label
path: lo/label-*
- config_name: loz
data_files:
- split: label
path: loz/label-*
- config_name: lt
data_files:
- split: label
path: lt/label-*
- config_name: ltg
data_files:
- split: label
path: ltg/label-*
- config_name: lus
data_files:
- split: label
path: lus/label-*
- config_name: luz
data_files:
- split: label
path: luz/label-*
- config_name: lv
data_files:
- split: label
path: lv/label-*
- config_name: lzh
data_files:
- split: label
path: lzh/label-*
- config_name: mdf
data_files:
- split: label
path: mdf/label-*
- config_name: mg
data_files:
- split: label
path: mg/label-*
- config_name: mh
data_files:
- split: label
path: mh/label-*
- config_name: mi
data_files:
- split: label
path: mi/label-*
- config_name: min
data_files:
- split: label
path: min/label-*
- config_name: mk
data_files:
- split: label
path: mk/label-*
- config_name: ml
data_files:
- split: label
path: ml/label-*
- config_name: mn
data_files:
- split: label
path: mn/label-*
- config_name: mni
data_files:
- split: label
path: mni/label-*
- config_name: mnw
data_files:
- split: label
path: mnw/label-*
- config_name: mo
data_files:
- split: label
path: mo/label-*
- config_name: mr
data_files:
- split: label
path: mr/label-*
- config_name: mrh
data_files:
- split: label
path: mrh/label-*
- config_name: mrj
data_files:
- split: label
path: mrj/label-*
- config_name: ms
data_files:
- split: label
path: ms/label-*
- config_name: ms-arab
data_files:
- split: label
path: ms-arab/label-*
- config_name: mt
data_files:
- split: label
path: mt/label-*
- config_name: mus
data_files:
- split: label
path: mus/label-*
- config_name: mwl
data_files:
- split: label
path: mwl/label-*
- config_name: my
data_files:
- split: label
path: my/label-*
- config_name: mzn
data_files:
- split: label
path: mzn/label-*
- config_name: na
data_files:
- split: label
path: na/label-*
- config_name: nah
data_files:
- split: label
path: nah/label-*
- config_name: nan-hani
data_files:
- split: label
path: nan-hani/label-*
- config_name: nap
data_files:
- split: label
path: nap/label-*
- config_name: nb
data_files:
- split: label
path: nb/label-*
- config_name: nds
data_files:
- split: label
path: nds/label-*
- config_name: nds-nl
data_files:
- split: label
path: nds-nl/label-*
- config_name: ne
data_files:
- split: label
path: ne/label-*
- config_name: new
data_files:
- split: label
path: new/label-*
- config_name: ng
data_files:
- split: label
path: ng/label-*
- config_name: nia
data_files:
- split: label
path: nia/label-*
- config_name: niu
data_files:
- split: label
path: niu/label-*
- config_name: nl
data_files:
- split: label
path: nl/label-*
- config_name: nn
data_files:
- split: label
path: nn/label-*
- config_name: 'no'
data_files:
- split: label
path: no/label-*
- config_name: nod
data_files:
- split: label
path: nod/label-*
- config_name: nov
data_files:
- split: label
path: nov/label-*
- config_name: nqo
data_files:
- split: label
path: nqo/label-*
- config_name: nrm
data_files:
- split: label
path: nrm/label-*
- config_name: nso
data_files:
- split: label
path: nso/label-*
- config_name: nv
data_files:
- split: label
path: nv/label-*
- config_name: ny
data_files:
- split: label
path: ny/label-*
- config_name: nys
data_files:
- split: label
path: nys/label-*
- config_name: oc
data_files:
- split: label
path: oc/label-*
- config_name: olo
data_files:
- split: label
path: olo/label-*
- config_name: om
data_files:
- split: label
path: om/label-*
- config_name: or
data_files:
- split: label
path: or/label-*
- config_name: os
data_files:
- split: label
path: os/label-*
- config_name: ota
data_files:
- split: label
path: ota/label-*
- config_name: pa
data_files:
- split: label
path: pa/label-*
- config_name: pam
data_files:
- split: label
path: pam/label-*
- config_name: pap
data_files:
- split: label
path: pap/label-*
- config_name: pcd
data_files:
- split: label
path: pcd/label-*
- config_name: pdc
data_files:
- split: label
path: pdc/label-*
- config_name: pdt
data_files:
- split: label
path: pdt/label-*
- config_name: pfl
data_files:
- split: label
path: pfl/label-*
- config_name: pi
data_files:
- split: label
path: pi/label-*
- config_name: pih
data_files:
- split: label
path: pih/label-*
- config_name: pl
data_files:
- split: label
path: pl/label-*
- config_name: pms
data_files:
- split: label
path: pms/label-*
- config_name: pnb
data_files:
- split: label
path: pnb/label-*
- config_name: pnt
data_files:
- split: label
path: pnt/label-*
- config_name: prg
data_files:
- split: label
path: prg/label-*
- config_name: ps
data_files:
- split: label
path: ps/label-*
- config_name: pt
data_files:
- split: label
path: pt/label-*
- config_name: pt-br
data_files:
- split: label
path: pt-br/label-*
- config_name: pwn
data_files:
- split: label
path: pwn/label-*
- config_name: qu
data_files:
- split: label
path: qu/label-*
- config_name: quc
data_files:
- split: label
path: quc/label-*
- config_name: qug
data_files:
- split: label
path: qug/label-*
- config_name: rgn
data_files:
- split: label
path: rgn/label-*
- config_name: rif
data_files:
- split: label
path: rif/label-*
- config_name: rm
data_files:
- split: label
path: rm/label-*
- config_name: rmc
data_files:
- split: label
path: rmc/label-*
- config_name: rmy
data_files:
- split: label
path: rmy/label-*
- config_name: rn
data_files:
- split: label
path: rn/label-*
- config_name: ro
data_files:
- split: label
path: ro/label-*
- config_name: roa-tara
data_files:
- split: label
path: roa-tara/label-*
- config_name: ru
data_files:
- split: label
path: ru/label-*
- config_name: rue
data_files:
- split: label
path: rue/label-*
- config_name: rup
data_files:
- split: label
path: rup/label-*
- config_name: ruq-cyrl
data_files:
- split: label
path: ruq-cyrl/label-*
- config_name: ruq-latn
data_files:
- split: label
path: ruq-latn/label-*
- config_name: rw
data_files:
- split: label
path: rw/label-*
- config_name: rwr
data_files:
- split: label
path: rwr/label-*
- config_name: ryu
data_files:
- split: label
path: ryu/label-*
- config_name: sa
data_files:
- split: label
path: sa/label-*
- config_name: sat
data_files:
- split: label
path: sat/label-*
- config_name: sc
data_files:
- split: label
path: sc/label-*
- config_name: scn
data_files:
- split: label
path: scn/label-*
- config_name: sco
data_files:
- split: label
path: sco/label-*
- config_name: sd
data_files:
- split: label
path: sd/label-*
- config_name: sdc
data_files:
- split: label
path: sdc/label-*
- config_name: se
data_files:
- split: label
path: se/label-*
- config_name: sei
data_files:
- split: label
path: sei/label-*
- config_name: sg
data_files:
- split: label
path: sg/label-*
- config_name: sh
data_files:
- split: label
path: sh/label-*
- config_name: shi-latn
data_files:
- split: label
path: shi-latn/label-*
- config_name: shi-tfng
data_files:
- split: label
path: shi-tfng/label-*
- config_name: shn
data_files:
- split: label
path: shn/label-*
- config_name: shy-latn
data_files:
- split: label
path: shy-latn/label-*
- config_name: si
data_files:
- split: label
path: si/label-*
- config_name: sjd
data_files:
- split: label
path: sjd/label-*
- config_name: sje
data_files:
- split: label
path: sje/label-*
- config_name: sju
data_files:
- split: label
path: sju/label-*
- config_name: sk
data_files:
- split: label
path: sk/label-*
- config_name: skr
data_files:
- split: label
path: skr/label-*
- config_name: sl
data_files:
- split: label
path: sl/label-*
- config_name: sli
data_files:
- split: label
path: sli/label-*
- config_name: sm
data_files:
- split: label
path: sm/label-*
- config_name: sma
data_files:
- split: label
path: sma/label-*
- config_name: smj
data_files:
- split: label
path: smj/label-*
- config_name: smn
data_files:
- split: label
path: smn/label-*
- config_name: sms
data_files:
- split: label
path: sms/label-*
- config_name: sn
data_files:
- split: label
path: sn/label-*
- config_name: sq
data_files:
- split: label
path: sq/label-*
- config_name: sr
data_files:
- split: label
path: sr/label-*
- config_name: sr-ec
data_files:
- split: label
path: sr-ec/label-*
- config_name: sr-el
data_files:
- split: label
path: sr-el/label-*
- config_name: srq
data_files:
- split: label
path: srq/label-*
- config_name: ss
data_files:
- split: label
path: ss/label-*
- config_name: st
data_files:
- split: label
path: st/label-*
- config_name: stq
data_files:
- split: label
path: stq/label-*
- config_name: su
data_files:
- split: label
path: su/label-*
- config_name: sv
data_files:
- split: label
path: sv/label-*
- config_name: sw
data_files:
- split: label
path: sw/label-*
- config_name: szl
data_files:
- split: label
path: szl/label-*
- config_name: szy
data_files:
- split: label
path: szy/label-*
- config_name: ta
data_files:
- split: label
path: ta/label-*
- config_name: tay
data_files:
- split: label
path: tay/label-*
- config_name: tcy
data_files:
- split: label
path: tcy/label-*
- config_name: te
data_files:
- split: label
path: te/label-*
- config_name: tet
data_files:
- split: label
path: tet/label-*
- config_name: tg
data_files:
- split: label
path: tg/label-*
- config_name: tg-cyrl
data_files:
- split: label
path: tg-cyrl/label-*
- config_name: tg-latn
data_files:
- split: label
path: tg-latn/label-*
- config_name: th
data_files:
- split: label
path: th/label-*
- config_name: ti
data_files:
- split: label
path: ti/label-*
- config_name: tk
data_files:
- split: label
path: tk/label-*
- config_name: tl
data_files:
- split: label
path: tl/label-*
- config_name: tly
data_files:
- split: label
path: tly/label-*
- config_name: tly-cyrl
data_files:
- split: label
path: tly-cyrl/label-*
- config_name: tn
data_files:
- split: label
path: tn/label-*
- config_name: to
data_files:
- split: label
path: to/label-*
- config_name: tpi
data_files:
- split: label
path: tpi/label-*
- config_name: tr
data_files:
- split: label
path: tr/label-*
- config_name: tru
data_files:
- split: label
path: tru/label-*
- config_name: trv
data_files:
- split: label
path: trv/label-*
- config_name: ts
data_files:
- split: label
path: ts/label-*
- config_name: tt
data_files:
- split: label
path: tt/label-*
- config_name: tt-cyrl
data_files:
- split: label
path: tt-cyrl/label-*
- config_name: tt-latn
data_files:
- split: label
path: tt-latn/label-*
- config_name: tum
data_files:
- split: label
path: tum/label-*
- config_name: tw
data_files:
- split: label
path: tw/label-*
- config_name: ty
data_files:
- split: label
path: ty/label-*
- config_name: tyv
data_files:
- split: label
path: tyv/label-*
- config_name: tzm
data_files:
- split: label
path: tzm/label-*
- config_name: udm
data_files:
- split: label
path: udm/label-*
- config_name: ug-arab
data_files:
- split: label
path: ug-arab/label-*
- config_name: ug-latn
data_files:
- split: label
path: ug-latn/label-*
- config_name: uk
data_files:
- split: label
path: uk/label-*
- config_name: ur
data_files:
- split: label
path: ur/label-*
- config_name: uz
data_files:
- split: label
path: uz/label-*
- config_name: uz-cyrl
data_files:
- split: label
path: uz-cyrl/label-*
- config_name: ve
data_files:
- split: label
path: ve/label-*
- config_name: vec
data_files:
- split: label
path: vec/label-*
- config_name: vep
data_files:
- split: label
path: vep/label-*
- config_name: vi
data_files:
- split: label
path: vi/label-*
- config_name: vls
data_files:
- split: label
path: vls/label-*
- config_name: vmf
data_files:
- split: label
path: vmf/label-*
- config_name: vo
data_files:
- split: label
path: vo/label-*
- config_name: vot
data_files:
- split: label
path: vot/label-*
- config_name: wa
data_files:
- split: label
path: wa/label-*
- config_name: war
data_files:
- split: label
path: war/label-*
- config_name: wls
data_files:
- split: label
path: wls/label-*
- config_name: wo
data_files:
- split: label
path: wo/label-*
- config_name: wuu
data_files:
- split: label
path: wuu/label-*
- config_name: wya
data_files:
- split: label
path: wya/label-*
- config_name: xal
data_files:
- split: label
path: xal/label-*
- config_name: xh
data_files:
- split: label
path: xh/label-*
- config_name: xmf
data_files:
- split: label
path: xmf/label-*
- config_name: xsy
data_files:
- split: label
path: xsy/label-*
- config_name: yav
data_files:
- split: label
path: yav/label-*
- config_name: yi
data_files:
- split: label
path: yi/label-*
- config_name: yo
data_files:
- split: label
path: yo/label-*
- config_name: za
data_files:
- split: label
path: za/label-*
- config_name: zea
data_files:
- split: label
path: zea/label-*
- config_name: zgh
data_files:
- split: label
path: zgh/label-*
- config_name: zh
data_files:
- split: label
path: zh/label-*
- config_name: zh-cn
data_files:
- split: label
path: zh-cn/label-*
- config_name: zh-hans
data_files:
- split: label
path: zh-hans/label-*
- config_name: zh-hant
data_files:
- split: label
path: zh-hant/label-*
- config_name: zh-hk
data_files:
- split: label
path: zh-hk/label-*
- config_name: zh-mo
data_files:
- split: label
path: zh-mo/label-*
- config_name: zh-my
data_files:
- split: label
path: zh-my/label-*
- config_name: zh-sg
data_files:
- split: label
path: zh-sg/label-*
- config_name: zh-tw
data_files:
- split: label
path: zh-tw/label-*
- config_name: zh-yue
data_files:
- split: label
path: zh-yue/label-*
- config_name: zu
data_files:
- split: label
path: zu/label-*
task_categories:
- translation
- text2text-generation
language:
- en
- fr
- de
- ja
- zh
- hi
- ar
- bn
- ru
- es
---
# Wikidata Labels
Large parallel corpus for machine translation
- Entity label data extracted from Wikidata (2022-01-03), filtered for item entities only
- Only download the languages you need with `datasets>=2.14.0`
- Similar dataset: https://huggingface.co/datasets/wmt/wikititles (Wikipedia titles for 18 language pairs, instead of all Wikidata entities)
## Dataset Details
### Dataset Sources
- Wikidata JSON dump (wikidata-20220103-all.json.gz) https://www.wikidata.org/wiki/Wikidata:Database_download
## Uses
You can generate parallel text examples from this dataset as shown below:
```python
from datasets import load_dataset
import pandas as pd
def parallel_labels(lang_codes: list, how="inner", repo_id="rayliuca/wikidata_entity_label", merge_config={}, datasets_config={}) -> pd.DataFrame:
    out_df = None
    for lc in lang_codes:
        # each language is its own config; its single split is named "label"
        dataset = load_dataset(repo_id, lc, **datasets_config)
        dataset_df = dataset['label'].to_pandas().rename(columns={"label": lc}).drop(columns=['lastrevid'])
        if out_df is None:
            out_df = dataset_df
        else:
            # align labels across languages on the shared Wikidata ID
            out_df = out_df.merge(
                dataset_df,
                on='wikidata_id',
                how=how,
                **merge_config
            )
    return out_df
# Note: the "en" subset is >4GB
parallel_labels(['en', 'fr', 'ja', 'zh']).head()
```
### Output
| | wikidata_id | en | fr | ja | zh |
|---:|:--------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------|:---------------------------------------------|
| 0 | Q109739412 | SARS-CoV-2 Omicron variant | variant Omicron du SARS-CoV-2 | SARSコロナウイルス2-オミクロン株 | 嚴重急性呼吸道症候群冠狀病毒2型Omicron變異株 |
| 1 | Q108460606 | Ulughbegsaurus | Ulughbegsaurus | ウルグベグサウルス | 兀魯伯龍屬 |
| 2 | Q108556886 | AUKUS | AUKUS | AUKUS | AUKUS |
| 3 | Q106496152 | Claude Joseph | Claude Joseph | クロード・ジョゼフ | 克洛德·约瑟夫 |
| 4 | Q105519361 | The World's Finest Assassin Gets Reincarnated in a Different World as an Aristocrat | The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat | 世界最高の暗殺者、異世界貴族に転生する | 世界頂尖的暗殺者轉生為異世界貴族 |
Note: the example table above shows a quirk of the Wikidata labels. The French Wikipedia page [The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat](https://fr.wikipedia.org/wiki/The_World%27s_Finest_Assassin_Gets_Reincarnated_in_Another_World_as_an_Aristocrat) uses English for its title, so the French label is not a French translation. While this could be disadvantageous for direct translation training, it also provides insight into what native speakers actually call this entity, rather than a literal translation of the title.
## Dataset Structure
Each language has its own subset (aka config), which means you only have to download the languages you need with `datasets>=2.14.0`.
Each subset has these fields (see the loading example below):
- wikidata_id
- lastrevid
- label
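For example, a single language subset can be loaded on its own (a minimal sketch; `ja` is just one of the available language configs):
```python
from datasets import load_dataset

# Download and load only the Japanese subset (datasets>=2.14.0 fetches just this config)
ds = load_dataset("rayliuca/wikidata_entity_label", "ja", split="label")

print(ds.column_names)  # ['wikidata_id', 'lastrevid', 'label']
print(ds[0])
```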
## Dataset Creation
#### Data Collection and Processing
- Filtered for item entities only
- Ignored the descriptions, as those texts are not very parallel (see the sketch below)
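For illustration only, the filtering described above could look roughly like the sketch below. This is not the original processing script; the dump path and helper name are hypothetical.
```python
import gzip
import json

# Hypothetical sketch: stream the Wikidata JSON dump and keep labels of item entities only
def iter_item_labels(dump_path="wikidata-20220103-all.json.gz"):
    with gzip.open(dump_path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip().rstrip(",")
            if not line or line in ("[", "]"):
                continue  # skip the enclosing array brackets of the dump
            entity = json.loads(line)
            if entity.get("type") != "item":
                continue  # keep item entities only (properties, lexemes, etc. are dropped)
            for lang, label in entity.get("labels", {}).items():
                # descriptions are ignored, as they are not very parallel
                yield entity["id"], entity.get("lastrevid"), lang, label["value"]
```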
## Bias, Risks, and Limitations
- Might be slightly outdated (2022)
- Popular languages have more entries
- Labels are not guaranteed to be literal translations (see examples above) |
gvecchio/MatSynth | gvecchio | "2024-04-16T13:44:10Z" | 15,307 | 47 | [
"task_categories:image-to-image",
"task_categories:unconditional-image-generation",
"task_categories:image-classification",
"task_categories:text-to-image",
"language:en",
"size_categories:1K<n<10K",
"arxiv:2401.06056",
"region:us",
"materials",
"pbr",
"4d",
"graphics",
"rendering",
"svbrdf",
"synthetic"
] | [
"image-to-image",
"unconditional-image-generation",
"image-classification",
"text-to-image"
] | "2023-11-29T11:24:57Z" | ---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- image-to-image
- unconditional-image-generation
- image-classification
- text-to-image
pretty_name: MatSynth
dataset_info:
features:
- name: name
dtype: string
- name: category
dtype:
class_label:
names:
'0': ceramic
'1': concrete
'2': fabric
'3': ground
'4': leather
'5': marble
'6': metal
'7': misc
'8': plaster
'9': plastic
'10': stone
'11': terracotta
'12': wood
- name: metadata
struct:
- name: authors
sequence: string
- name: category
dtype: string
- name: description
dtype: string
- name: height_factor
dtype: float32
- name: height_mean
dtype: float32
- name: license
dtype: string
- name: link
dtype: string
- name: maps
sequence: string
- name: method
dtype: string
- name: name
dtype: string
- name: physical_size
dtype: float32
- name: source
dtype: string
- name: stationary
dtype: bool
- name: tags
sequence: string
- name: version_date
dtype: string
- name: basecolor
dtype: image
- name: diffuse
dtype: image
- name: displacement
dtype: image
- name: height
dtype: image
- name: metallic
dtype: image
- name: normal
dtype: image
- name: opacity
dtype: image
- name: roughness
dtype: image
- name: specular
dtype: image
- name: blend_mask
dtype: image
splits:
- name: test
num_bytes: 7443356066.0
num_examples: 89
- name: train
num_bytes: 430581667965.1
num_examples: 5700
download_size: 440284274332
dataset_size: 438025024031.1
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
tags:
- materials
- pbr
- 4d
- graphics
- rendering
- svbrdf
- synthetic
viewer: false
---
# MatSynth
MatSynth is a Physically Based Rendering (PBR) materials dataset designed for modern AI applications.
This dataset consists of over 4,000 ultra-high resolution PBR materials, offering unparalleled scale, diversity, and detail.
Meticulously collected and curated, MatSynth is poised to drive innovation in material acquisition and generation applications, providing a rich resource for researchers, developers, and enthusiasts in computer graphics and related fields.
For further information, refer to our paper: ["MatSynth: A Modern PBR Materials Dataset"](https://arxiv.org/abs/2401.06056) available on arXiv.
<center>
<img src="https://gvecchio.com/matsynth/static/images/teaser.png" style="border-radius:10px">
</center>
## 🔍 Dataset Details
### Dataset Description
MatSynth is a new large-scale dataset comprising over 4,000 ultra-high resolution Physically Based Rendering (PBR) materials,
all released under permissive licensing.
All materials in the dataset are represented by a common set of maps (*Basecolor*, *Diffuse*, *Normal*, *Height*, *Roughness*, *Metallic*, *Specular* and, when useful, *Opacity*),
modelling both the reflectance and mesostructure of the material.
Each material in the dataset comes with rich metadata, including information on its origin, licensing details, category, tags, creation method,
and, when available, descriptions and physical size.
This comprehensive metadata facilitates precise material selection and usage, catering to the specific needs of users.
<center>
<img src="https://gvecchio.com/matsynth/static/images/data.png" style="border-radius:10px">
</center>
## 📂 Dataset Structure
The MatSynth dataset is divided into two splits: the test split, containing 89 materials, and the train split, consisting of 3,980 materials.
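For instance, the test split can be loaded on its own (a minimal sketch; streaming avoids downloading the full dataset up front):
```python
from datasets import load_dataset

# Load only the 89 test materials; streaming avoids pulling the full dataset locally
test_ds = load_dataset("gvecchio/MatSynth", split="test", streaming=True)

sample = next(iter(test_ds))
print(sample["name"], sample["metadata"]["category"])
```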
## 🔨 Dataset Creation
The MatSynth dataset is designed to support modern, learning-based techniques for a variety of material-related tasks including,
but not limited to, material acquisition, material generation and synthetic data generation e.g. for retrieval or segmentation.
### 🗃️ Source Data
The MatSynth dataset is the result of an extensive collection of data from multiple online sources operating under the CC0 and CC-BY licensing framework.
This collection strategy makes it possible to capture a broad spectrum of materials,
from commonly used ones to more niche or specialized variants, while guaranteeing that the data can be used for a variety of use cases.
Materials under CC0 license were collected from [AmbientCG](https://ambientcg.com/), [CGBookCase](https://www.cgbookcase.com/), [PolyHeaven](https://polyhaven.com/),
[ShareTextures](https://www.sharetextures.com/), and [TextureCan](https://www.texturecan.com/).
The dataset also includes a limited set of materials from the artist [Julio Sillet](https://juliosillet.gumroad.com/), distributed under the CC-BY license.
We collected over 6,000 materials, which we meticulously filtered to keep only tileable, 4K materials.
This high resolution allows us to extract many different crops from each sample at different scales for augmentation.
Additionally, we discard blurry or low-quality materials (by visual inspection).
The resulting dataset consists of 3,736 unique materials, which we augment by blending semantically compatible materials (e.g., snow over ground).
In total, our dataset contains 4,069 unique 4K materials.
### ✒️ Annotations
The dataset is composed of material maps (*Basecolor*, *Diffuse*, *Normal*, *Height*, *Roughness*, *Metallic*, *Specular* and, when useful, *Opacity*),
along with associated renderings under varying environmental illuminations and multi-scale crops.
We adopt the OpenGL standard for the Normal map (Y-axis pointing upward).
The Height map is given in a 16-bit single channel format for higher precision.
In addition to these maps, the dataset includes other annotations providing context to each material:
the capture method (photogrammetry, procedural generation, or approximation);
list of descriptive tags; source name (website); source link;
licensing; and a timestamp for eventual future versioning.
For a subset of materials, when the information is available, we also provide the author name (387), text description (572) and a physical size,
presented as the length of the edge in centimeters (358).
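As an illustration of these conventions, the sketch below (assuming the maps are loaded as PIL images, as in the usage example further down; this is not part of the dataset tooling) flips the normal map's green channel to convert from the OpenGL to the DirectX convention and rescales the 16-bit height map to floats in [0, 1]:
```python
import numpy as np
from PIL import Image

def opengl_to_directx_normal(normal: Image.Image) -> Image.Image:
    # OpenGL and DirectX normal maps differ only in the direction of the Y (green) axis;
    # assuming an 8-bit RGB normal map, flipping the green channel converts between them
    arr = np.array(normal.convert("RGB"))
    arr[..., 1] = 255 - arr[..., 1]
    return Image.fromarray(arr)

def height_to_float(height: Image.Image) -> np.ndarray:
    # the height map is stored as a 16-bit single-channel image; rescale it to [0, 1]
    return np.array(height, dtype=np.float32) / 65535.0
```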
## 🧑💻 Usage
MatSynth is accessible through the `datasets` Python library.
Below is a usage example:
```python
import torchvision.transforms.functional as TF
from datasets import load_dataset
from torch.utils.data import DataLoader
# image processing function
def process_img(x):
x = TF.resize(x, (1024, 1024))
x = TF.to_tensor(x)
return x
# item processing function
def process_batch(examples):
examples["basecolor"] = [process_img(x) for x in examples["basecolor"]]
return examples
# load the dataset in streaming mode
ds = load_dataset(
"gvecchio/MatSynth",
streaming = True,
)
# remove unwanted columns
ds = ds.remove_columns(["diffuse", "specular", "displacement", "opacity", "blend_mask"])
# or keep only specified columns
ds = ds.select_columns(["metadata", "basecolor"])
# shuffle data
ds = ds.shuffle(buffer_size=100)
# filter data matching a specific criteria, e.g.: only CC0 materials
ds = ds.filter(lambda x: x["metadata"]["license"] == "CC0")
# filter out data from Deschaintre et al. 2018
ds = ds.filter(lambda x: x["metadata"]["source"] != "deschaintre_2020")
# Set up processing
ds = ds.map(process_batch, batched=True, batch_size=8)
# set format for usage in torch
ds = ds.with_format("torch")
# iterate over the train split (ds is a dict of splits, so index into it first)
for x in ds["train"]:
print(x)
```
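The `DataLoader` imported above is not used in the snippet. A minimal sketch of batching the streamed, torch-formatted split with PyTorch (assuming only tensor columns such as `basecolor` are kept for collation) could look like this:
```python
from torch.utils.data import DataLoader

# keep only the tensor column and wrap the streaming split in a PyTorch DataLoader
train_ds = ds["train"].select_columns(["basecolor"])
loader = DataLoader(train_ds, batch_size=8)

for batch in loader:
    print(batch["basecolor"].shape)  # e.g. torch.Size([8, 3, 1024, 1024])
    break
```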
⚠️ **Note**: Streaming can be slow. We strongly suggest caching the data locally.
## 📜 Citation
```
@inproceedings{vecchio2023matsynth,
title={MatSynth: A Modern PBR Materials Dataset},
author={Vecchio, Giuseppe and Deschaintre, Valentin},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2024}
}
```
If you use the data from Deschaintre et al. contained in this dataset, please also cite:
```
@article{deschaintre2018single,
title={Single-image svbrdf capture with a rendering-aware deep network},
author={Deschaintre, Valentin and Aittala, Miika and Durand, Fredo and Drettakis, George and Bousseau, Adrien},
journal={ACM Transactions on Graphics (ToG)},
volume={37},
number={4},
pages={1--15},
year={2018},
publisher={ACM New York, NY, USA}
}
``` |
jkot/merged_preprocessed_parliament_commonvoice | jkot | "2023-05-01T13:35:28Z" | 15,272 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-05-01T10:37:03Z" | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 210499135424
num_examples: 219101
- name: test
num_bytes: 11099630080
num_examples: 11555
download_size: 65027813279
dataset_size: 221598765504
---
# Dataset Card for "merged_preprocessed_parliament_commonvoice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hails/bigbench | hails | "2023-11-17T16:05:10Z" | 15,215 | 4 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-10-03T19:55:51Z" | ---
dataset_info:
- config_name: abstract_narrative_understanding_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 6560069
num_examples: 3000
- name: train
num_bytes: 5249819
num_examples: 2400
- name: validation
num_bytes: 1310250
num_examples: 600
download_size: 0
dataset_size: 13120138
- config_name: anachronisms_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 48826
num_examples: 230
- name: train
num_bytes: 39116
num_examples: 184
- name: validation
num_bytes: 9710
num_examples: 46
download_size: 0
dataset_size: 97652
- config_name: analogical_similarity_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1373815
num_examples: 323
- name: train
num_bytes: 1101512
num_examples: 259
- name: validation
num_bytes: 272303
num_examples: 64
download_size: 0
dataset_size: 2747630
- config_name: analytic_entailment_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 17316
num_examples: 70
- name: train
num_bytes: 13368
num_examples: 54
- name: validation
num_bytes: 3948
num_examples: 16
download_size: 0
dataset_size: 34632
- config_name: arithmetic_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3833272
num_examples: 15023
- name: train
num_bytes: 3066775
num_examples: 12019
- name: validation
num_bytes: 766497
num_examples: 3004
download_size: 0
dataset_size: 7666544
- config_name: ascii_word_recognition_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 4984662
num_examples: 5000
- name: train
num_bytes: 3997273
num_examples: 4000
- name: validation
num_bytes: 987389
num_examples: 1000
download_size: 0
dataset_size: 9969324
- config_name: authorship_verification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 14118592
num_examples: 880
- name: train
num_bytes: 11288481
num_examples: 704
- name: validation
num_bytes: 2830111
num_examples: 176
download_size: 0
dataset_size: 28237184
- config_name: auto_categorization_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 40549
num_examples: 328
- name: train
num_bytes: 32992
num_examples: 263
- name: validation
num_bytes: 7557
num_examples: 65
download_size: 0
dataset_size: 81098
- config_name: auto_debugging_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 5112
num_examples: 34
- name: train
num_bytes: 2651
num_examples: 18
- name: validation
num_bytes: 2461
num_examples: 16
download_size: 0
dataset_size: 10224
- config_name: bbq_lite_json_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 6890493
num_examples: 16076
- name: train
num_bytes: 5508584
num_examples: 12866
- name: validation
num_bytes: 1381909
num_examples: 3210
download_size: 0
dataset_size: 13780986
- config_name: bridging_anaphora_resolution_barqa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1971015
num_examples: 648
- name: train
num_bytes: 1537264
num_examples: 519
- name: validation
num_bytes: 433751
num_examples: 129
download_size: 0
dataset_size: 3942030
- config_name: causal_judgment_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 204878
num_examples: 190
- name: train
num_bytes: 164940
num_examples: 152
- name: validation
num_bytes: 39938
num_examples: 38
download_size: 0
dataset_size: 409756
- config_name: cause_and_effect_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 49314
num_examples: 153
- name: train
num_bytes: 39620
num_examples: 123
- name: validation
num_bytes: 9694
num_examples: 30
download_size: 0
dataset_size: 98628
- config_name: checkmate_in_one_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3123256
num_examples: 3498
- name: train
num_bytes: 2502314
num_examples: 2799
- name: validation
num_bytes: 620942
num_examples: 699
download_size: 0
dataset_size: 6246512
- config_name: chess_state_tracking_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3269932
num_examples: 6000
- name: train
num_bytes: 2616294
num_examples: 4800
- name: validation
num_bytes: 653638
num_examples: 1200
download_size: 0
dataset_size: 6539864
- config_name: chinese_remainder_theorem_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 153222
num_examples: 500
- name: train
num_bytes: 122601
num_examples: 400
- name: validation
num_bytes: 30621
num_examples: 100
download_size: 0
dataset_size: 306444
- config_name: cifar10_classification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 111022200
num_examples: 20000
- name: train
num_bytes: 88782724
num_examples: 16000
- name: validation
num_bytes: 22239476
num_examples: 4000
download_size: 0
dataset_size: 222044400
- config_name: code_line_description_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 33670
num_examples: 60
- name: train
num_bytes: 25530
num_examples: 44
- name: validation
num_bytes: 8140
num_examples: 16
download_size: 0
dataset_size: 67340
- config_name: codenames_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 25195
num_examples: 85
- name: train
num_bytes: 19964
num_examples: 68
- name: validation
num_bytes: 5231
num_examples: 17
download_size: 0
dataset_size: 50390
- config_name: color_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1633263
num_examples: 4000
- name: train
num_bytes: 1306663
num_examples: 3200
- name: validation
num_bytes: 326600
num_examples: 800
download_size: 0
dataset_size: 3266526
- config_name: common_morpheme_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 12388
num_examples: 50
- name: train
num_bytes: 8444
num_examples: 34
- name: validation
num_bytes: 3944
num_examples: 16
download_size: 0
dataset_size: 24776
- config_name: conceptual_combinations_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 58859
num_examples: 103
- name: train
num_bytes: 48010
num_examples: 84
- name: validation
num_bytes: 10849
num_examples: 19
download_size: 0
dataset_size: 117718
- config_name: conlang_translation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 215190
num_examples: 164
- name: train
num_bytes: 173024
num_examples: 132
- name: validation
num_bytes: 42166
num_examples: 32
download_size: 0
dataset_size: 430380
- config_name: contextual_parametric_knowledge_conflicts_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 14587554
num_examples: 17528
- name: train
num_bytes: 11666236
num_examples: 14023
- name: validation
num_bytes: 2921318
num_examples: 3505
download_size: 0
dataset_size: 29175108
- config_name: crash_blossom_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 12194
num_examples: 38
- name: train
num_bytes: 6999
num_examples: 22
- name: validation
num_bytes: 5195
num_examples: 16
download_size: 0
dataset_size: 24388
- config_name: crass_ai_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 22870
num_examples: 44
- name: train
num_bytes: 14130
num_examples: 28
- name: validation
num_bytes: 8740
num_examples: 16
download_size: 0
dataset_size: 45740
- config_name: cryobiology_spanish_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 38674
num_examples: 146
- name: train
num_bytes: 31129
num_examples: 117
- name: validation
num_bytes: 7545
num_examples: 29
download_size: 0
dataset_size: 77348
- config_name: cryptonite_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2844402
num_examples: 26157
- name: train
num_bytes: 2275724
num_examples: 20926
- name: validation
num_bytes: 568678
num_examples: 5231
download_size: 0
dataset_size: 5688804
- config_name: cs_algorithms_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 272435
num_examples: 1320
- name: train
num_bytes: 218192
num_examples: 1056
- name: validation
num_bytes: 54243
num_examples: 264
download_size: 0
dataset_size: 544870
- config_name: dark_humor_detection_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 26556
num_examples: 80
- name: train
num_bytes: 21267
num_examples: 64
- name: validation
num_bytes: 5289
num_examples: 16
download_size: 0
dataset_size: 53112
- config_name: date_understanding_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 94908
num_examples: 369
- name: train
num_bytes: 76165
num_examples: 296
- name: validation
num_bytes: 18743
num_examples: 73
download_size: 0
dataset_size: 189816
- config_name: disambiguation_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 122471
num_examples: 258
- name: train
num_bytes: 98687
num_examples: 207
- name: validation
num_bytes: 23784
num_examples: 51
download_size: 0
dataset_size: 244942
- config_name: discourse_marker_prediction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2090684
num_examples: 857
- name: train
num_bytes: 1666052
num_examples: 686
- name: validation
num_bytes: 424632
num_examples: 171
download_size: 0
dataset_size: 4181368
- config_name: disfl_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 7964775
num_examples: 8000
- name: train
num_bytes: 6376511
num_examples: 6400
- name: validation
num_bytes: 1588264
num_examples: 1600
download_size: 0
dataset_size: 15929550
- config_name: dyck_languages_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1227916
num_examples: 1000
- name: train
num_bytes: 982680
num_examples: 800
- name: validation
num_bytes: 245236
num_examples: 200
download_size: 0
dataset_size: 2455832
- config_name: elementary_math_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 13442550
num_examples: 38160
- name: train
num_bytes: 10766969
num_examples: 30531
- name: validation
num_bytes: 2675581
num_examples: 7629
download_size: 0
dataset_size: 26885100
- config_name: emoji_movie_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 33667
num_examples: 100
- name: train
num_bytes: 26987
num_examples: 80
- name: validation
num_bytes: 6680
num_examples: 20
download_size: 0
dataset_size: 67334
- config_name: emojis_emotion_prediction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 47983
num_examples: 131
- name: train
num_bytes: 38458
num_examples: 105
- name: validation
num_bytes: 9525
num_examples: 26
download_size: 0
dataset_size: 95966
- config_name: empirical_judgments_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 47499
num_examples: 99
- name: train
num_bytes: 38346
num_examples: 80
- name: validation
num_bytes: 9153
num_examples: 19
download_size: 0
dataset_size: 94998
- config_name: english_proverbs_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 22530
num_examples: 34
- name: train
num_bytes: 12066
num_examples: 18
- name: validation
num_bytes: 10464
num_examples: 16
download_size: 0
dataset_size: 45060
- config_name: english_russian_proverbs_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 59900
num_examples: 80
- name: train
num_bytes: 48051
num_examples: 64
- name: validation
num_bytes: 11849
num_examples: 16
download_size: 0
dataset_size: 119800
- config_name: entailed_polarity_hindi_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 57052
num_examples: 138
- name: train
num_bytes: 45829
num_examples: 111
- name: validation
num_bytes: 11223
num_examples: 27
download_size: 0
dataset_size: 114104
- config_name: entailed_polarity_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 25421
num_examples: 148
- name: train
num_bytes: 20350
num_examples: 119
- name: validation
num_bytes: 5071
num_examples: 29
download_size: 0
dataset_size: 50842
- config_name: epistemic_reasoning_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 887158
num_examples: 2000
- name: train
num_bytes: 710107
num_examples: 1600
- name: validation
num_bytes: 177051
num_examples: 400
download_size: 0
dataset_size: 1774316
- config_name: evaluating_information_essentiality_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 77488
num_examples: 68
- name: train
num_bytes: 59596
num_examples: 52
- name: validation
num_bytes: 17892
num_examples: 16
download_size: 0
dataset_size: 154976
- config_name: fact_checker_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1337384
num_examples: 7154
- name: train
num_bytes: 1070750
num_examples: 5724
- name: validation
num_bytes: 266634
num_examples: 1430
download_size: 0
dataset_size: 2674768
- config_name: fantasy_reasoning_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 75886
num_examples: 201
- name: train
num_bytes: 61398
num_examples: 161
- name: validation
num_bytes: 14488
num_examples: 40
download_size: 0
dataset_size: 151772
- config_name: few_shot_nlg_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 75937
num_examples: 153
- name: train
num_bytes: 61862
num_examples: 123
- name: validation
num_bytes: 14075
num_examples: 30
download_size: 0
dataset_size: 151874
- config_name: figure_of_speech_detection_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 21717
num_examples: 59
- name: train
num_bytes: 15962
num_examples: 43
- name: validation
num_bytes: 5755
num_examples: 16
download_size: 0
dataset_size: 43434
- config_name: formal_fallacies_syllogisms_negation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 8314653
num_examples: 14200
- name: train
num_bytes: 6652955
num_examples: 11360
- name: validation
num_bytes: 1661698
num_examples: 2840
download_size: 0
dataset_size: 16629306
- config_name: gem_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 36065281
num_examples: 14802
- name: train
num_bytes: 28819497
num_examples: 11845
- name: validation
num_bytes: 7245784
num_examples: 2957
download_size: 0
dataset_size: 72130562
- config_name: gender_inclusive_sentences_german_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 126881
num_examples: 200
- name: train
num_bytes: 100628
num_examples: 160
- name: validation
num_bytes: 26253
num_examples: 40
download_size: 0
dataset_size: 253762
- config_name: general_knowledge_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 21828
num_examples: 70
- name: train
num_bytes: 16818
num_examples: 54
- name: validation
num_bytes: 5010
num_examples: 16
download_size: 0
dataset_size: 43656
- config_name: geometric_shapes_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 180094
num_examples: 359
- name: train
num_bytes: 144602
num_examples: 288
- name: validation
num_bytes: 35492
num_examples: 71
download_size: 0
dataset_size: 360188
- config_name: goal_step_wikihow_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3567615
num_examples: 7053
- name: train
num_bytes: 2853871
num_examples: 5643
- name: validation
num_bytes: 713744
num_examples: 1410
download_size: 0
dataset_size: 7135230
- config_name: gre_reading_comprehension_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 94273
num_examples: 31
- name: train
num_bytes: 44458
num_examples: 15
- name: validation
num_bytes: 49815
num_examples: 16
download_size: 0
dataset_size: 188546
- config_name: hhh_alignment_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 272898
num_examples: 221
- name: train
num_bytes: 212488
num_examples: 179
- name: validation
num_bytes: 60410
num_examples: 42
download_size: 0
dataset_size: 545796
- config_name: hindi_question_answering_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 15154954
num_examples: 6610
- name: train
num_bytes: 11983837
num_examples: 5288
- name: validation
num_bytes: 3171117
num_examples: 1322
download_size: 0
dataset_size: 30309908
- config_name: hindu_knowledge_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 44092
num_examples: 175
- name: train
num_bytes: 35392
num_examples: 140
- name: validation
num_bytes: 8700
num_examples: 35
download_size: 0
dataset_size: 88184
- config_name: hinglish_toxicity_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 60613
num_examples: 200
- name: train
num_bytes: 49997
num_examples: 160
- name: validation
num_bytes: 10616
num_examples: 40
download_size: 0
dataset_size: 121226
- config_name: human_organs_senses_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 7944
num_examples: 42
- name: train
num_bytes: 4873
num_examples: 26
- name: validation
num_bytes: 3071
num_examples: 16
download_size: 0
dataset_size: 15888
- config_name: hyperbaton_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 9383986
num_examples: 50000
- name: train
num_bytes: 7509334
num_examples: 40000
- name: validation
num_bytes: 1874652
num_examples: 10000
download_size: 0
dataset_size: 18767972
- config_name: identify_math_theorems_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 104841
num_examples: 53
- name: train
num_bytes: 70295
num_examples: 37
- name: validation
num_bytes: 34546
num_examples: 16
download_size: 0
dataset_size: 209682
- config_name: identify_odd_metaphor_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 27602
num_examples: 47
- name: train
num_bytes: 18138
num_examples: 31
- name: validation
num_bytes: 9464
num_examples: 16
download_size: 0
dataset_size: 55204
- config_name: implicatures_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 91683
num_examples: 492
- name: train
num_bytes: 73416
num_examples: 394
- name: validation
num_bytes: 18267
num_examples: 98
download_size: 0
dataset_size: 183366
- config_name: implicit_relations_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 79710
num_examples: 85
- name: train
num_bytes: 64346
num_examples: 68
- name: validation
num_bytes: 15364
num_examples: 17
download_size: 0
dataset_size: 159420
- config_name: intent_recognition_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 322371
num_examples: 693
- name: train
num_bytes: 257864
num_examples: 555
- name: validation
num_bytes: 64507
num_examples: 138
download_size: 0
dataset_size: 644742
- config_name: international_phonetic_alphabet_nli_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 79320
num_examples: 126
- name: train
num_bytes: 63288
num_examples: 101
- name: validation
num_bytes: 16032
num_examples: 25
download_size: 0
dataset_size: 158640
- config_name: international_phonetic_alphabet_transliterate_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 275938
num_examples: 1003
- name: train
num_bytes: 220784
num_examples: 803
- name: validation
num_bytes: 55154
num_examples: 200
download_size: 0
dataset_size: 551876
- config_name: intersect_geometry_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 211674752
num_examples: 249999
- name: train
num_bytes: 169332898
num_examples: 200000
- name: validation
num_bytes: 42341854
num_examples: 49999
download_size: 0
dataset_size: 423349504
- config_name: irony_identification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 28178
num_examples: 99
- name: train
num_bytes: 22918
num_examples: 80
- name: validation
num_bytes: 5260
num_examples: 19
download_size: 0
dataset_size: 56356
- config_name: kanji_ascii_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 366946
num_examples: 1092
- name: train
num_bytes: 293933
num_examples: 875
- name: validation
num_bytes: 73013
num_examples: 217
download_size: 0
dataset_size: 733892
- config_name: kannada_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 140638
num_examples: 316
- name: train
num_bytes: 111865
num_examples: 253
- name: validation
num_bytes: 28773
num_examples: 63
download_size: 0
dataset_size: 281276
- config_name: key_value_maps_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 105136
num_examples: 101
- name: train
num_bytes: 84317
num_examples: 80
- name: validation
num_bytes: 20819
num_examples: 21
download_size: 0
dataset_size: 210272
- config_name: known_unknowns_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 7960
num_examples: 46
- name: train
num_bytes: 5130
num_examples: 30
- name: validation
num_bytes: 2830
num_examples: 16
download_size: 0
dataset_size: 15920
- config_name: language_games_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 979619
num_examples: 2128
- name: train
num_bytes: 783111
num_examples: 1704
- name: validation
num_bytes: 196508
num_examples: 424
download_size: 0
dataset_size: 1959238
- config_name: language_identification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 7376223
num_examples: 10000
- name: train
num_bytes: 5908808
num_examples: 8000
- name: validation
num_bytes: 1467415
num_examples: 2000
download_size: 0
dataset_size: 14752446
- config_name: linguistic_mappings_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1325186
num_examples: 15527
- name: train
num_bytes: 1060088
num_examples: 12426
- name: validation
num_bytes: 265098
num_examples: 3101
download_size: 0
dataset_size: 2650372
- config_name: linguistics_puzzles_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1746024
num_examples: 2000
- name: train
num_bytes: 1398113
num_examples: 1600
- name: validation
num_bytes: 347911
num_examples: 400
download_size: 0
dataset_size: 3492048
- config_name: list_functions_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2678136
num_examples: 10750
- name: train
num_bytes: 2161065
num_examples: 8700
- name: validation
num_bytes: 517071
num_examples: 2050
download_size: 0
dataset_size: 5356272
- config_name: logic_grid_puzzle_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1456218
num_examples: 1000
- name: train
num_bytes: 1160137
num_examples: 800
- name: validation
num_bytes: 296081
num_examples: 200
download_size: 0
dataset_size: 2912436
- config_name: logical_args_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 43582
num_examples: 32
- name: train
num_bytes: 21072
num_examples: 16
- name: validation
num_bytes: 22510
num_examples: 16
download_size: 0
dataset_size: 87164
- config_name: logical_deduction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1056716
num_examples: 1500
- name: train
num_bytes: 841788
num_examples: 1200
- name: validation
num_bytes: 214928
num_examples: 300
download_size: 0
dataset_size: 2113432
- config_name: logical_fallacy_detection_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 720286
num_examples: 2800
- name: train
num_bytes: 576295
num_examples: 2240
- name: validation
num_bytes: 143991
num_examples: 560
download_size: 0
dataset_size: 1440572
- config_name: logical_sequence_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 22722
num_examples: 39
- name: train
num_bytes: 12648
num_examples: 23
- name: validation
num_bytes: 10074
num_examples: 16
download_size: 0
dataset_size: 45444
- config_name: mathematical_induction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 19018
num_examples: 69
- name: train
num_bytes: 14983
num_examples: 53
- name: validation
num_bytes: 4035
num_examples: 16
download_size: 0
dataset_size: 38036
- config_name: matrixshapes_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1130574
num_examples: 4462
- name: train
num_bytes: 906061
num_examples: 3570
- name: validation
num_bytes: 224513
num_examples: 892
download_size: 0
dataset_size: 2261148
- config_name: metaphor_boolean_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 213848
num_examples: 680
- name: train
num_bytes: 170765
num_examples: 544
- name: validation
num_bytes: 43083
num_examples: 136
download_size: 0
dataset_size: 427696
- config_name: metaphor_understanding_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 200862
num_examples: 234
- name: train
num_bytes: 162101
num_examples: 188
- name: validation
num_bytes: 38761
num_examples: 46
download_size: 0
dataset_size: 401724
- config_name: minute_mysteries_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3245190
num_examples: 477
- name: train
num_bytes: 2623703
num_examples: 383
- name: validation
num_bytes: 621487
num_examples: 94
download_size: 0
dataset_size: 6490380
- config_name: misconceptions_russian_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 16991
num_examples: 49
- name: train
num_bytes: 10970
num_examples: 33
- name: validation
num_bytes: 6021
num_examples: 16
download_size: 0
dataset_size: 33982
- config_name: misconceptions_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 45816
num_examples: 219
- name: train
num_bytes: 37246
num_examples: 176
- name: validation
num_bytes: 8570
num_examples: 43
download_size: 0
dataset_size: 91632
- config_name: mnist_ascii_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 61739808
num_examples: 69984
- name: train
num_bytes: 49419928
num_examples: 55988
- name: validation
num_bytes: 12319880
num_examples: 13996
download_size: 0
dataset_size: 123479616
- config_name: modified_arithmetic_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1220993
num_examples: 6000
- name: train
num_bytes: 976859
num_examples: 4800
- name: validation
num_bytes: 244134
num_examples: 1200
download_size: 0
dataset_size: 2441986
- config_name: moral_permissibility_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 162068
num_examples: 342
- name: train
num_bytes: 128790
num_examples: 274
- name: validation
num_bytes: 33278
num_examples: 68
download_size: 0
dataset_size: 324136
- config_name: movie_dialog_same_or_different_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 28645997
num_examples: 50000
- name: train
num_bytes: 22889061
num_examples: 40000
- name: validation
num_bytes: 5756936
num_examples: 10000
download_size: 0
dataset_size: 57291994
- config_name: movie_recommendation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 173557
num_examples: 500
- name: train
num_bytes: 138936
num_examples: 400
- name: validation
num_bytes: 34621
num_examples: 100
download_size: 0
dataset_size: 347114
- config_name: mult_data_wrangling_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 625422
num_examples: 7854
- name: train
num_bytes: 507838
num_examples: 6380
- name: validation
num_bytes: 117584
num_examples: 1474
download_size: 0
dataset_size: 1250844
- config_name: multiemo_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 650173925
num_examples: 1437281
- name: train
num_bytes: 520172185
num_examples: 1149873
- name: validation
num_bytes: 130001740
num_examples: 287408
download_size: 0
dataset_size: 1300347850
- config_name: natural_instructions_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 355938370
num_examples: 193250
- name: train
num_bytes: 284920096
num_examples: 154615
- name: validation
num_bytes: 71018274
num_examples: 38635
download_size: 0
dataset_size: 711876740
- config_name: navigate_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 225813
num_examples: 1000
- name: train
num_bytes: 180958
num_examples: 800
- name: validation
num_bytes: 44855
num_examples: 200
download_size: 83744
dataset_size: 451626
- config_name: nonsense_words_grammar_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 11102
num_examples: 50
- name: train
num_bytes: 7582
num_examples: 34
- name: validation
num_bytes: 3520
num_examples: 16
download_size: 24107
dataset_size: 22204
- config_name: novel_concepts_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 16065
num_examples: 32
- name: train
num_bytes: 8128
num_examples: 16
- name: validation
num_bytes: 7937
num_examples: 16
download_size: 25919
dataset_size: 32130
- config_name: object_counting_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 149555
num_examples: 1000
- name: train
num_bytes: 119609
num_examples: 800
- name: validation
num_bytes: 29946
num_examples: 200
download_size: 91852
dataset_size: 299110
- config_name: odd_one_out_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 13843
num_examples: 86
- name: train
num_bytes: 11217
num_examples: 69
- name: validation
num_bytes: 2626
num_examples: 17
download_size: 25796
dataset_size: 27686
- config_name: operators_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 32435
num_examples: 210
- name: train
num_bytes: 25937
num_examples: 168
- name: validation
num_bytes: 6498
num_examples: 42
download_size: 24728
dataset_size: 64870
- config_name: paragraph_segmentation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 56846507
num_examples: 9000
- name: train
num_bytes: 45674320
num_examples: 7200
- name: validation
num_bytes: 11172187
num_examples: 1800
download_size: 61123049
dataset_size: 113693014
- config_name: parsinlu_qa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 456189
num_examples: 1050
- name: train
num_bytes: 366577
num_examples: 840
- name: validation
num_bytes: 89612
num_examples: 210
download_size: 465963
dataset_size: 912378
- config_name: parsinlu_reading_comprehension_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 573798
num_examples: 518
- name: train
num_bytes: 455828
num_examples: 415
- name: validation
num_bytes: 117970
num_examples: 103
download_size: 572992
dataset_size: 1147596
- config_name: penguins_in_a_table_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 75985
num_examples: 149
- name: train
num_bytes: 61321
num_examples: 120
- name: validation
num_bytes: 14664
num_examples: 29
download_size: 32039
dataset_size: 151970
- config_name: periodic_elements_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 285204
num_examples: 654
- name: train
num_bytes: 229481
num_examples: 524
- name: validation
num_bytes: 55723
num_examples: 130
download_size: 41084
dataset_size: 570408
- config_name: persian_idioms_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 28592
num_examples: 66
- name: train
num_bytes: 21684
num_examples: 50
- name: validation
num_bytes: 6908
num_examples: 16
download_size: 34341
dataset_size: 57184
- config_name: phrase_relatedness_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 30190
num_examples: 100
- name: train
num_bytes: 23773
num_examples: 80
- name: validation
num_bytes: 6417
num_examples: 20
download_size: 40334
dataset_size: 60380
- config_name: physical_intuition_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 23734
num_examples: 81
- name: train
num_bytes: 19307
num_examples: 65
- name: validation
num_bytes: 4427
num_examples: 16
download_size: 28462
dataset_size: 47468
- config_name: physics_questions_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 18372
num_examples: 54
- name: train
num_bytes: 13402
num_examples: 38
- name: validation
num_bytes: 4970
num_examples: 16
download_size: 35187
dataset_size: 36744
- config_name: physics_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 120239
num_examples: 229
- name: train
num_bytes: 96122
num_examples: 184
- name: validation
num_bytes: 24117
num_examples: 45
download_size: 69885
dataset_size: 240478
- config_name: play_dialog_same_or_different_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 3142468
num_examples: 3264
- name: train
num_bytes: 2516052
num_examples: 2612
- name: validation
num_bytes: 626416
num_examples: 652
download_size: 1710264
dataset_size: 6284936
- config_name: polish_sequence_labeling_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 18081112
num_examples: 12812
- name: train
num_bytes: 14470720
num_examples: 10250
- name: validation
num_bytes: 3610392
num_examples: 2562
download_size: 5242934
dataset_size: 36162224
- config_name: presuppositions_as_nli_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 502522
num_examples: 735
- name: train
num_bytes: 400761
num_examples: 588
- name: validation
num_bytes: 101761
num_examples: 147
download_size: 240065
dataset_size: 1005044
- config_name: qa_wikidata_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1419042
num_examples: 20321
- name: train
num_bytes: 1134918
num_examples: 16257
- name: validation
num_bytes: 284124
num_examples: 4064
download_size: 1181835
dataset_size: 2838084
- config_name: question_selection_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2487181
num_examples: 1582
- name: train
num_bytes: 1990094
num_examples: 1266
- name: validation
num_bytes: 497087
num_examples: 316
download_size: 1804283
dataset_size: 4974362
- config_name: real_or_fake_text_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 53663318
num_examples: 15088
- name: train
num_bytes: 42879846
num_examples: 12072
- name: validation
num_bytes: 10783472
num_examples: 3016
download_size: 47399045
dataset_size: 107326636
- config_name: reasoning_about_colored_objects_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 907474
num_examples: 2000
- name: train
num_bytes: 729609
num_examples: 1600
- name: validation
num_bytes: 177865
num_examples: 400
download_size: 273263
dataset_size: 1814948
- config_name: repeat_copy_logic_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 6678
num_examples: 32
- name: train
num_bytes: 3327
num_examples: 16
- name: validation
num_bytes: 3351
num_examples: 16
download_size: 18315
dataset_size: 13356
- config_name: rephrase_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 34222
num_examples: 78
- name: train
num_bytes: 27360
num_examples: 62
- name: validation
num_bytes: 6862
num_examples: 16
download_size: 41102
dataset_size: 68444
- config_name: riddle_sense_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 15507
num_examples: 49
- name: train
num_bytes: 10741
num_examples: 33
- name: validation
num_bytes: 4766
num_examples: 16
download_size: 32496
dataset_size: 31014
- config_name: ruin_names_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 144087
num_examples: 448
- name: train
num_bytes: 115171
num_examples: 359
- name: validation
num_bytes: 28916
num_examples: 89
download_size: 118193
dataset_size: 288174
- config_name: salient_translation_error_detection_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1141626
num_examples: 998
- name: train
num_bytes: 912819
num_examples: 799
- name: validation
num_bytes: 228807
num_examples: 199
download_size: 413634
dataset_size: 2283252
- config_name: scientific_press_release_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 13690
num_examples: 50
- name: train
num_bytes: 9254
num_examples: 34
- name: validation
num_bytes: 4436
num_examples: 16
download_size: 27293
dataset_size: 27380
- config_name: semantic_parsing_in_context_sparc_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1524852
num_examples: 1155
- name: train
num_bytes: 1248391
num_examples: 924
- name: validation
num_bytes: 276461
num_examples: 231
download_size: 440326
dataset_size: 3049704
- config_name: semantic_parsing_spider_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1265744
num_examples: 1034
- name: train
num_bytes: 973864
num_examples: 828
- name: validation
num_bytes: 291880
num_examples: 206
download_size: 358276
dataset_size: 2531488
- config_name: sentence_ambiguity_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 8168
num_examples: 60
- name: train
num_bytes: 5976
num_examples: 44
- name: validation
num_bytes: 2192
num_examples: 16
download_size: 18275
dataset_size: 16336
- config_name: similarities_abstraction_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 23416
num_examples: 76
- name: train
num_bytes: 18545
num_examples: 60
- name: validation
num_bytes: 4871
num_examples: 16
download_size: 31521
dataset_size: 46832
- config_name: simp_turing_concept_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1017646
num_examples: 6390
- name: train
num_bytes: 813220
num_examples: 5112
- name: validation
num_bytes: 204426
num_examples: 1278
download_size: 402574
dataset_size: 2035292
- config_name: simple_arithmetic_json_multiple_choice_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 924
num_examples: 8
- name: train
num_bytes: 0
num_examples: 0
- name: validation
num_bytes: 0
num_examples: 0
download_size: 7777
dataset_size: 924
- config_name: simple_arithmetic_json_subtasks_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1145
num_examples: 30
- name: train
num_bytes: 571
num_examples: 15
- name: validation
num_bytes: 574
num_examples: 15
download_size: 10460
dataset_size: 2290
- config_name: simple_arithmetic_json_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1145
num_examples: 30
- name: train
num_bytes: 540
num_examples: 14
- name: validation
num_bytes: 605
num_examples: 16
download_size: 10645
dataset_size: 2290
- config_name: simple_arithmetic_multiple_targets_json_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 414
num_examples: 10
- name: train
num_bytes: 0
num_examples: 0
- name: validation
num_bytes: 0
num_examples: 0
download_size: 7352
dataset_size: 414
- config_name: simple_ethical_questions_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 76518
num_examples: 115
- name: train
num_bytes: 60275
num_examples: 92
- name: validation
num_bytes: 16243
num_examples: 23
download_size: 81285
dataset_size: 153036
- config_name: simple_text_editing_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 27865
num_examples: 47
- name: train
num_bytes: 18469
num_examples: 31
- name: validation
num_bytes: 9396
num_examples: 16
download_size: 27100
dataset_size: 55730
- config_name: snarks_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 45717
num_examples: 181
- name: train
num_bytes: 36989
num_examples: 145
- name: validation
num_bytes: 8728
num_examples: 36
download_size: 45434
dataset_size: 91434
- config_name: social_iqa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 643162
num_examples: 1935
- name: train
num_bytes: 515686
num_examples: 1548
- name: validation
num_bytes: 127476
num_examples: 387
download_size: 684043
dataset_size: 1286324
- config_name: social_support_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 366705
num_examples: 897
- name: train
num_bytes: 294793
num_examples: 718
- name: validation
num_bytes: 71912
num_examples: 179
download_size: 288867
dataset_size: 733410
- config_name: sports_understanding_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 226654
num_examples: 986
- name: train
num_bytes: 181328
num_examples: 789
- name: validation
num_bytes: 45326
num_examples: 197
download_size: 82415
dataset_size: 453308
- config_name: strange_stories_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 120500
num_examples: 174
- name: train
num_bytes: 98055
num_examples: 140
- name: validation
num_bytes: 22445
num_examples: 34
download_size: 106428
dataset_size: 241000
- config_name: strategyqa_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 659967
num_examples: 2289
- name: train
num_bytes: 527670
num_examples: 1832
- name: validation
num_bytes: 132297
num_examples: 457
download_size: 814405
dataset_size: 1319934
- config_name: sufficient_information_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 9425
num_examples: 39
- name: train
num_bytes: 5594
num_examples: 23
- name: validation
num_bytes: 3831
num_examples: 16
download_size: 17766
dataset_size: 18850
- config_name: suicide_risk_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 37952
num_examples: 40
- name: train
num_bytes: 23067
num_examples: 24
- name: validation
num_bytes: 14885
num_examples: 16
download_size: 60518
dataset_size: 75904
- config_name: swahili_english_proverbs_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 90246
num_examples: 153
- name: train
num_bytes: 72467
num_examples: 123
- name: validation
num_bytes: 17779
num_examples: 30
download_size: 95186
dataset_size: 180492
- config_name: swedish_to_german_proverbs_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 35204
num_examples: 72
- name: train
num_bytes: 27266
num_examples: 56
- name: validation
num_bytes: 7938
num_examples: 16
download_size: 55102
dataset_size: 70408
- config_name: symbol_interpretation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1148958
num_examples: 990
- name: train
num_bytes: 927326
num_examples: 795
- name: validation
num_bytes: 221632
num_examples: 195
download_size: 320412
dataset_size: 2297916
- config_name: temporal_sequences_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 687086
num_examples: 1000
- name: train
num_bytes: 549808
num_examples: 800
- name: validation
num_bytes: 137278
num_examples: 200
download_size: 295316
dataset_size: 1374172
- config_name: tense_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 43882
num_examples: 286
- name: train
num_bytes: 35466
num_examples: 229
- name: validation
num_bytes: 8416
num_examples: 57
download_size: 51466
dataset_size: 87764
- config_name: timedial_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2763178
num_examples: 2550
- name: train
num_bytes: 2217190
num_examples: 2040
- name: validation
num_bytes: 545988
num_examples: 510
download_size: 2444115
dataset_size: 5526356
- config_name: topical_chat_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 30927758
num_examples: 22295
- name: train
num_bytes: 24827254
num_examples: 17836
- name: validation
num_bytes: 6100504
num_examples: 4459
download_size: 23505731
dataset_size: 61855516
- config_name: tracking_shuffled_objects_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 2775972
num_examples: 3750
- name: train
num_bytes: 2224037
num_examples: 3000
- name: validation
num_bytes: 551935
num_examples: 750
download_size: 738413
dataset_size: 5551944
- config_name: understanding_fables_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 227748
num_examples: 189
- name: train
num_bytes: 181000
num_examples: 152
- name: validation
num_bytes: 46748
num_examples: 37
download_size: 237036
dataset_size: 455496
- config_name: undo_permutation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 196118
num_examples: 300
- name: train
num_bytes: 158562
num_examples: 240
- name: validation
num_bytes: 37556
num_examples: 60
download_size: 137204
dataset_size: 392236
- config_name: unit_conversion_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 4028628
num_examples: 23936
- name: train
num_bytes: 3230357
num_examples: 19151
- name: validation
num_bytes: 798271
num_examples: 4785
download_size: 3208622
dataset_size: 8057256
- config_name: unit_interpretation_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 37363
num_examples: 100
- name: train
num_bytes: 29939
num_examples: 80
- name: validation
num_bytes: 7424
num_examples: 20
download_size: 34926
dataset_size: 74726
- config_name: unnatural_in_context_learning_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 4599760
num_examples: 73420
- name: train
num_bytes: 3679822
num_examples: 58736
- name: validation
num_bytes: 919938
num_examples: 14684
download_size: 3840657
dataset_size: 9199520
- config_name: vitaminc_fact_verification_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 32361818
num_examples: 54668
- name: train
num_bytes: 25889850
num_examples: 43735
- name: validation
num_bytes: 6471968
num_examples: 10933
download_size: 14264790
dataset_size: 64723636
- config_name: what_is_the_tao_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 13268
num_examples: 36
- name: train
num_bytes: 7435
num_examples: 20
- name: validation
num_bytes: 5833
num_examples: 16
download_size: 27585
dataset_size: 26536
- config_name: which_wiki_edit_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 6331683
num_examples: 571
- name: train
num_bytes: 5233870
num_examples: 457
- name: validation
num_bytes: 1097813
num_examples: 114
download_size: 3914574
dataset_size: 12663366
- config_name: winowhy_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 1002434
num_examples: 2862
- name: train
num_bytes: 800520
num_examples: 2290
- name: validation
num_bytes: 201914
num_examples: 572
download_size: 449218
dataset_size: 2004868
- config_name: word_sorting_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 491054
num_examples: 1900
- name: train
num_bytes: 392738
num_examples: 1520
- name: validation
num_bytes: 98316
num_examples: 380
download_size: 641536
dataset_size: 982108
- config_name: word_unscrambling_zero_shot
features:
- name: idx
dtype: int32
- name: inputs
dtype: string
- name: targets
sequence: string
- name: multiple_choice_targets
sequence: string
- name: multiple_choice_scores
sequence: int32
splits:
- name: default
num_bytes: 882364
num_examples: 8917
- name: train
num_bytes: 705755
num_examples: 7134
- name: validation
num_bytes: 176609
num_examples: 1783
download_size: 563799
dataset_size: 1764728
configs:
- config_name: abstract_narrative_understanding_zero_shot
data_files:
- split: default
path: abstract_narrative_understanding_zero_shot/default-*
- split: train
path: abstract_narrative_understanding_zero_shot/train-*
- split: validation
path: abstract_narrative_understanding_zero_shot/validation-*
- config_name: anachronisms_zero_shot
data_files:
- split: default
path: anachronisms_zero_shot/default-*
- split: train
path: anachronisms_zero_shot/train-*
- split: validation
path: anachronisms_zero_shot/validation-*
- config_name: analogical_similarity_zero_shot
data_files:
- split: default
path: analogical_similarity_zero_shot/default-*
- split: train
path: analogical_similarity_zero_shot/train-*
- split: validation
path: analogical_similarity_zero_shot/validation-*
- config_name: analytic_entailment_zero_shot
data_files:
- split: default
path: analytic_entailment_zero_shot/default-*
- split: train
path: analytic_entailment_zero_shot/train-*
- split: validation
path: analytic_entailment_zero_shot/validation-*
- config_name: arithmetic_zero_shot
data_files:
- split: default
path: arithmetic_zero_shot/default-*
- split: train
path: arithmetic_zero_shot/train-*
- split: validation
path: arithmetic_zero_shot/validation-*
- config_name: ascii_word_recognition_zero_shot
data_files:
- split: default
path: ascii_word_recognition_zero_shot/default-*
- split: train
path: ascii_word_recognition_zero_shot/train-*
- split: validation
path: ascii_word_recognition_zero_shot/validation-*
- config_name: authorship_verification_zero_shot
data_files:
- split: default
path: authorship_verification_zero_shot/default-*
- split: train
path: authorship_verification_zero_shot/train-*
- split: validation
path: authorship_verification_zero_shot/validation-*
- config_name: auto_categorization_zero_shot
data_files:
- split: default
path: auto_categorization_zero_shot/default-*
- split: train
path: auto_categorization_zero_shot/train-*
- split: validation
path: auto_categorization_zero_shot/validation-*
- config_name: auto_debugging_zero_shot
data_files:
- split: default
path: auto_debugging_zero_shot/default-*
- split: train
path: auto_debugging_zero_shot/train-*
- split: validation
path: auto_debugging_zero_shot/validation-*
- config_name: bbq_lite_json_zero_shot
data_files:
- split: default
path: bbq_lite_json_zero_shot/default-*
- split: train
path: bbq_lite_json_zero_shot/train-*
- split: validation
path: bbq_lite_json_zero_shot/validation-*
- config_name: bridging_anaphora_resolution_barqa_zero_shot
data_files:
- split: default
path: bridging_anaphora_resolution_barqa_zero_shot/default-*
- split: train
path: bridging_anaphora_resolution_barqa_zero_shot/train-*
- split: validation
path: bridging_anaphora_resolution_barqa_zero_shot/validation-*
- config_name: causal_judgment_zero_shot
data_files:
- split: default
path: causal_judgment_zero_shot/default-*
- split: train
path: causal_judgment_zero_shot/train-*
- split: validation
path: causal_judgment_zero_shot/validation-*
- config_name: cause_and_effect_zero_shot
data_files:
- split: default
path: cause_and_effect_zero_shot/default-*
- split: train
path: cause_and_effect_zero_shot/train-*
- split: validation
path: cause_and_effect_zero_shot/validation-*
- config_name: checkmate_in_one_zero_shot
data_files:
- split: default
path: checkmate_in_one_zero_shot/default-*
- split: train
path: checkmate_in_one_zero_shot/train-*
- split: validation
path: checkmate_in_one_zero_shot/validation-*
- config_name: chess_state_tracking_zero_shot
data_files:
- split: default
path: chess_state_tracking_zero_shot/default-*
- split: train
path: chess_state_tracking_zero_shot/train-*
- split: validation
path: chess_state_tracking_zero_shot/validation-*
- config_name: chinese_remainder_theorem_zero_shot
data_files:
- split: default
path: chinese_remainder_theorem_zero_shot/default-*
- split: train
path: chinese_remainder_theorem_zero_shot/train-*
- split: validation
path: chinese_remainder_theorem_zero_shot/validation-*
- config_name: cifar10_classification_zero_shot
data_files:
- split: default
path: cifar10_classification_zero_shot/default-*
- split: train
path: cifar10_classification_zero_shot/train-*
- split: validation
path: cifar10_classification_zero_shot/validation-*
- config_name: code_line_description_zero_shot
data_files:
- split: default
path: code_line_description_zero_shot/default-*
- split: train
path: code_line_description_zero_shot/train-*
- split: validation
path: code_line_description_zero_shot/validation-*
- config_name: codenames_zero_shot
data_files:
- split: default
path: codenames_zero_shot/default-*
- split: train
path: codenames_zero_shot/train-*
- split: validation
path: codenames_zero_shot/validation-*
- config_name: color_zero_shot
data_files:
- split: default
path: color_zero_shot/default-*
- split: train
path: color_zero_shot/train-*
- split: validation
path: color_zero_shot/validation-*
- config_name: common_morpheme_zero_shot
data_files:
- split: default
path: common_morpheme_zero_shot/default-*
- split: train
path: common_morpheme_zero_shot/train-*
- split: validation
path: common_morpheme_zero_shot/validation-*
- config_name: conceptual_combinations_zero_shot
data_files:
- split: default
path: conceptual_combinations_zero_shot/default-*
- split: train
path: conceptual_combinations_zero_shot/train-*
- split: validation
path: conceptual_combinations_zero_shot/validation-*
- config_name: conlang_translation_zero_shot
data_files:
- split: default
path: conlang_translation_zero_shot/default-*
- split: train
path: conlang_translation_zero_shot/train-*
- split: validation
path: conlang_translation_zero_shot/validation-*
- config_name: contextual_parametric_knowledge_conflicts_zero_shot
data_files:
- split: default
path: contextual_parametric_knowledge_conflicts_zero_shot/default-*
- split: train
path: contextual_parametric_knowledge_conflicts_zero_shot/train-*
- split: validation
path: contextual_parametric_knowledge_conflicts_zero_shot/validation-*
- config_name: crash_blossom_zero_shot
data_files:
- split: default
path: crash_blossom_zero_shot/default-*
- split: train
path: crash_blossom_zero_shot/train-*
- split: validation
path: crash_blossom_zero_shot/validation-*
- config_name: crass_ai_zero_shot
data_files:
- split: default
path: crass_ai_zero_shot/default-*
- split: train
path: crass_ai_zero_shot/train-*
- split: validation
path: crass_ai_zero_shot/validation-*
- config_name: cryobiology_spanish_zero_shot
data_files:
- split: default
path: cryobiology_spanish_zero_shot/default-*
- split: train
path: cryobiology_spanish_zero_shot/train-*
- split: validation
path: cryobiology_spanish_zero_shot/validation-*
- config_name: cryptonite_zero_shot
data_files:
- split: default
path: cryptonite_zero_shot/default-*
- split: train
path: cryptonite_zero_shot/train-*
- split: validation
path: cryptonite_zero_shot/validation-*
- config_name: cs_algorithms_zero_shot
data_files:
- split: default
path: cs_algorithms_zero_shot/default-*
- split: train
path: cs_algorithms_zero_shot/train-*
- split: validation
path: cs_algorithms_zero_shot/validation-*
- config_name: dark_humor_detection_zero_shot
data_files:
- split: default
path: dark_humor_detection_zero_shot/default-*
- split: train
path: dark_humor_detection_zero_shot/train-*
- split: validation
path: dark_humor_detection_zero_shot/validation-*
- config_name: date_understanding_zero_shot
data_files:
- split: default
path: date_understanding_zero_shot/default-*
- split: train
path: date_understanding_zero_shot/train-*
- split: validation
path: date_understanding_zero_shot/validation-*
- config_name: disambiguation_qa_zero_shot
data_files:
- split: default
path: disambiguation_qa_zero_shot/default-*
- split: train
path: disambiguation_qa_zero_shot/train-*
- split: validation
path: disambiguation_qa_zero_shot/validation-*
- config_name: discourse_marker_prediction_zero_shot
data_files:
- split: default
path: discourse_marker_prediction_zero_shot/default-*
- split: train
path: discourse_marker_prediction_zero_shot/train-*
- split: validation
path: discourse_marker_prediction_zero_shot/validation-*
- config_name: disfl_qa_zero_shot
data_files:
- split: default
path: disfl_qa_zero_shot/default-*
- split: train
path: disfl_qa_zero_shot/train-*
- split: validation
path: disfl_qa_zero_shot/validation-*
- config_name: dyck_languages_zero_shot
data_files:
- split: default
path: dyck_languages_zero_shot/default-*
- split: train
path: dyck_languages_zero_shot/train-*
- split: validation
path: dyck_languages_zero_shot/validation-*
- config_name: elementary_math_qa_zero_shot
data_files:
- split: default
path: elementary_math_qa_zero_shot/default-*
- split: train
path: elementary_math_qa_zero_shot/train-*
- split: validation
path: elementary_math_qa_zero_shot/validation-*
- config_name: emoji_movie_zero_shot
data_files:
- split: default
path: emoji_movie_zero_shot/default-*
- split: train
path: emoji_movie_zero_shot/train-*
- split: validation
path: emoji_movie_zero_shot/validation-*
- config_name: emojis_emotion_prediction_zero_shot
data_files:
- split: default
path: emojis_emotion_prediction_zero_shot/default-*
- split: train
path: emojis_emotion_prediction_zero_shot/train-*
- split: validation
path: emojis_emotion_prediction_zero_shot/validation-*
- config_name: empirical_judgments_zero_shot
data_files:
- split: default
path: empirical_judgments_zero_shot/default-*
- split: train
path: empirical_judgments_zero_shot/train-*
- split: validation
path: empirical_judgments_zero_shot/validation-*
- config_name: english_proverbs_zero_shot
data_files:
- split: default
path: english_proverbs_zero_shot/default-*
- split: train
path: english_proverbs_zero_shot/train-*
- split: validation
path: english_proverbs_zero_shot/validation-*
- config_name: english_russian_proverbs_zero_shot
data_files:
- split: default
path: english_russian_proverbs_zero_shot/default-*
- split: train
path: english_russian_proverbs_zero_shot/train-*
- split: validation
path: english_russian_proverbs_zero_shot/validation-*
- config_name: entailed_polarity_hindi_zero_shot
data_files:
- split: default
path: entailed_polarity_hindi_zero_shot/default-*
- split: train
path: entailed_polarity_hindi_zero_shot/train-*
- split: validation
path: entailed_polarity_hindi_zero_shot/validation-*
- config_name: entailed_polarity_zero_shot
data_files:
- split: default
path: entailed_polarity_zero_shot/default-*
- split: train
path: entailed_polarity_zero_shot/train-*
- split: validation
path: entailed_polarity_zero_shot/validation-*
- config_name: epistemic_reasoning_zero_shot
data_files:
- split: default
path: epistemic_reasoning_zero_shot/default-*
- split: train
path: epistemic_reasoning_zero_shot/train-*
- split: validation
path: epistemic_reasoning_zero_shot/validation-*
- config_name: evaluating_information_essentiality_zero_shot
data_files:
- split: default
path: evaluating_information_essentiality_zero_shot/default-*
- split: train
path: evaluating_information_essentiality_zero_shot/train-*
- split: validation
path: evaluating_information_essentiality_zero_shot/validation-*
- config_name: fact_checker_zero_shot
data_files:
- split: default
path: fact_checker_zero_shot/default-*
- split: train
path: fact_checker_zero_shot/train-*
- split: validation
path: fact_checker_zero_shot/validation-*
- config_name: fantasy_reasoning_zero_shot
data_files:
- split: default
path: fantasy_reasoning_zero_shot/default-*
- split: train
path: fantasy_reasoning_zero_shot/train-*
- split: validation
path: fantasy_reasoning_zero_shot/validation-*
- config_name: few_shot_nlg_zero_shot
data_files:
- split: default
path: few_shot_nlg_zero_shot/default-*
- split: train
path: few_shot_nlg_zero_shot/train-*
- split: validation
path: few_shot_nlg_zero_shot/validation-*
- config_name: figure_of_speech_detection_zero_shot
data_files:
- split: default
path: figure_of_speech_detection_zero_shot/default-*
- split: train
path: figure_of_speech_detection_zero_shot/train-*
- split: validation
path: figure_of_speech_detection_zero_shot/validation-*
- config_name: formal_fallacies_syllogisms_negation_zero_shot
data_files:
- split: default
path: formal_fallacies_syllogisms_negation_zero_shot/default-*
- split: train
path: formal_fallacies_syllogisms_negation_zero_shot/train-*
- split: validation
path: formal_fallacies_syllogisms_negation_zero_shot/validation-*
- config_name: gem_zero_shot
data_files:
- split: default
path: gem_zero_shot/default-*
- split: train
path: gem_zero_shot/train-*
- split: validation
path: gem_zero_shot/validation-*
- config_name: gender_inclusive_sentences_german_zero_shot
data_files:
- split: default
path: gender_inclusive_sentences_german_zero_shot/default-*
- split: train
path: gender_inclusive_sentences_german_zero_shot/train-*
- split: validation
path: gender_inclusive_sentences_german_zero_shot/validation-*
- config_name: general_knowledge_zero_shot
data_files:
- split: default
path: general_knowledge_zero_shot/default-*
- split: train
path: general_knowledge_zero_shot/train-*
- split: validation
path: general_knowledge_zero_shot/validation-*
- config_name: geometric_shapes_zero_shot
data_files:
- split: default
path: geometric_shapes_zero_shot/default-*
- split: train
path: geometric_shapes_zero_shot/train-*
- split: validation
path: geometric_shapes_zero_shot/validation-*
- config_name: goal_step_wikihow_zero_shot
data_files:
- split: default
path: goal_step_wikihow_zero_shot/default-*
- split: train
path: goal_step_wikihow_zero_shot/train-*
- split: validation
path: goal_step_wikihow_zero_shot/validation-*
- config_name: gre_reading_comprehension_zero_shot
data_files:
- split: default
path: gre_reading_comprehension_zero_shot/default-*
- split: train
path: gre_reading_comprehension_zero_shot/train-*
- split: validation
path: gre_reading_comprehension_zero_shot/validation-*
- config_name: hhh_alignment_zero_shot
data_files:
- split: default
path: hhh_alignment_zero_shot/default-*
- split: train
path: hhh_alignment_zero_shot/train-*
- split: validation
path: hhh_alignment_zero_shot/validation-*
- config_name: hindi_question_answering_zero_shot
data_files:
- split: default
path: hindi_question_answering_zero_shot/default-*
- split: train
path: hindi_question_answering_zero_shot/train-*
- split: validation
path: hindi_question_answering_zero_shot/validation-*
- config_name: hindu_knowledge_zero_shot
data_files:
- split: default
path: hindu_knowledge_zero_shot/default-*
- split: train
path: hindu_knowledge_zero_shot/train-*
- split: validation
path: hindu_knowledge_zero_shot/validation-*
- config_name: hinglish_toxicity_zero_shot
data_files:
- split: default
path: hinglish_toxicity_zero_shot/default-*
- split: train
path: hinglish_toxicity_zero_shot/train-*
- split: validation
path: hinglish_toxicity_zero_shot/validation-*
- config_name: human_organs_senses_zero_shot
data_files:
- split: default
path: human_organs_senses_zero_shot/default-*
- split: train
path: human_organs_senses_zero_shot/train-*
- split: validation
path: human_organs_senses_zero_shot/validation-*
- config_name: hyperbaton_zero_shot
data_files:
- split: default
path: hyperbaton_zero_shot/default-*
- split: train
path: hyperbaton_zero_shot/train-*
- split: validation
path: hyperbaton_zero_shot/validation-*
- config_name: identify_math_theorems_zero_shot
data_files:
- split: default
path: identify_math_theorems_zero_shot/default-*
- split: train
path: identify_math_theorems_zero_shot/train-*
- split: validation
path: identify_math_theorems_zero_shot/validation-*
- config_name: identify_odd_metaphor_zero_shot
data_files:
- split: default
path: identify_odd_metaphor_zero_shot/default-*
- split: train
path: identify_odd_metaphor_zero_shot/train-*
- split: validation
path: identify_odd_metaphor_zero_shot/validation-*
- config_name: implicatures_zero_shot
data_files:
- split: default
path: implicatures_zero_shot/default-*
- split: train
path: implicatures_zero_shot/train-*
- split: validation
path: implicatures_zero_shot/validation-*
- config_name: implicit_relations_zero_shot
data_files:
- split: default
path: implicit_relations_zero_shot/default-*
- split: train
path: implicit_relations_zero_shot/train-*
- split: validation
path: implicit_relations_zero_shot/validation-*
- config_name: intent_recognition_zero_shot
data_files:
- split: default
path: intent_recognition_zero_shot/default-*
- split: train
path: intent_recognition_zero_shot/train-*
- split: validation
path: intent_recognition_zero_shot/validation-*
- config_name: international_phonetic_alphabet_nli_zero_shot
data_files:
- split: default
path: international_phonetic_alphabet_nli_zero_shot/default-*
- split: train
path: international_phonetic_alphabet_nli_zero_shot/train-*
- split: validation
path: international_phonetic_alphabet_nli_zero_shot/validation-*
- config_name: international_phonetic_alphabet_transliterate_zero_shot
data_files:
- split: default
path: international_phonetic_alphabet_transliterate_zero_shot/default-*
- split: train
path: international_phonetic_alphabet_transliterate_zero_shot/train-*
- split: validation
path: international_phonetic_alphabet_transliterate_zero_shot/validation-*
- config_name: intersect_geometry_zero_shot
data_files:
- split: default
path: intersect_geometry_zero_shot/default-*
- split: train
path: intersect_geometry_zero_shot/train-*
- split: validation
path: intersect_geometry_zero_shot/validation-*
- config_name: irony_identification_zero_shot
data_files:
- split: default
path: irony_identification_zero_shot/default-*
- split: train
path: irony_identification_zero_shot/train-*
- split: validation
path: irony_identification_zero_shot/validation-*
- config_name: kanji_ascii_zero_shot
data_files:
- split: default
path: kanji_ascii_zero_shot/default-*
- split: train
path: kanji_ascii_zero_shot/train-*
- split: validation
path: kanji_ascii_zero_shot/validation-*
- config_name: kannada_zero_shot
data_files:
- split: default
path: kannada_zero_shot/default-*
- split: train
path: kannada_zero_shot/train-*
- split: validation
path: kannada_zero_shot/validation-*
- config_name: key_value_maps_zero_shot
data_files:
- split: default
path: key_value_maps_zero_shot/default-*
- split: train
path: key_value_maps_zero_shot/train-*
- split: validation
path: key_value_maps_zero_shot/validation-*
- config_name: known_unknowns_zero_shot
data_files:
- split: default
path: known_unknowns_zero_shot/default-*
- split: train
path: known_unknowns_zero_shot/train-*
- split: validation
path: known_unknowns_zero_shot/validation-*
- config_name: language_games_zero_shot
data_files:
- split: default
path: language_games_zero_shot/default-*
- split: train
path: language_games_zero_shot/train-*
- split: validation
path: language_games_zero_shot/validation-*
- config_name: language_identification_zero_shot
data_files:
- split: default
path: language_identification_zero_shot/default-*
- split: train
path: language_identification_zero_shot/train-*
- split: validation
path: language_identification_zero_shot/validation-*
- config_name: linguistic_mappings_zero_shot
data_files:
- split: default
path: linguistic_mappings_zero_shot/default-*
- split: train
path: linguistic_mappings_zero_shot/train-*
- split: validation
path: linguistic_mappings_zero_shot/validation-*
- config_name: linguistics_puzzles_zero_shot
data_files:
- split: default
path: linguistics_puzzles_zero_shot/default-*
- split: train
path: linguistics_puzzles_zero_shot/train-*
- split: validation
path: linguistics_puzzles_zero_shot/validation-*
- config_name: list_functions_zero_shot
data_files:
- split: default
path: list_functions_zero_shot/default-*
- split: train
path: list_functions_zero_shot/train-*
- split: validation
path: list_functions_zero_shot/validation-*
- config_name: logic_grid_puzzle_zero_shot
data_files:
- split: default
path: logic_grid_puzzle_zero_shot/default-*
- split: train
path: logic_grid_puzzle_zero_shot/train-*
- split: validation
path: logic_grid_puzzle_zero_shot/validation-*
- config_name: logical_args_zero_shot
data_files:
- split: default
path: logical_args_zero_shot/default-*
- split: train
path: logical_args_zero_shot/train-*
- split: validation
path: logical_args_zero_shot/validation-*
- config_name: logical_deduction_zero_shot
data_files:
- split: default
path: logical_deduction_zero_shot/default-*
- split: train
path: logical_deduction_zero_shot/train-*
- split: validation
path: logical_deduction_zero_shot/validation-*
- config_name: logical_fallacy_detection_zero_shot
data_files:
- split: default
path: logical_fallacy_detection_zero_shot/default-*
- split: train
path: logical_fallacy_detection_zero_shot/train-*
- split: validation
path: logical_fallacy_detection_zero_shot/validation-*
- config_name: logical_sequence_zero_shot
data_files:
- split: default
path: logical_sequence_zero_shot/default-*
- split: train
path: logical_sequence_zero_shot/train-*
- split: validation
path: logical_sequence_zero_shot/validation-*
- config_name: mathematical_induction_zero_shot
data_files:
- split: default
path: mathematical_induction_zero_shot/default-*
- split: train
path: mathematical_induction_zero_shot/train-*
- split: validation
path: mathematical_induction_zero_shot/validation-*
- config_name: matrixshapes_zero_shot
data_files:
- split: default
path: matrixshapes_zero_shot/default-*
- split: train
path: matrixshapes_zero_shot/train-*
- split: validation
path: matrixshapes_zero_shot/validation-*
- config_name: metaphor_boolean_zero_shot
data_files:
- split: default
path: metaphor_boolean_zero_shot/default-*
- split: train
path: metaphor_boolean_zero_shot/train-*
- split: validation
path: metaphor_boolean_zero_shot/validation-*
- config_name: metaphor_understanding_zero_shot
data_files:
- split: default
path: metaphor_understanding_zero_shot/default-*
- split: train
path: metaphor_understanding_zero_shot/train-*
- split: validation
path: metaphor_understanding_zero_shot/validation-*
- config_name: minute_mysteries_qa_zero_shot
data_files:
- split: default
path: minute_mysteries_qa_zero_shot/default-*
- split: train
path: minute_mysteries_qa_zero_shot/train-*
- split: validation
path: minute_mysteries_qa_zero_shot/validation-*
- config_name: misconceptions_russian_zero_shot
data_files:
- split: default
path: misconceptions_russian_zero_shot/default-*
- split: train
path: misconceptions_russian_zero_shot/train-*
- split: validation
path: misconceptions_russian_zero_shot/validation-*
- config_name: misconceptions_zero_shot
data_files:
- split: default
path: misconceptions_zero_shot/default-*
- split: train
path: misconceptions_zero_shot/train-*
- split: validation
path: misconceptions_zero_shot/validation-*
- config_name: mnist_ascii_zero_shot
data_files:
- split: default
path: mnist_ascii_zero_shot/default-*
- split: train
path: mnist_ascii_zero_shot/train-*
- split: validation
path: mnist_ascii_zero_shot/validation-*
- config_name: modified_arithmetic_zero_shot
data_files:
- split: default
path: modified_arithmetic_zero_shot/default-*
- split: train
path: modified_arithmetic_zero_shot/train-*
- split: validation
path: modified_arithmetic_zero_shot/validation-*
- config_name: moral_permissibility_zero_shot
data_files:
- split: default
path: moral_permissibility_zero_shot/default-*
- split: train
path: moral_permissibility_zero_shot/train-*
- split: validation
path: moral_permissibility_zero_shot/validation-*
- config_name: movie_dialog_same_or_different_zero_shot
data_files:
- split: default
path: movie_dialog_same_or_different_zero_shot/default-*
- split: train
path: movie_dialog_same_or_different_zero_shot/train-*
- split: validation
path: movie_dialog_same_or_different_zero_shot/validation-*
- config_name: movie_recommendation_zero_shot
data_files:
- split: default
path: movie_recommendation_zero_shot/default-*
- split: train
path: movie_recommendation_zero_shot/train-*
- split: validation
path: movie_recommendation_zero_shot/validation-*
- config_name: mult_data_wrangling_zero_shot
data_files:
- split: default
path: mult_data_wrangling_zero_shot/default-*
- split: train
path: mult_data_wrangling_zero_shot/train-*
- split: validation
path: mult_data_wrangling_zero_shot/validation-*
- config_name: multiemo_zero_shot
data_files:
- split: default
path: multiemo_zero_shot/default-*
- split: train
path: multiemo_zero_shot/train-*
- split: validation
path: multiemo_zero_shot/validation-*
- config_name: natural_instructions_zero_shot
data_files:
- split: default
path: natural_instructions_zero_shot/default-*
- split: train
path: natural_instructions_zero_shot/train-*
- split: validation
path: natural_instructions_zero_shot/validation-*
- config_name: navigate_zero_shot
data_files:
- split: default
path: navigate_zero_shot/default-*
- split: train
path: navigate_zero_shot/train-*
- split: validation
path: navigate_zero_shot/validation-*
- config_name: nonsense_words_grammar_zero_shot
data_files:
- split: default
path: nonsense_words_grammar_zero_shot/default-*
- split: train
path: nonsense_words_grammar_zero_shot/train-*
- split: validation
path: nonsense_words_grammar_zero_shot/validation-*
- config_name: novel_concepts_zero_shot
data_files:
- split: default
path: novel_concepts_zero_shot/default-*
- split: train
path: novel_concepts_zero_shot/train-*
- split: validation
path: novel_concepts_zero_shot/validation-*
- config_name: object_counting_zero_shot
data_files:
- split: default
path: object_counting_zero_shot/default-*
- split: train
path: object_counting_zero_shot/train-*
- split: validation
path: object_counting_zero_shot/validation-*
- config_name: odd_one_out_zero_shot
data_files:
- split: default
path: odd_one_out_zero_shot/default-*
- split: train
path: odd_one_out_zero_shot/train-*
- split: validation
path: odd_one_out_zero_shot/validation-*
- config_name: operators_zero_shot
data_files:
- split: default
path: operators_zero_shot/default-*
- split: train
path: operators_zero_shot/train-*
- split: validation
path: operators_zero_shot/validation-*
- config_name: paragraph_segmentation_zero_shot
data_files:
- split: default
path: paragraph_segmentation_zero_shot/default-*
- split: train
path: paragraph_segmentation_zero_shot/train-*
- split: validation
path: paragraph_segmentation_zero_shot/validation-*
- config_name: parsinlu_qa_zero_shot
data_files:
- split: default
path: parsinlu_qa_zero_shot/default-*
- split: train
path: parsinlu_qa_zero_shot/train-*
- split: validation
path: parsinlu_qa_zero_shot/validation-*
- config_name: parsinlu_reading_comprehension_zero_shot
data_files:
- split: default
path: parsinlu_reading_comprehension_zero_shot/default-*
- split: train
path: parsinlu_reading_comprehension_zero_shot/train-*
- split: validation
path: parsinlu_reading_comprehension_zero_shot/validation-*
- config_name: penguins_in_a_table_zero_shot
data_files:
- split: default
path: penguins_in_a_table_zero_shot/default-*
- split: train
path: penguins_in_a_table_zero_shot/train-*
- split: validation
path: penguins_in_a_table_zero_shot/validation-*
- config_name: periodic_elements_zero_shot
data_files:
- split: default
path: periodic_elements_zero_shot/default-*
- split: train
path: periodic_elements_zero_shot/train-*
- split: validation
path: periodic_elements_zero_shot/validation-*
- config_name: persian_idioms_zero_shot
data_files:
- split: default
path: persian_idioms_zero_shot/default-*
- split: train
path: persian_idioms_zero_shot/train-*
- split: validation
path: persian_idioms_zero_shot/validation-*
- config_name: phrase_relatedness_zero_shot
data_files:
- split: default
path: phrase_relatedness_zero_shot/default-*
- split: train
path: phrase_relatedness_zero_shot/train-*
- split: validation
path: phrase_relatedness_zero_shot/validation-*
- config_name: physical_intuition_zero_shot
data_files:
- split: default
path: physical_intuition_zero_shot/default-*
- split: train
path: physical_intuition_zero_shot/train-*
- split: validation
path: physical_intuition_zero_shot/validation-*
- config_name: physics_questions_zero_shot
data_files:
- split: default
path: physics_questions_zero_shot/default-*
- split: train
path: physics_questions_zero_shot/train-*
- split: validation
path: physics_questions_zero_shot/validation-*
- config_name: physics_zero_shot
data_files:
- split: default
path: physics_zero_shot/default-*
- split: train
path: physics_zero_shot/train-*
- split: validation
path: physics_zero_shot/validation-*
- config_name: play_dialog_same_or_different_zero_shot
data_files:
- split: default
path: play_dialog_same_or_different_zero_shot/default-*
- split: train
path: play_dialog_same_or_different_zero_shot/train-*
- split: validation
path: play_dialog_same_or_different_zero_shot/validation-*
- config_name: polish_sequence_labeling_zero_shot
data_files:
- split: default
path: polish_sequence_labeling_zero_shot/default-*
- split: train
path: polish_sequence_labeling_zero_shot/train-*
- split: validation
path: polish_sequence_labeling_zero_shot/validation-*
- config_name: presuppositions_as_nli_zero_shot
data_files:
- split: default
path: presuppositions_as_nli_zero_shot/default-*
- split: train
path: presuppositions_as_nli_zero_shot/train-*
- split: validation
path: presuppositions_as_nli_zero_shot/validation-*
- config_name: qa_wikidata_zero_shot
data_files:
- split: default
path: qa_wikidata_zero_shot/default-*
- split: train
path: qa_wikidata_zero_shot/train-*
- split: validation
path: qa_wikidata_zero_shot/validation-*
- config_name: question_selection_zero_shot
data_files:
- split: default
path: question_selection_zero_shot/default-*
- split: train
path: question_selection_zero_shot/train-*
- split: validation
path: question_selection_zero_shot/validation-*
- config_name: real_or_fake_text_zero_shot
data_files:
- split: default
path: real_or_fake_text_zero_shot/default-*
- split: train
path: real_or_fake_text_zero_shot/train-*
- split: validation
path: real_or_fake_text_zero_shot/validation-*
- config_name: reasoning_about_colored_objects_zero_shot
data_files:
- split: default
path: reasoning_about_colored_objects_zero_shot/default-*
- split: train
path: reasoning_about_colored_objects_zero_shot/train-*
- split: validation
path: reasoning_about_colored_objects_zero_shot/validation-*
- config_name: repeat_copy_logic_zero_shot
data_files:
- split: default
path: repeat_copy_logic_zero_shot/default-*
- split: train
path: repeat_copy_logic_zero_shot/train-*
- split: validation
path: repeat_copy_logic_zero_shot/validation-*
- config_name: rephrase_zero_shot
data_files:
- split: default
path: rephrase_zero_shot/default-*
- split: train
path: rephrase_zero_shot/train-*
- split: validation
path: rephrase_zero_shot/validation-*
- config_name: riddle_sense_zero_shot
data_files:
- split: default
path: riddle_sense_zero_shot/default-*
- split: train
path: riddle_sense_zero_shot/train-*
- split: validation
path: riddle_sense_zero_shot/validation-*
- config_name: ruin_names_zero_shot
data_files:
- split: default
path: ruin_names_zero_shot/default-*
- split: train
path: ruin_names_zero_shot/train-*
- split: validation
path: ruin_names_zero_shot/validation-*
- config_name: salient_translation_error_detection_zero_shot
data_files:
- split: default
path: salient_translation_error_detection_zero_shot/default-*
- split: train
path: salient_translation_error_detection_zero_shot/train-*
- split: validation
path: salient_translation_error_detection_zero_shot/validation-*
- config_name: scientific_press_release_zero_shot
data_files:
- split: default
path: scientific_press_release_zero_shot/default-*
- split: train
path: scientific_press_release_zero_shot/train-*
- split: validation
path: scientific_press_release_zero_shot/validation-*
- config_name: semantic_parsing_in_context_sparc_zero_shot
data_files:
- split: default
path: semantic_parsing_in_context_sparc_zero_shot/default-*
- split: train
path: semantic_parsing_in_context_sparc_zero_shot/train-*
- split: validation
path: semantic_parsing_in_context_sparc_zero_shot/validation-*
- config_name: semantic_parsing_spider_zero_shot
data_files:
- split: default
path: semantic_parsing_spider_zero_shot/default-*
- split: train
path: semantic_parsing_spider_zero_shot/train-*
- split: validation
path: semantic_parsing_spider_zero_shot/validation-*
- config_name: sentence_ambiguity_zero_shot
data_files:
- split: default
path: sentence_ambiguity_zero_shot/default-*
- split: train
path: sentence_ambiguity_zero_shot/train-*
- split: validation
path: sentence_ambiguity_zero_shot/validation-*
- config_name: similarities_abstraction_zero_shot
data_files:
- split: default
path: similarities_abstraction_zero_shot/default-*
- split: train
path: similarities_abstraction_zero_shot/train-*
- split: validation
path: similarities_abstraction_zero_shot/validation-*
- config_name: simp_turing_concept_zero_shot
data_files:
- split: default
path: simp_turing_concept_zero_shot/default-*
- split: train
path: simp_turing_concept_zero_shot/train-*
- split: validation
path: simp_turing_concept_zero_shot/validation-*
- config_name: simple_arithmetic_json_multiple_choice_zero_shot
data_files:
- split: default
path: simple_arithmetic_json_multiple_choice_zero_shot/default-*
- split: train
path: simple_arithmetic_json_multiple_choice_zero_shot/train-*
- split: validation
path: simple_arithmetic_json_multiple_choice_zero_shot/validation-*
- config_name: simple_arithmetic_json_subtasks_zero_shot
data_files:
- split: default
path: simple_arithmetic_json_subtasks_zero_shot/default-*
- split: train
path: simple_arithmetic_json_subtasks_zero_shot/train-*
- split: validation
path: simple_arithmetic_json_subtasks_zero_shot/validation-*
- config_name: simple_arithmetic_json_zero_shot
data_files:
- split: default
path: simple_arithmetic_json_zero_shot/default-*
- split: train
path: simple_arithmetic_json_zero_shot/train-*
- split: validation
path: simple_arithmetic_json_zero_shot/validation-*
- config_name: simple_arithmetic_multiple_targets_json_zero_shot
data_files:
- split: default
path: simple_arithmetic_multiple_targets_json_zero_shot/default-*
- split: train
path: simple_arithmetic_multiple_targets_json_zero_shot/train-*
- split: validation
path: simple_arithmetic_multiple_targets_json_zero_shot/validation-*
- config_name: simple_ethical_questions_zero_shot
data_files:
- split: default
path: simple_ethical_questions_zero_shot/default-*
- split: train
path: simple_ethical_questions_zero_shot/train-*
- split: validation
path: simple_ethical_questions_zero_shot/validation-*
- config_name: simple_text_editing_zero_shot
data_files:
- split: default
path: simple_text_editing_zero_shot/default-*
- split: train
path: simple_text_editing_zero_shot/train-*
- split: validation
path: simple_text_editing_zero_shot/validation-*
- config_name: snarks_zero_shot
data_files:
- split: default
path: snarks_zero_shot/default-*
- split: train
path: snarks_zero_shot/train-*
- split: validation
path: snarks_zero_shot/validation-*
- config_name: social_iqa_zero_shot
data_files:
- split: default
path: social_iqa_zero_shot/default-*
- split: train
path: social_iqa_zero_shot/train-*
- split: validation
path: social_iqa_zero_shot/validation-*
- config_name: social_support_zero_shot
data_files:
- split: default
path: social_support_zero_shot/default-*
- split: train
path: social_support_zero_shot/train-*
- split: validation
path: social_support_zero_shot/validation-*
- config_name: sports_understanding_zero_shot
data_files:
- split: default
path: sports_understanding_zero_shot/default-*
- split: train
path: sports_understanding_zero_shot/train-*
- split: validation
path: sports_understanding_zero_shot/validation-*
- config_name: strange_stories_zero_shot
data_files:
- split: default
path: strange_stories_zero_shot/default-*
- split: train
path: strange_stories_zero_shot/train-*
- split: validation
path: strange_stories_zero_shot/validation-*
- config_name: strategyqa_zero_shot
data_files:
- split: default
path: strategyqa_zero_shot/default-*
- split: train
path: strategyqa_zero_shot/train-*
- split: validation
path: strategyqa_zero_shot/validation-*
- config_name: sufficient_information_zero_shot
data_files:
- split: default
path: sufficient_information_zero_shot/default-*
- split: train
path: sufficient_information_zero_shot/train-*
- split: validation
path: sufficient_information_zero_shot/validation-*
- config_name: suicide_risk_zero_shot
data_files:
- split: default
path: suicide_risk_zero_shot/default-*
- split: train
path: suicide_risk_zero_shot/train-*
- split: validation
path: suicide_risk_zero_shot/validation-*
- config_name: swahili_english_proverbs_zero_shot
data_files:
- split: default
path: swahili_english_proverbs_zero_shot/default-*
- split: train
path: swahili_english_proverbs_zero_shot/train-*
- split: validation
path: swahili_english_proverbs_zero_shot/validation-*
- config_name: swedish_to_german_proverbs_zero_shot
data_files:
- split: default
path: swedish_to_german_proverbs_zero_shot/default-*
- split: train
path: swedish_to_german_proverbs_zero_shot/train-*
- split: validation
path: swedish_to_german_proverbs_zero_shot/validation-*
- config_name: symbol_interpretation_zero_shot
data_files:
- split: default
path: symbol_interpretation_zero_shot/default-*
- split: train
path: symbol_interpretation_zero_shot/train-*
- split: validation
path: symbol_interpretation_zero_shot/validation-*
- config_name: temporal_sequences_zero_shot
data_files:
- split: default
path: temporal_sequences_zero_shot/default-*
- split: train
path: temporal_sequences_zero_shot/train-*
- split: validation
path: temporal_sequences_zero_shot/validation-*
- config_name: tense_zero_shot
data_files:
- split: default
path: tense_zero_shot/default-*
- split: train
path: tense_zero_shot/train-*
- split: validation
path: tense_zero_shot/validation-*
- config_name: timedial_zero_shot
data_files:
- split: default
path: timedial_zero_shot/default-*
- split: train
path: timedial_zero_shot/train-*
- split: validation
path: timedial_zero_shot/validation-*
- config_name: topical_chat_zero_shot
data_files:
- split: default
path: topical_chat_zero_shot/default-*
- split: train
path: topical_chat_zero_shot/train-*
- split: validation
path: topical_chat_zero_shot/validation-*
- config_name: tracking_shuffled_objects_zero_shot
data_files:
- split: default
path: tracking_shuffled_objects_zero_shot/default-*
- split: train
path: tracking_shuffled_objects_zero_shot/train-*
- split: validation
path: tracking_shuffled_objects_zero_shot/validation-*
- config_name: understanding_fables_zero_shot
data_files:
- split: default
path: understanding_fables_zero_shot/default-*
- split: train
path: understanding_fables_zero_shot/train-*
- split: validation
path: understanding_fables_zero_shot/validation-*
- config_name: undo_permutation_zero_shot
data_files:
- split: default
path: undo_permutation_zero_shot/default-*
- split: train
path: undo_permutation_zero_shot/train-*
- split: validation
path: undo_permutation_zero_shot/validation-*
- config_name: unit_conversion_zero_shot
data_files:
- split: default
path: unit_conversion_zero_shot/default-*
- split: train
path: unit_conversion_zero_shot/train-*
- split: validation
path: unit_conversion_zero_shot/validation-*
- config_name: unit_interpretation_zero_shot
data_files:
- split: default
path: unit_interpretation_zero_shot/default-*
- split: train
path: unit_interpretation_zero_shot/train-*
- split: validation
path: unit_interpretation_zero_shot/validation-*
- config_name: unnatural_in_context_learning_zero_shot
data_files:
- split: default
path: unnatural_in_context_learning_zero_shot/default-*
- split: train
path: unnatural_in_context_learning_zero_shot/train-*
- split: validation
path: unnatural_in_context_learning_zero_shot/validation-*
- config_name: vitaminc_fact_verification_zero_shot
data_files:
- split: default
path: vitaminc_fact_verification_zero_shot/default-*
- split: train
path: vitaminc_fact_verification_zero_shot/train-*
- split: validation
path: vitaminc_fact_verification_zero_shot/validation-*
- config_name: what_is_the_tao_zero_shot
data_files:
- split: default
path: what_is_the_tao_zero_shot/default-*
- split: train
path: what_is_the_tao_zero_shot/train-*
- split: validation
path: what_is_the_tao_zero_shot/validation-*
- config_name: which_wiki_edit_zero_shot
data_files:
- split: default
path: which_wiki_edit_zero_shot/default-*
- split: train
path: which_wiki_edit_zero_shot/train-*
- split: validation
path: which_wiki_edit_zero_shot/validation-*
- config_name: winowhy_zero_shot
data_files:
- split: default
path: winowhy_zero_shot/default-*
- split: train
path: winowhy_zero_shot/train-*
- split: validation
path: winowhy_zero_shot/validation-*
- config_name: word_sorting_zero_shot
data_files:
- split: default
path: word_sorting_zero_shot/default-*
- split: train
path: word_sorting_zero_shot/train-*
- split: validation
path: word_sorting_zero_shot/validation-*
- config_name: word_unscrambling_zero_shot
data_files:
- split: default
path: word_unscrambling_zero_shot/default-*
- split: train
path: word_unscrambling_zero_shot/train-*
- split: validation
path: word_unscrambling_zero_shot/validation-*
---
# Dataset Card for "bigbench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bhatvineet/shrutilipi_mr | bhatvineet | "2023-03-13T18:54:45Z" | 15,204 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-03-13T07:12:52Z" | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 114253169328.11655
num_examples: 474332
- name: test
num_bytes: 39048725811.21545
num_examples: 158111
download_size: 147662822982
dataset_size: 153301895139.332
---
# Dataset Card for "shrutilipi_mr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TIGER-Lab/VideoFeedback | TIGER-Lab | "2024-08-10T23:30:40Z" | 15,180 | 27 | [
"task_categories:video-classification",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.15252",
"region:us",
"video"
] | [
"video-classification"
] | "2024-04-26T13:42:39Z" | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- video-classification
pretty_name: VideoFeedback
tags:
- video
dataset_info:
- config_name: annotated
features:
- name: id
dtype: string
- name: images
sequence: string
- name: text prompt
dtype: string
- name: video link
dtype: string
- name: visual quality
dtype: int64
- name: temporal consistency
dtype: int64
- name: dynamic degree
dtype: int64
- name: text-to-video alignment
dtype: int64
- name: factual consistency
dtype: int64
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: test
num_bytes: 1348268
num_examples: 680
- name: train
num_bytes: 65281005
num_examples: 32901
download_size: 45128599
dataset_size: 66629273
- config_name: real
features:
- name: id
dtype: string
- name: images
sequence: string
- name: text prompt
dtype: string
- name: video link
dtype: string
- name: visual quality
dtype: int64
- name: temporal consistency
dtype: int64
- name: dynamic degree
dtype: int64
- name: text-to-video alignment
dtype: int64
- name: factual consistency
dtype: int64
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 8072782
num_examples: 4000
- name: test
num_bytes: 162240
num_examples: 80
download_size: 3963450
dataset_size: 8235022
configs:
- config_name: annotated
data_files:
- split: train
path: annotated/train-*
- split: test
path: annotated/test-*
- config_name: real
data_files:
- split: test
path: real/test-*
- split: train
path: real/train-*
---
[📃Paper](https://arxiv.org/abs/2406.15252) | [🌐Website](https://tiger-ai-lab.github.io/VideoScore/) | [💻Github](https://github.com/TIGER-AI-Lab/VideoScore) | [🛢️Datasets](https://huggingface.co/datasets/TIGER-Lab/VideoFeedback) | [🤗Model](https://huggingface.co/TIGER-Lab/VideoScore) | [🤗Demo](https://huggingface.co/spaces/TIGER-Lab/VideoScore)
## Overview
VideoFeedback contains a total of 37.6K text-to-video pairs from 11 popular video generative models,
with some real-world videos as data augmentation.
The videos are annotated by raters for five evaluation dimensions:
Visual Quality, Temporal Consistency, Dynamic Degree,
Text-to-Video Alignment and Factual Consistency, each on a 1-4 scoring scale.
VideoFeedback is used for training [VideoScore](https://huggingface.co/TIGER-Lab/VideoScore).
Below we show the detailed description of our VideoFeedback dataset.

## Load Dataset
- Load certain split and certain config of dataset
```python
from datasets import load_dataset
dataset = load_dataset("TIGER-Lab/VideoFeedback", name="annotated", split="test")
print(dataset[0].keys())
print(dataset[0]["id"])
"""
dict_keys(['id', 'images', 'text prompt', 'video link', 'visual quality', 'temporal consistency', 'dynamic degree', 'text-to-video alignment', 'factual consistency', 'conversations'])
7001917
"""
```
- Load all subsets/configs of certain split
```python
from datasets import get_dataset_config_names, load_dataset
for config_name in get_dataset_config_names("TIGER-Lab/VideoFeedback"):
test_data = load_dataset("TIGER-Lab/VideoFeedback", name=config_name, split="test")
print(test_data[0]["id"])
"""
7001917
p006515
"""
```
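- Read the five annotated dimension scores from a loaded example (a minimal sketch; the dimensions are stored as separate integer columns on the 1-4 scale)
```python
from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/VideoFeedback", name="annotated", split="test")

# The five human-annotated dimensions, each an integer on the 1-4 scale.
score_fields = [
    "visual quality",
    "temporal consistency",
    "dynamic degree",
    "text-to-video alignment",
    "factual consistency",
]
example = dataset[0]
print({field: example[field] for field in score_fields})
```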
## Citation
```bibtex
@article{he2024videoscore,
title = {VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation},
author = {He, Xuan and Jiang, Dongfu and Zhang, Ge and Ku, Max and Soni, Achint and Siu, Sherman and Chen, Haonan and Chandra, Abhranil and Jiang, Ziyan and Arulraj, Aaran and Wang, Kai and Do, Quy Duc and Ni, Yuansheng and Lyu, Bohan and Narsupalli, Yaswanth and Fan, Rongqi and Lyu, Zhiheng and Lin, Yuchen and Chen, Wenhu},
journal = {ArXiv},
year = {2024},
volume={abs/2406.15252},
url = {https://arxiv.org/abs/2406.15252},
}
```
|
alvations/c4p0-x1-en-es | alvations | "2024-03-24T03:55:08Z" | 15,147 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-23T10:03:20Z" | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 2308
num_examples: 2
download_size: 15489
dataset_size: 2308
configs:
- config_name: default
data_files:
- split: train
path: 1f24f16745a166b0/train-*
---
|
openslr/librispeech_asr | openslr | "2024-08-14T10:48:50Z" | 15,144 | 140 | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"automatic-speech-recognition",
"audio-classification"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.100
num_bytes: 6619683041
num_examples: 28539
- name: train.360
num_bytes: 23898214592
num_examples: 104014
- name: validation
num_bytes: 359572231
num_examples: 2703
- name: test
num_bytes: 367705423
num_examples: 2620
download_size: 30121377654
dataset_size: 31245175287
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.500
num_bytes: 31810256902
num_examples: 148688
- name: validation
num_bytes: 337283304
num_examples: 2864
- name: test
num_bytes: 352396474
num_examples: 2939
download_size: 31236565377
dataset_size: 32499936680
- config_name: all
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6627791685
num_examples: 28539
- name: train.clean.360
num_bytes: 23927767570
num_examples: 104014
- name: train.other.500
num_bytes: 31852502880
num_examples: 148688
- name: validation.clean
num_bytes: 359505691
num_examples: 2703
- name: validation.other
num_bytes: 337213112
num_examples: 2864
- name: test.clean
num_bytes: 368449831
num_examples: 2620
- name: test.other
num_bytes: 353231518
num_examples: 2939
download_size: 61357943031
dataset_size: 63826462287
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:[email protected])
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A short sketch of this access pattern is given after this field list.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
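A minimal sketch of the `audio` access pattern described above, using the `clean` validation split (the decoded array is returned as a NumPy array at 16 kHz):

```
from datasets import load_dataset

dev_clean = load_dataset("openslr/librispeech_asr", "clean", split="validation")

# Query the row first and then the "audio" key, so that only this one file is
# decoded and resampled instead of the whole split.
sample = dev_clean[0]
audio = sample["audio"]
print(sample["text"])
print(audio["sampling_rate"], audio["array"].shape)
```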
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate sizes of 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
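A minimal sketch of loading one of these subsets (here the 100-hour training split of the `clean` configuration; the archives for this configuration total roughly 30 GB):

```
from datasets import load_dataset

# "clean" and "other" are the two main configurations; the "all" configuration
# combines their splits (train.clean.100, train.other.500, and so on).
train_100 = load_dataset("openslr/librispeech_asr", "clean", split="train.100")
print(train_100)
```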
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
omegalabsinc/omega-voice | omegalabsinc | "2025-03-26T01:25:33Z" | 15,135 | 1 | [
"license:mit",
"region:us"
] | null | "2024-11-16T01:44:48Z" | ---
license: mit
---
|
lmqg/qg_koquad | lmqg | "2022-12-02T18:53:42Z" | 15,115 | 8 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"source_datasets:squad_es",
"language:ko",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2210.03992",
"region:us",
"question-generation"
] | [
"text-generation"
] | "2022-06-02T23:42:21Z" | ---
license: cc-by-4.0
pretty_name: KorQuAD for question generation
language: ko
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: squad_es
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_korquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [KorQuAD](https://huggingface.co/datasets/squad_kor_v1) for the question generation (QG) task.
Since the original dataset only contains training/validation sets, we manually sampled a test set from the training set, which
has no paragraph overlap with the training set.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Korean (ko)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"question": "함수해석학이 주목하는 탐구는?",
"paragraph": "변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구분야는 복소해석학이라고 한다. 함수해석학은 함수의 공간(특히 무한차원)의 탐구에 주목한다. 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.",
"answer": "함수의 공간(특히 무한차원)의 탐구",
"sentence": "함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다.",
"paragraph_sentence": '변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구 분야는 복소해석학이라고 한다. <hl> 함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다. <hl> 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.',
"paragraph_answer": '변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구 분야는 복소해석학이라고 한다. 함수해석학은 <hl> 함수의 공간(특히 무한차원)의 탐구 <hl>에 주목한다. 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.',
"sentence_answer": "함수해석학은 <hl> 함수의 공간(특히 무한차원)의 탐구 <hl> 에 주목한다."
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is intended as input for training a question generation model,
but each carries different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, while the
`paragraph_sentence` feature is for sentence-aware question generation.
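As an illustration, a minimal sketch (assuming the default configuration) of preparing answer-aware question generation pairs, where the `<hl>`-highlighted paragraph is the model input and the question is the generation target:

```
from datasets import load_dataset

korquad_qg = load_dataset("lmqg/qg_koquad", split="train")

# Answer-aware QG: the paragraph with the answer wrapped in <hl> ... <hl>
# tokens is the source text; the human-written question is the target.
example = korquad_qg[0]
source_text = example["paragraph_answer"]
target_text = example["question"]
print(source_text)
print(target_text)
```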
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|54556| 5766 |5766 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
google-research-datasets/nq_open | google-research-datasets | "2024-03-22T08:43:41Z" | 15,031 | 22 | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:extended|natural_questions",
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|natural_questions
task_categories:
- question-answering
task_ids:
- open-domain-qa
pretty_name: NQ-Open
dataset_info:
config_name: nq_open
features:
- name: question
dtype: string
- name: answer
sequence: string
splits:
- name: train
num_bytes: 6651236
num_examples: 87925
- name: validation
num_bytes: 313829
num_examples: 3610
download_size: 4678245
dataset_size: 6965065
configs:
- config_name: nq_open
data_files:
- split: train
path: nq_open/train-*
- split: validation
path: nq_open/validation-*
default: true
---
# Dataset Card for nq_open
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://efficientqa.github.io/
- **Repository:** https://github.com/google-research-datasets/natural-questions/tree/master/nq_open
- **Paper:** https://www.aclweb.org/anthology/P19-1612.pdf
- **Leaderboard:** https://ai.google.com/research/NaturalQuestions/efficientqa
- **Point of Contact:** [Mailing List]([email protected])
### Dataset Summary
The NQ-Open task, introduced by Lee et al. 2019,
is an open domain question answering benchmark that is derived from Natural Questions.
The goal is to predict an English answer string for an input English question.
All questions can be answered using the contents of English Wikipedia.
### Supported Tasks and Leaderboards
Open Domain Question-Answering,
EfficientQA Leaderboard: https://ai.google.com/research/NaturalQuestions/efficientqa
### Languages
English (`en`)
## Dataset Structure
### Data Instances
```
{
"question": "names of the metropolitan municipalities in south africa",
"answer": [
"Mangaung Metropolitan Municipality",
"Nelson Mandela Bay Metropolitan Municipality",
"eThekwini Metropolitan Municipality",
"City of Tshwane Metropolitan Municipality",
"City of Johannesburg Metropolitan Municipality",
"Buffalo City Metropolitan Municipality",
"City of Ekurhuleni Metropolitan Municipality"
]
}
```
### Data Fields
- `question` - Input open domain question.
- `answer` - List of possible answers to the question
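Because `answer` is a list of acceptable strings, predictions are typically scored by exact match against any reference after light normalization. The following is a simplified sketch, not the official evaluation script (which applies further normalization such as article removal):

```
import string


def normalize(text):
    # Lowercase, drop punctuation, and collapse whitespace before comparing.
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())


def exact_match(prediction, answers):
    # A prediction is counted correct if it matches any reference answer.
    return normalize(prediction) in {normalize(a) for a in answers}


print(exact_match("eThekwini Metropolitan Municipality",
                  ["eThekwini Metropolitan Municipality",
                   "Buffalo City Metropolitan Municipality"]))
```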
### Data Splits
- Train: 87925
- Validation: 3610
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
Natural Questions contains questions from aggregated queries to Google Search (Kwiatkowski et al., 2019). To gather an open version of this dataset, we only keep questions with short answers and discard the given evidence document. Answers with many tokens often resemble extractive snippets rather than canonical answers, so we discard answers with more than 5 tokens.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
Evaluating on this diverse set of question-answer pairs is crucial, because all existing datasets have inherent biases that are problematic for open domain QA systems with learned retrieval.
In the Natural Questions dataset the question askers do not already know the answer. This accurately reflects a distribution of genuine information-seeking questions.
However, annotators must separately find correct answers, which requires assistance from automatic tools and can introduce a moderate bias towards results from the tool.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
All of the Natural Questions data is released under the
[CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@article{doi:10.1162/tacl\_a\_00276,
author = {Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and Toutanova, Kristina and Jones, Llion and Kelcey, Matthew and Chang, Ming-Wei and Dai, Andrew M. and Uszkoreit, Jakob and Le, Quoc and Petrov, Slav},
title = {Natural Questions: A Benchmark for Question Answering Research},
journal = {Transactions of the Association for Computational Linguistics},
volume = {7},
number = {},
pages = {453-466},
year = {2019},
doi = {10.1162/tacl\_a\_00276},
URL = {
https://doi.org/10.1162/tacl_a_00276
},
eprint = {
https://doi.org/10.1162/tacl_a_00276
},
abstract = { We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 examples with 5-way annotated sequestered as test data. We present experiments validating quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature. }
}
@inproceedings{lee-etal-2019-latent,
title = "Latent Retrieval for Weakly Supervised Open Domain Question Answering",
author = "Lee, Kenton and
Chang, Ming-Wei and
Toutanova, Kristina",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1612",
doi = "10.18653/v1/P19-1612",
pages = "6086--6096",
abstract = "Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.",
}
```
### Contributions
Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput) for adding this dataset. |
dair-ai/emotion | dair-ai | "2024-08-08T06:10:47Z" | 14,999 | 335 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"emotion-classification"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: emotion
pretty_name: Emotion
tags:
- emotion-classification
dataset_info:
- config_name: split
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 1741533
num_examples: 16000
- name: validation
num_bytes: 214695
num_examples: 2000
- name: test
num_bytes: 217173
num_examples: 2000
download_size: 1287193
dataset_size: 2173401
- config_name: unsplit
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 45444017
num_examples: 416809
download_size: 26888538
dataset_size: 45444017
configs:
- config_name: split
data_files:
- split: train
path: split/train-*
- split: validation
path: split/validation-*
- split: test
path: split/test-*
default: true
- config_name: unsplit
data_files:
- split: train
path: unsplit/train-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "emotion"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 16.13 MB
- **Size of the generated dataset:** 47.62 MB
- **Total amount of disk used:** 63.75 MB
### Dataset Summary
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
"label": 0
}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
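A minimal sketch of recovering the emotion names from the integer labels via the `ClassLabel` feature:

```
from datasets import load_dataset

emotion = load_dataset("dair-ai/emotion", "split", split="train")

# "label" is a ClassLabel feature, so int2str maps class ids back to names.
example = emotion[0]
print(example["text"])
print(example["label"], "->", emotion.features["label"].int2str(example["label"]))
```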
### Data Splits
The dataset has 2 configurations:
- split: with a total of 20_000 examples split into train, validation and test
- unsplit: with a total of 416_809 examples in a single train split
| name | train | validation | test |
|---------|-------:|-----------:|-----:|
| split | 16000 | 2000 | 2000 |
| unsplit | 416809 | n/a | n/a |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset should be used for educational and research purposes only.
### Citation Information
If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
|
INo0121/low_quality_call_voice_preprocessed | INo0121 | "2023-09-21T13:25:07Z" | 14,860 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-09-08T05:10:37Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 64088254376
num_examples: 66720
- name: test
num_bytes: 7476961712
num_examples: 7784
- name: valid
num_bytes: 7476975416
num_examples: 7784
download_size: 521083513
dataset_size: 79042191504
---
# Dataset Card for "low_quality_call_voice_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wecover/OPUS_TED2020 | wecover | "2024-01-31T11:42:51Z" | 14,860 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-31T07:37:04Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: '*/*/train.parquet'
- split: valid
path: '*/*/valid.parquet'
- split: test
path: '*/*/test.parquet'
- config_name: af
data_files:
- split: train
path: '*/*af*/train.parquet'
- split: test
path: '*/*af*/test.parquet'
- split: valid
path: '*/*af*/valid.parquet'
- config_name: am
data_files:
- split: train
path: '*/*am*/train.parquet'
- split: test
path: '*/*am*/test.parquet'
- split: valid
path: '*/*am*/valid.parquet'
- config_name: ar
data_files:
- split: train
path: '*/*ar*/train.parquet'
- split: test
path: '*/*ar*/test.parquet'
- split: valid
path: '*/*ar*/valid.parquet'
- config_name: az
data_files:
- split: train
path: '*/*az*/train.parquet'
- split: test
path: '*/*az*/test.parquet'
- split: valid
path: '*/*az*/valid.parquet'
- config_name: be
data_files:
- split: train
path: '*/*be*/train.parquet'
- split: test
path: '*/*be*/test.parquet'
- split: valid
path: '*/*be*/valid.parquet'
- config_name: bg
data_files:
- split: train
path: '*/*bg*/train.parquet'
- split: test
path: '*/*bg*/test.parquet'
- split: valid
path: '*/*bg*/valid.parquet'
- config_name: bn
data_files:
- split: train
path: '*/*bn*/train.parquet'
- split: test
path: '*/*bn*/test.parquet'
- split: valid
path: '*/*bn*/valid.parquet'
- config_name: bs
data_files:
- split: train
path: '*/*bs*/train.parquet'
- split: test
path: '*/*bs*/test.parquet'
- split: valid
path: '*/*bs*/valid.parquet'
- config_name: ca
data_files:
- split: train
path: '*/*ca*/train.parquet'
- split: test
path: '*/*ca*/test.parquet'
- split: valid
path: '*/*ca*/valid.parquet'
- config_name: cs
data_files:
- split: train
path: '*/*cs*/train.parquet'
- split: test
path: '*/*cs*/test.parquet'
- split: valid
path: '*/*cs*/valid.parquet'
- config_name: da
data_files:
- split: train
path: '*/*da*/train.parquet'
- split: test
path: '*/*da*/test.parquet'
- split: valid
path: '*/*da*/valid.parquet'
- config_name: de
data_files:
- split: train
path: '*/*de*/train.parquet'
- split: test
path: '*/*de*/test.parquet'
- split: valid
path: '*/*de*/valid.parquet'
- config_name: el
data_files:
- split: train
path: '*/*el*/train.parquet'
- split: test
path: '*/*el*/test.parquet'
- split: valid
path: '*/*el*/valid.parquet'
- config_name: en
data_files:
- split: train
path: '*/*en*/train.parquet'
- split: test
path: '*/*en*/test.parquet'
- split: valid
path: '*/*en*/valid.parquet'
- config_name: eo
data_files:
- split: train
path: '*/*eo*/train.parquet'
- split: test
path: '*/*eo*/test.parquet'
- split: valid
path: '*/*eo*/valid.parquet'
- config_name: es
data_files:
- split: train
path: '*/*es*/train.parquet'
- split: test
path: '*/*es*/test.parquet'
- split: valid
path: '*/*es*/valid.parquet'
- config_name: et
data_files:
- split: train
path: '*/*et*/train.parquet'
- split: test
path: '*/*et*/test.parquet'
- split: valid
path: '*/*et*/valid.parquet'
- config_name: eu
data_files:
- split: train
path: '*/*eu*/train.parquet'
- split: test
path: '*/*eu*/test.parquet'
- split: valid
path: '*/*eu*/valid.parquet'
- config_name: fa
data_files:
- split: train
path: '*/*fa*/train.parquet'
- split: test
path: '*/*fa*/test.parquet'
- split: valid
path: '*/*fa*/valid.parquet'
- config_name: fi
data_files:
- split: train
path: '*/*fi*/train.parquet'
- split: test
path: '*/*fi*/test.parquet'
- split: valid
path: '*/*fi*/valid.parquet'
- config_name: fr
data_files:
- split: train
path: '*/*fr*/train.parquet'
- split: test
path: '*/*fr*/test.parquet'
- split: valid
path: '*/*fr*/valid.parquet'
- config_name: ga
data_files:
- split: train
path: '*/*ga*/train.parquet'
- split: test
path: '*/*ga*/test.parquet'
- split: valid
path: '*/*ga*/valid.parquet'
- config_name: gl
data_files:
- split: train
path: '*/*gl*/train.parquet'
- split: test
path: '*/*gl*/test.parquet'
- split: valid
path: '*/*gl*/valid.parquet'
- config_name: he
data_files:
- split: train
path: '*/*he*/train.parquet'
- split: test
path: '*/*he*/test.parquet'
- split: valid
path: '*/*he*/valid.parquet'
- config_name: hi
data_files:
- split: train
path: '*/*hi*/train.parquet'
- split: test
path: '*/*hi*/test.parquet'
- split: valid
path: '*/*hi*/valid.parquet'
- config_name: hr
data_files:
- split: train
path: '*/*hr*/train.parquet'
- split: test
path: '*/*hr*/test.parquet'
- split: valid
path: '*/*hr*/valid.parquet'
- config_name: hu
data_files:
- split: train
path: '*/*hu*/train.parquet'
- split: test
path: '*/*hu*/test.parquet'
- split: valid
path: '*/*hu*/valid.parquet'
- config_name: hy
data_files:
- split: train
path: '*/*hy*/train.parquet'
- split: test
path: '*/*hy*/test.parquet'
- split: valid
path: '*/*hy*/valid.parquet'
- config_name: id
data_files:
- split: train
path: '*/*id*/train.parquet'
- split: test
path: '*/*id*/test.parquet'
- split: valid
path: '*/*id*/valid.parquet'
- config_name: is
data_files:
- split: train
path: '*/*is*/train.parquet'
- split: test
path: '*/*is*/test.parquet'
- split: valid
path: '*/*is*/valid.parquet'
- config_name: it
data_files:
- split: train
path: '*/*it*/train.parquet'
- split: test
path: '*/*it*/test.parquet'
- split: valid
path: '*/*it*/valid.parquet'
- config_name: ja
data_files:
- split: train
path: '*/*ja*/train.parquet'
- split: test
path: '*/*ja*/test.parquet'
- split: valid
path: '*/*ja*/valid.parquet'
- config_name: ka
data_files:
- split: train
path: '*/*ka*/train.parquet'
- split: test
path: '*/*ka*/test.parquet'
- split: valid
path: '*/*ka*/valid.parquet'
- config_name: kk
data_files:
- split: train
path: '*/*kk*/train.parquet'
- split: test
path: '*/*kk*/test.parquet'
- split: valid
path: '*/*kk*/valid.parquet'
- config_name: km
data_files:
- split: train
path: '*/*km*/train.parquet'
- split: test
path: '*/*km*/test.parquet'
- split: valid
path: '*/*km*/valid.parquet'
- config_name: kn
data_files:
- split: train
path: '*/*kn*/train.parquet'
- split: test
path: '*/*kn*/test.parquet'
- split: valid
path: '*/*kn*/valid.parquet'
- config_name: ko
data_files:
- split: train
path: '*/*ko*/train.parquet'
- split: test
path: '*/*ko*/test.parquet'
- split: valid
path: '*/*ko*/valid.parquet'
- config_name: ku
data_files:
- split: train
path: '*/*ku*/train.parquet'
- split: test
path: '*/*ku*/test.parquet'
- split: valid
path: '*/*ku*/valid.parquet'
- config_name: lo
data_files:
- split: train
path: '*/*lo*/train.parquet'
- split: test
path: '*/*lo*/test.parquet'
- split: valid
path: '*/*lo*/valid.parquet'
- config_name: lt
data_files:
- split: train
path: '*/*lt*/train.parquet'
- split: test
path: '*/*lt*/test.parquet'
- split: valid
path: '*/*lt*/valid.parquet'
- config_name: mg
data_files:
- split: train
path: '*/*mg*/train.parquet'
- split: test
path: '*/*mg*/test.parquet'
- split: valid
path: '*/*mg*/valid.parquet'
- config_name: mk
data_files:
- split: train
path: '*/*mk*/train.parquet'
- split: test
path: '*/*mk*/test.parquet'
- split: valid
path: '*/*mk*/valid.parquet'
- config_name: ml
data_files:
- split: train
path: '*/*ml*/train.parquet'
- split: test
path: '*/*ml*/test.parquet'
- split: valid
path: '*/*ml*/valid.parquet'
- config_name: mn
data_files:
- split: train
path: '*/*mn*/train.parquet'
- split: test
path: '*/*mn*/test.parquet'
- split: valid
path: '*/*mn*/valid.parquet'
- config_name: mr
data_files:
- split: train
path: '*/*mr*/train.parquet'
- split: test
path: '*/*mr*/test.parquet'
- split: valid
path: '*/*mr*/valid.parquet'
- config_name: ms
data_files:
- split: train
path: '*/*ms*/train.parquet'
- split: test
path: '*/*ms*/test.parquet'
- split: valid
path: '*/*ms*/valid.parquet'
- config_name: my
data_files:
- split: train
path: '*/*my*/train.parquet'
- split: test
path: '*/*my*/test.parquet'
- split: valid
path: '*/*my*/valid.parquet'
- config_name: ne
data_files:
- split: train
path: '*/*ne*/train.parquet'
- split: test
path: '*/*ne*/test.parquet'
- split: valid
path: '*/*ne*/valid.parquet'
- config_name: nl
data_files:
- split: train
path: '*/*nl*/train.parquet'
- split: test
path: '*/*nl*/test.parquet'
- split: valid
path: '*/*nl*/valid.parquet'
- config_name: pl
data_files:
- split: train
path: '*/*pl*/train.parquet'
- split: test
path: '*/*pl*/test.parquet'
- split: valid
path: '*/*pl*/valid.parquet'
- config_name: pt
data_files:
- split: train
path: '*/*pt*/train.parquet'
- split: test
path: '*/*pt*/test.parquet'
- split: valid
path: '*/*pt*/valid.parquet'
- config_name: ro
data_files:
- split: train
path: '*/*ro*/train.parquet'
- split: test
path: '*/*ro*/test.parquet'
- split: valid
path: '*/*ro*/valid.parquet'
- config_name: ru
data_files:
- split: train
path: '*/*ru*/train.parquet'
- split: test
path: '*/*ru*/test.parquet'
- split: valid
path: '*/*ru*/valid.parquet'
- config_name: sk
data_files:
- split: train
path: '*/*sk*/train.parquet'
- split: test
path: '*/*sk*/test.parquet'
- split: valid
path: '*/*sk*/valid.parquet'
- config_name: sl
data_files:
- split: train
path: '*/*sl*/train.parquet'
- split: test
path: '*/*sl*/test.parquet'
- split: valid
path: '*/*sl*/valid.parquet'
- config_name: so
data_files:
- split: train
path: '*/*so*/train.parquet'
- split: test
path: '*/*so*/test.parquet'
- split: valid
path: '*/*so*/valid.parquet'
- config_name: sq
data_files:
- split: train
path: '*/*sq*/train.parquet'
- split: test
path: '*/*sq*/test.parquet'
- split: valid
path: '*/*sq*/valid.parquet'
- config_name: sr
data_files:
- split: train
path: '*/*sr*/train.parquet'
- split: test
path: '*/*sr*/test.parquet'
- split: valid
path: '*/*sr*/valid.parquet'
- config_name: sv
data_files:
- split: train
path: '*/*sv*/train.parquet'
- split: test
path: '*/*sv*/test.parquet'
- split: valid
path: '*/*sv*/valid.parquet'
- config_name: sw
data_files:
- split: train
path: '*/*sw*/train.parquet'
- split: test
path: '*/*sw*/test.parquet'
- split: valid
path: '*/*sw*/valid.parquet'
- config_name: ta
data_files:
- split: train
path: '*/*ta*/train.parquet'
- split: test
path: '*/*ta*/test.parquet'
- split: valid
path: '*/*ta*/valid.parquet'
- config_name: te
data_files:
- split: train
path: '*/*te*/train.parquet'
- split: test
path: '*/*te*/test.parquet'
- split: valid
path: '*/*te*/valid.parquet'
- config_name: th
data_files:
- split: train
path: '*/*th*/train.parquet'
- split: test
path: '*/*th*/test.parquet'
- split: valid
path: '*/*th*/valid.parquet'
- config_name: tl
data_files:
- split: train
path: '*/*tl*/train.parquet'
- split: test
path: '*/*tl*/test.parquet'
- split: valid
path: '*/*tl*/valid.parquet'
- config_name: tr
data_files:
- split: train
path: '*/*tr*/train.parquet'
- split: test
path: '*/*tr*/test.parquet'
- split: valid
path: '*/*tr*/valid.parquet'
- config_name: ug
data_files:
- split: train
path: '*/*ug*/train.parquet'
- split: test
path: '*/*ug*/test.parquet'
- split: valid
path: '*/*ug*/valid.parquet'
- config_name: uk
data_files:
- split: train
path: '*/*uk*/train.parquet'
- split: test
path: '*/*uk*/test.parquet'
- split: valid
path: '*/*uk*/valid.parquet'
- config_name: ur
data_files:
- split: train
path: '*/*ur*/train.parquet'
- split: test
path: '*/*ur*/test.parquet'
- split: valid
path: '*/*ur*/valid.parquet'
- config_name: uz
data_files:
- split: train
path: '*/*uz*/train.parquet'
- split: test
path: '*/*uz*/test.parquet'
- split: valid
path: '*/*uz*/valid.parquet'
- config_name: vi
data_files:
- split: train
path: '*/*vi*/train.parquet'
- split: test
path: '*/*vi*/test.parquet'
- split: valid
path: '*/*vi*/valid.parquet'
- config_name: ky
data_files:
- split: train
path: '*/*ky*/train.parquet'
- split: test
path: '*/*ky*/test.parquet'
- split: valid
path: '*/*ky*/valid.parquet'
- config_name: si
data_files:
- split: train
path: '*/*si*/train.parquet'
- split: test
path: '*/*si*/test.parquet'
- split: valid
path: '*/*si*/valid.parquet'
- config_name: as
data_files:
- split: train
path: '*/*as*/train.parquet'
- split: test
path: '*/*as*/test.parquet'
- split: valid
path: '*/*as*/valid.parquet'
- config_name: ha
data_files:
- split: train
path: '*/*ha*/train.parquet'
- split: test
path: '*/*ha*/test.parquet'
- split: valid
path: '*/*ha*/valid.parquet'
- config_name: la
data_files:
- split: train
path: '*/*la*/train.parquet'
- split: test
path: '*/*la*/test.parquet'
- split: valid
path: '*/*la*/valid.parquet'
- config_name: pa
data_files:
- split: train
path: '*/*pa*/train.parquet'
- split: test
path: '*/*pa*/test.parquet'
- split: valid
path: '*/*pa*/valid.parquet'
- config_name: ps
data_files:
- split: train
path: '*/*ps*/train.parquet'
- split: test
path: '*/*ps*/test.parquet'
- split: valid
path: '*/*ps*/valid.parquet'
---
|
ucf-crcv/GAEA-Train | ucf-crcv | "2025-03-22T23:15:05Z" | 14,794 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2503.16423",
"region:us"
] | null | "2025-03-15T05:49:10Z" | ---
dataset_info:
features:
- name: file_name
dtype: image
- name: image
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: lat
dtype: string
- name: lon
dtype: string
- name: location
dtype: string
- name: question_type
dtype: string
- name: subset
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 83569722798.802
num_examples: 1580531
download_size: 79157132726
dataset_size: 83569722798.802
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
<h1 align="left"> GAEA: A Geolocation Aware Conversational Model</h1>
<h3 align="left"> Summary</h3>
<p align="justify"> Image geolocalization, in which, traditionally, an AI model predicts the precise GPS coordinates of an image is a challenging task with many downstream applications. However, the user cannot utilize the model to further their knowledge other than the GPS coordinate; the model lacks an understanding of the location and the conversational ability to communicate with the user. In recent days, with tremendous progress of large multimodal models (LMMs)—proprietary and open-source—researchers attempted to geolocalize images via LMMs. However, the issues remain unaddressed; beyond general tasks, for more specialized downstream tasks, one of which is geolocalization, LMMs struggle. In this work, we propose to solve this problem by introducing a conversational model `GAEA` that can provide information regarding the location of an image, as required by a user. No large-scale dataset enabling the training of such a model exists. Thus we propose a comprehensive dataset `GAEA-Train` with 800K images and around 1.6M question-answer pairs constructed by leveraging OpenStreetMap (OSM) attributes and geographical context clues. For quantitative evaluation, we propose a diverse benchmark, `GAEA-Bench` comprising 4K image-text pairs to evaluate conversational capabilities equipped with diverse question types. We consider 11 state-of-the-art open-source and proprietary LMMs and demonstrate that `GAEA` significantly outperforms the best open-source model, LLaVA-OneVision by 25.69% and best proprietary model, GPT-4o by 8.28%. We will publicly release our dataset and codes. </p>
## `GAEA` is the first open-source conversational model that pairs conversational capabilities with global-scale geolocalization.
[](https://arxiv.org/abs/2503.16423)
[](https://huggingface.co/collections/ucf-crcv/gaea-67d514a61d48eb1708b13a08)
[](https://ucf-crcv.github.io/GAEA/)
**Main contributions:**
1) **`GAEA-Train: A Diverse Training Dataset:`** We propose GAEA-Train, a new dataset designed for training conversational image geolocalization models, incorporating diverse visual and contextual data.
2) **`GAEA-Bench: Evaluating Conversational Geolocalization:`** To assess conversational capabilities in geolocalization, we introduce GAEA-Bench, a benchmark featuring various question-answer formats.
3) **`GAEA: An Interactive Geolocalization Chatbot:`** We present GAEA, a conversational chatbot that extends beyond geolocalization to provide rich contextual insights about locations from images.
4) **`Benchmarking Against State-of-the-Art LMMs:`** We quantitatively compare our model’s performance against 8 open-source and 3 proprietary LMMs, including GPT-4o and Gemini-2.0-Flash.
<b> This page is dedicated to GAEA-Train </b>
<h2 align="left"> Data Collection and Annotation Pipeline</h2>
<p align="center">
<img src="Assets/GeoLLM_Flow.jpg" alt="Geo-LLM-Flow"></a>
</p>
<p align="justify"> <b> (Left)</b> GAEA-Train includes geographically diverse visual samples from various data sources, such as MP-16, GLD-v2, and CityGuesser68k. <b> (Middle)</b> We also incorporate OpenStreetMap (OSM) metadata and auxiliary context for each image, ranging from climate zones to geographical clues about the country. <b>(Right)</b> Using open-source LLMs and GPT-4o, we generate four diverse question-answer pairs across geolocation, reasoning, and conversational subsets. </p>
<h2 align="left"> Data Statistics</h2>
| Statistic | Value |
|-------------------------------------|------------|
| **Total images** | 822,951 |
| **Total cities / countries** | 41,481 / 234 |
| **Total questions** | 1,580,531 |
| **Total geo-localization questions** | 822,951 |
| **Total explanatory captions** | 384,947 |
| **Total open-ended questions** | 267,668 |
| **Total multiple-choice questions** | 48,673 |
| **Total true/false questions** | 56,292 |
<h2 align="left"> Qualitative Examples</h2>
<p align="center">
<img src="Assets/qual_osm_metadata.jpg" alt="GeoLLM-Bench"></a>
</p>
<p align="justify"> We showcase various question-types including multiple-choice, true/false, short and long VQAs generated using an open-source model on our GAEA-Train dataset. We carefully select geographical tags from OSM metadata to generate diverse question types. </p>
<h2 align="left"> Dataset Structure</h2>
The general structure of our dataset looks like the following:
```
GAEA-1.6M/
|–– MP-16/
| |–– img_1.jpg
| |–– img_2.jpg
| |–– ... # remaining images
|–– GLD-v2/
| |–– landmark_img_1.jpg
| |–– landmark_img_2.jpg
| |–– ... # remaining images
|–– City_Guessr/
| |–– random_sample_1.jpg
| |–– random_sample_2.jpg
| |–– ... # remaining images
```
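The Hugging Face-hosted copy can also be streamed directly with the `datasets` library. The snippet below is a minimal sketch (our own illustration, not part of the original release); note that, per the `dataset_info` above, the decoded image is exposed under the `file_name` column while `image` holds a plain string.

```python
# Minimal sketch: stream a few GAEA-Train rows without downloading the full ~79 GB.
from datasets import load_dataset

ds = load_dataset("ucf-crcv/GAEA-Train", split="train", streaming=True)

for example in ds.take(3):
    # Per the schema above, `file_name` is the decoded image and `image` is a string field.
    print(example["question_type"], "|", example["question"][:80])
    print("answer  :", example["answer"][:80])
    print("location:", example["location"], f'({example["lat"]}, {example["lon"]})')
```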
---
# Citation
**BibTeX:**
```bibtex
@misc{campos2025gaeageolocationawareconversational,
title={GAEA: A Geolocation Aware Conversational Model},
author={Ron Campos and Ashmal Vayani and Parth Parag Kulkarni and Rohit Gupta and Aritra Dutta and Mubarak Shah},
year={2025},
eprint={2503.16423},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.16423},
}
```
---
## Licensing Information
We release our work under [CC BY-NC 4.0 License](https://creativecommons.org/licenses/by-nc/4.0/). The CC BY-NC 4.0 license allows others to share, remix, and adapt the work, as long as it's for non-commercial purposes and proper attribution is given to the original creator.
|
awettig/RedPajama-combined-15B-8k-llama | awettig | "2023-11-30T07:16:59Z" | 14,768 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-11-29T05:26:03Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 3181819992
num_examples: 29874
- name: train
num_bytes: 188533325564
num_examples: 1770133
download_size: 0
dataset_size: 191715145556
---
# Dataset Card for "RedPajama-combined-15B-8K-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
droid-ai/libero | droid-ai | "2025-03-12T01:15:39Z" | 14,735 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2025-03-12T01:08:44Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "libero",
"total_episodes": 1693,
"total_frames": 273465,
"total_tasks": 40,
"total_videos": 0,
"total_chunks": 2,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1693"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"is_first": {
"dtype": "bool",
"shape": [
1
]
},
"is_last": {
"dtype": "bool",
"shape": [
1
]
},
"is_terminal": {
"dtype": "bool",
"shape": [
1
]
},
"image": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"wrist_image": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"state": {
"dtype": "float32",
"shape": [
8
]
},
"joint_state": {
"dtype": "float32",
"shape": [
7
]
},
"actions": {
"dtype": "float32",
"shape": [
7
]
},
"discount": {
"dtype": "float32",
"shape": [
1
]
},
"reward": {
"dtype": "float32",
"shape": [
1
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
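As an illustration (our own sketch, not part of the LeRobot-generated card), the `data_path` template and `chunks_size` above determine where each episode's frames live. The episode index and the assumption of a local download below are hypothetical.

```python
# Sketch: resolve one episode's parquet file from the path template in meta/info.json.
import pandas as pd

data_path_template = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
chunks_size = 1000  # from meta/info.json above

episode_index = 42                            # hypothetical episode
episode_chunk = episode_index // chunks_size  # episodes 0..999 live in chunk 0

relative_path = data_path_template.format(
    episode_chunk=episode_chunk, episode_index=episode_index
)
print(relative_path)  # -> data/chunk-000/episode_000042.parquet

# Assuming the repo has been downloaded locally (e.g. via `huggingface-cli download`),
# each row carries the per-frame features listed above (state, actions, reward, ...).
df = pd.read_parquet(relative_path)
print(df.columns.tolist())
```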
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
deepghs/game_characters | deepghs | "2025-01-26T17:04:09Z" | 14,719 | 24 | [
"license:apache-2.0",
"region:us"
] | null | "2023-01-28T11:12:51Z" | ---
license: apache-2.0
---
# Database of Characters in Mobile Games
[](https://huggingface.co/datasets/deepghs/game_characters)
All the character in the following games are supported:
* Arknights (crawled from [https://prts.wiki](https://prts.wiki))
* Fate/Grand Order (crawled from [https://fgo.wiki](https://fgo.wiki))
* Azur Lane (crawled from [https://wiki.biligame.com/blhx](https://wiki.biligame.com/blhx))
* Girls' Front-Line (crawled from [https://iopwiki.com/](https://iopwiki.com/))
* Genshin Impact (crawled from [https://genshin-impact.fandom.com/ja/wiki/%E5%8E%9F%E7%A5%9E_Wiki](https://genshin-impact.fandom.com/ja/wiki/%E5%8E%9F%E7%A5%9E_Wiki))
The source code and Python library are hosted on [narugo1992/gchar](https://github.com/narugo1992/gchar), and the scheduled job is configured with GitHub Actions, so the data will be automatically updated to the latest version once a day.
More character data for other games is coming...
|
lmqg/qa_squadshifts_synthetic | lmqg | "2023-01-15T14:25:15Z" | 14,699 | 0 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2210.03992",
"region:us"
] | [
"question-answering"
] | "2022-12-20T08:31:18Z" | ---
license: cc-by-4.0
pretty_name: Synthetic QA dataset on SQuADShifts.
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_squadshifts_synthetic"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), made for the question-answering-based evaluation (QAE) of question generation models proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The test split is the original validation set of [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), on which the model should be evaluated.
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
TBA
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
bigbio/gad | bigbio | "2022-12-22T15:25:28Z" | 14,698 | 4 | [
"multilinguality:momolingual",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2022-09-26T03:36:32Z" | ---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: GAD
homepage: https://geneticassociationdb.nih.gov/
bigbio_pubmed: true
bigbio_public: true
bigbio_tasks:
- TEXT_CLASSIFICATION
paperswithcode_id: gad
---
# Dataset Card for GAD
## Dataset Description
- **Homepage:** https://geneticassociationdb.nih.gov/
- **Pubmed:** True
- **Public:** True
- **Tasks:** TXTCLASS
A corpus identifying associations between genes and diseases by a semi-automatic
annotation procedure based on the Genetic Association Database.
## Note about homepage
The homepage for this dataset is no longer reachable, but the URL is recorded here.
Data for this dataset was originally downloaded from a Google Drive
folder (the link used in the [BLURB benchmark data download script](https://microsoft.github.io/BLURB/submit.html)).
However, we host the data on the Hugging Face Hub for more reliable downloads and access.
## Citation Information
```
@article{Bravo2015,
doi = {10.1186/s12859-015-0472-9},
url = {https://doi.org/10.1186/s12859-015-0472-9},
year = {2015},
month = feb,
publisher = {Springer Science and Business Media {LLC}},
volume = {16},
number = {1},
author = {{\`{A}}lex Bravo and Janet Pi{\~{n}}ero and N{\'{u}}ria Queralt-Rosinach and Michael Rautschka and Laura I Furlong},
title = {Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research},
journal = {{BMC} Bioinformatics}
}
```
|
microsoft/orca-agentinstruct-1M-v1 | microsoft | "2024-11-01T00:14:29Z" | 14,620 | 432 | [
"task_categories:question-answering",
"language:en",
"license:cdla-permissive-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | "2024-10-25T23:41:54Z" | ---
language:
- en
license: cdla-permissive-2.0
size_categories:
- 1M<n<10M
task_categories:
- question-answering
dataset_info:
features:
- name: messages
dtype: string
splits:
- name: creative_content
num_bytes: 288747542
num_examples: 50000
- name: text_modification
num_bytes: 346421282
num_examples: 50000
- name: struct2text_flow
num_bytes: 251920604
num_examples: 50000
- name: rc
num_bytes: 282448904
num_examples: 50000
- name: rag
num_bytes: 421188673
num_examples: 50000
- name: text_extraction
num_bytes: 312246895
num_examples: 50000
- name: mcq
num_bytes: 230459938
num_examples: 99986
- name: follow_up
num_bytes: 881311205
num_examples: 99054
- name: analytical_reasoning
num_bytes: 100724491
num_examples: 25000
- name: fermi
num_bytes: 78109959
num_examples: 25000
- name: fs_cot_flow
num_bytes: 109007740
num_examples: 25000
- name: code_
num_bytes: 617418962
num_examples: 100000
- name: brain_teaser
num_bytes: 124523402
num_examples: 50000
- name: text_classification
num_bytes: 151217275
num_examples: 50000
- name: open_domain_qa
num_bytes: 616935002
num_examples: 272370
download_size: 2210440144
dataset_size: 4812681874
configs:
- config_name: default
data_files:
- split: creative_content
path: data/creative_content-*
- split: text_modification
path: data/text_modification-*
- split: struct2text_flow
path: data/struct2text_flow-*
- split: rc
path: data/rc-*
- split: rag
path: data/rag-*
- split: text_extraction
path: data/text_extraction-*
- split: mcq
path: data/mcq-*
- split: follow_up
path: data/follow_up-*
- split: analytical_reasoning
path: data/analytical_reasoning-*
- split: fermi
path: data/fermi-*
- split: fs_cot_flow
path: data/fs_cot_flow-*
- split: code_
path: data/code_-*
- split: brain_teaser
path: data/brain_teaser-*
- split: text_classification
path: data/text_classification-*
- split: open_domain_qa
path: data/open_domain_qa-*
---
### Dataset Card
This dataset is a fully synthetic set of instruction pairs where both the prompts and the responses have been synthetically generated, using the AgentInstruct framework.
AgentInstruct is an extensible agentic framework for synthetic data generation.
This dataset contains ~1 million instruction pairs generated by AgentInstruct, using only raw text content publicly available on the Web as seeds. The data covers different capabilities, such as text editing, creative writing, coding, reading comprehension, etc. The dataset can be used for instruction tuning of any base LLM.
Using the full AgentInstruct data (a superset of this dataset with ~25M instruction pairs), we post-trained Mistral-7b. When comparing the resulting model Orca-3-Mistral to Mistral-7b-Instruct (which uses the same base model), we observe significant improvements across many benchmarks. For example, 40% improvement on AGIEval, 19% improvement on MMLU, 54% improvement on GSM8K, 38% improvement on BBH and 45% improvement on AlpacaEval.
Please refer to AgentInstruct: Toward Generative Teaching with Agentic Flows for further details.
### Dataset Sources
**Repository:** microsoft/orca-agentinstruct-1M-v1
**Paper:** AgentInstruct: Toward Generative Teaching with Agentic Flows
### Direct Use
This is a general dataset created by the AgentInstruct methodology. It contains diverse synthetic data. It can be used for research on instruction tuning and on the use and generation of synthetic data.
### Out-of-Scope Use
This dataset is being shared for research purposes. For training models to perform real-world tasks, we recommend further testing and validation where needed.
This dataset is not intended for use in educational systems or organizations, or for use in health systems.
### Dataset Structure
### Data Instances
A typical data entry in the dataset consists of a conversation, i.e., a sequence of questions and their corresponding answers. Below is an example from the dataset:
```json
{
"messages": [
{
"role": "system",
"content": ""
},
{
"role": "user",
"content": "Argue the importance of having a strong understanding of physics and mechanics in the design of electron beam systems for semiconductor equipment."
},
{
"role": "assistant",
"content": "Having a strong understanding of physics and mechanics is crucial in the design of electron beam systems for semiconductor equipment. This is because electron beam systems rely heavily on the principles of physics and mechanics to function effectively and efficiently.\n\nFirstly, a strong understanding of physics is necessary to design the electron gun, which is the heart of the electron beam system. The electron gun generates and accelerates the electrons that are used to create the beam. The design of the electron gun must take into account the properties of electrons, such as their charge, mass, and velocity, as well as the principles of electromagnetism and quantum mechanics. Without a thorough understanding of these principles, it would be impossible to design an electron gun that can generate and accelerate electrons with the required energy and precision.\n\nSecondly, a strong understanding of mechanics is essential in the design of the electron beam system's optics. The optics are responsible for shaping and focusing the electron beam, which is critical for achieving high-resolution imaging and precise material processing. The design of the optics must take into account the principles of optics, such as refraction, reflection, and diffraction, as well as the mechanical properties of the components, such as their stiffness, thermal expansion, and vibration resistance. Without a thorough understanding of these principles, it would be impossible to design optics that can shape and focus the electron beam with the required precision and stability.\n\nIn conclusion, a strong understanding of physics and mechanics is essential in the design of electron beam systems for semiconductor equipment. Without this understanding, it would be impossible to design electron guns and optics that can generate and shape the electron beam with the required precision and stability. Therefore, designers of electron beam systems must have a solid foundation in physics and mechanics to ensure that their designs are effective and efficient."
},
{
"role": "user",
"content": "Could you perhaps rephrase your entire explanation using only haiku poetry? It would certainly give your answer a more artistic touch, don't you think?"
},
{
"role": "assistant",
"content": "Electron streams flow,\nPhysics guides their path and speed,\nSemiconduct's soul.\n\nElectron gun's heart,\nQuantum dance, charged particles,\nPrecision's fine art.\n\nOptics shape the beam,\nMechanics' firm hand guides light,\nFocused like a dream.\n\nIn the silicon dance,\nPhysics, mechanics embrace,\nAdvance our expanse.\n\nDesign's core must know,\nNature's laws to harness force,\nFor tech to grow, glow."
},
{
"role": "user",
"content": "Just to make sure I've got it right, your haikus suggest that the electron gun and optics are like artists, requiring precision and a deep understanding of natural laws to create their masterpiece?"
},
{
"role": "assistant",
"content": "Indeed, that's a poetic interpretation. The haikus suggest that the electron gun (the creator of the electron beam) and the optics (which shape and focus the beam) require a precise understanding of physics and mechanics, much like artists need a deep understanding of their tools and materials to create a masterpiece. The natural laws are the guidelines within which they must work to achieve the desired outcome in semiconductor equipment design."
},
{
"role": "user",
"content": "I'd like to understand more about the 'precision's fine art' you mentioned. Can you explain how precision is achieved in the design of the electron gun and optics?"
},
{
"role": "assistant",
"content": "Certainly! Achieving precision in the design of the electron gun and optics in electron beam systems involves several key factors:\n\n1. **Electron Gun Design:**\n - **Vacuum Environment:** The electron gun operates in a high vacuum to prevent electron scattering by air molecules, ensuring a precise and concentrated beam.\n - **Cathode Material:** The choice of cathode material affects the emission efficiency and stability of the electron beam. Materials like tungsten or lanthanum hexaboride are often used for their high melting points and low work functions.\n - **Anode Configuration:** The anode accelerates the electrons and its design, including the shape and voltage applied, determines the beam's energy and focus.\n - **Electromagnetic Lenses:** These lenses control the focus of the electron beam, and their design must account for spherical and chromatic aberrations to maintain precision.\n\n2. **Optics Design:**\n - **Electromagnetic Lenses:** Just like in the electron gun, these lenses in the optical column are used to focus and deflect the electron beam with high precision. The design and alignment of these lenses are critical for achieving the desired spot size and shape on the target.\n - **Stigmators:** These are used to correct for astigmatism in the electron beam, ensuring that the beam remains symmetrical and focused over a range of operating conditions.\n - **Apertures:** Placed at strategic points in the optical column, apertures help to shape the beam and eliminate electrons that stray from the desired path, improving resolution and reducing aberrations.\n - **Thermal Stability:** Precision in electron beam systems also depends on maintaining thermal stability to prevent drift or deformation of the optical components.\n\n3. **Control Systems:**\n - **Feedback Loops:** Precision is maintained through feedback systems that monitor the beam's characteristics and make real-time adjustments to the electron gun and optics to correct any deviations.\n - **Vibration Isolation:** The entire system must be isolated from external vibrations that could disturb the beam's path, often using pneumatic or magnetic isolation systems.\n\n4. **Material Science:**\n - **Material Properties:** The materials used in the construction of the electron gun and optics must be chosen for their stability under the operating conditions, including resistance to thermal expansion and electromagnetic interference.\n\n5. **Computational Modeling:**\n - **Simulation Software:** Advanced software is used to model the electron beam's behavior and interactions with the system's components, allowing for the optimization of the design before physical construction.\n\nIn summary, 'precision's fine art' in the context of electron beam systems refers to the meticulous design and engineering of the electron gun and optics, which are achieved through a deep understanding of physics and materials science, as well as the implementation of sophisticated control and feedback mechanisms."
}
],
}
```
### Data Fields
The dataset comprises the following fields:
messages: String representing a conversation in OpenAI format.
```json
[
{"role": "system", "content": "...."},
{"role": "user", "content": "...."},
{"role": "assistant", "content": "...."},
{"role": "user", "content": "...."},
{"role": "assistant", "content": "...."},
....
]
```
To read the conversation, use `json.loads()`.
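A minimal sketch of this (our own illustration, assuming the standard `datasets` API) is shown below:

```python
# Sketch: load one split and decode the JSON-encoded `messages` conversation.
import json
from datasets import load_dataset

ds = load_dataset("microsoft/orca-agentinstruct-1M-v1", split="creative_content")

messages = json.loads(ds[0]["messages"])  # list of {"role": ..., "content": ...} turns
for turn in messages:
    print(f'{turn["role"]}: {turn["content"][:60]}')
```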
### Data Splits
train
### Dataset Creation
### Source Data
Please refer to AgentInstruct: Toward Generative Teaching with Agentic Flows for further details.
### Data Collection and Processing
Please refer to AgentInstruct: Toward Generative Teaching with Agentic Flows for further details about the dataset construction.
### Who are the source data producers?
Microsoft
### Annotation process
We generate questions and answers using Azure GPT-4.
### Personal and Sensitive Information
None
### Bias, Risks, and Limitations
• This dataset is in English.
• The dataset inherits the biases, errors, and omissions known to exist in the data used for seed sources and the models used for data generation.
• This dataset is not intended to represent any specific domain, and contains generic data. However, the AgentInstruct methodology, which was used to create this dataset, can be used to generate high-quality domain specific data, which can be used to fine-tune any existing model for a specific domain.
• The dataset is synthetically generated and hence may contain inaccuracies that do not accurately reflect real-world phenomena.
• The synthetic nature of this dataset may limit its ability to generalize to real-world cases.
• The data is intended for research and experimentation on model training and synthetic data generation.
### Citation
If you find this work useful in your method, you can cite the paper as below:
```bibtex
@misc{
  title={AgentInstruct: Toward Generative Teaching with Agentic Flows},
  author={Arindam Mitra, Luciano Del Corro, Guoqing Zheng, Shweti Mahajan, Dany Rouhana, Andres Codas, Yadong Lu, Wei-ge Chen, Olga Vrousgos, Corby Rosset, Fillipe Silva, Hamed Khanpour, Yash Lara, Ahmed Awadallah},
  year={2024},
  eprint={2407.03502},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
### Dataset Card Contact
[email protected] |
cimec/lambada | cimec | "2024-01-04T14:16:25Z" | 14,493 | 58 | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|bookcorpus",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"long-range-dependency"
] | [
"text2text-generation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|bookcorpus
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: lambada
pretty_name: LAMBADA
tags:
- long-range-dependency
dataset_info:
config_name: plain_text
features:
- name: text
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 978174122
num_examples: 2662
- name: test
num_bytes: 1791823
num_examples: 5153
- name: validation
num_bytes: 1703482
num_examples: 4869
download_size: 552427340
dataset_size: 981669427
configs:
- config_name: plain_text
data_files:
- split: train
path: plain_text/train-*
- split: test
path: plain_text/test-*
- split: validation
path: plain_text/validation-*
default: true
---
# Dataset Card for LAMBADA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LAMBADA homepage](https://zenodo.org/record/2630551#.X8UP76pKiIa)
- **Paper:** [The LAMBADA dataset: Word prediction requiring a broad discourse context∗](https://www.aclweb.org/anthology/P16-1144.pdf)
- **Data:** https://doi.org/10.5281/zenodo.2630551
### Dataset Summary
The LAMBADA evaluates the capabilities of computational models
for text understanding by means of a word prediction task.
LAMBADA is a collection of narrative passages sharing the characteristic
that human subjects are able to guess their last word if
they are exposed to the whole passage, but not if they
only see the last sentence preceding the target word.
To succeed on LAMBADA, computational models cannot
simply rely on local context, but must be able to
keep track of information in the broader discourse.
The LAMBADA dataset is extracted from BookCorpus and
consists of 10'022 passages, divided into 4'869 development
and 5'153 test passages. The training data for language
models to be tested on LAMBADA include the full text
of 2'662 novels (disjoint from those in dev+test),
comprising 203 million words.
### Supported Tasks and Leaderboards
Long range dependency evaluated as (last) word prediction
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A data point is a text sequence (passage) including the context, the target sentence (the last one) and the target word. For each passage in the dev and the test splits, the word to be guessed is the last one.
The training data include the full text of 2'662 novels (disjoint from
those in dev+test), comprising more than 200M words. It consists of text from the same domain as the dev+test passages, but not filtered in any way.
Each training instance has a `category` field indicating which sub-category the book was extracted from. This field is not given for the dev and test splits.
An example looks like this:
```
{"category": "Mystery",
"text": "bob could have been called in at this point , but he was n't miffed at his exclusion at all . he was relieved at not being brought into this initial discussion with central command . `` let 's go make some grub , '' said bob as he turned to danny . danny did n't keep his stoic expression , but with a look of irritation got up and left the room with bob",
}
```
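Since the target is always the final word of the passage, it can be split off with a simple string operation; the following is a minimal sketch (not part of the original card) using the `datasets` library:

```python
# Sketch: load a dev passage and split off the target word (always the last token).
from datasets import load_dataset

lambada = load_dataset("cimec/lambada", "plain_text", split="validation")

passage = lambada[0]["text"]
context, target_word = passage.rsplit(" ", 1)
print("context (end):", context[-60:])
print("target word  :", target_word)
```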
### Data Fields
- `category`: the sub-category of books from which the book was extracted from. Only available for the training split.
- `text`: the text (concatenation of context, target sentence and target word). The word to be guessed is the last one.
### Data Splits
- train: 2'662 novels
- dev: 4'869 passages
- test: 5'153 passages
## Dataset Creation
### Curation Rationale
The dataset aims at evaluating the ability of language models to hold long-term contextual memories. Instances are extracted from books because they display long-term dependencies. In particular, the data are curated such that the target words are easy to guess by human subjects when they can look at the whole passage they come from, but nearly impossible if only the last sentence is considered.
### Source Data
#### Initial Data Collection and Normalization
The corpus was duplicated and potentially offensive material were filtered out with a stop word list.
#### Who are the source language producers?
The passages are extracted from novels from [Book Corpus](https://github.com/huggingface/datasets/tree/master/datasets/bookcorpus).
### Annotations
#### Annotation process
The authors required two consecutive subjects (paid crowdsourcers) to exactly match the missing word based on the whole passage (comprising the context and the target sentence), and made sure that no subject (out of ten) was able to provide it based on local context only, even when given 3 guesses.
#### Who are the annotators?
The text is self-annotated but was curated by asking (paid) crowdsourcers to guess the last word.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) (Creative Commons Attribution 4.0 International) license.
### Citation Information
```
@InProceedings{paperno-EtAl:2016:P16-1,
author = {Paperno, Denis and Kruszewski, Germ\'{a}n and Lazaridou,
Angeliki and Pham, Ngoc Quan and Bernardi, Raffaella and Pezzelle,
Sandro and Baroni, Marco and Boleda, Gemma and Fernandez, Raquel},
title = {The {LAMBADA} dataset: Word prediction requiring a broad
discourse context},
booktitle = {Proceedings of the 54th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers)},
month = {August},
year = {2016},
address = {Berlin, Germany},
publisher = {Association for Computational Linguistics},
pages = {1525--1534},
url = {http://www.aclweb.org/anthology/P16-1144}
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
gigant/tib_slides | gigant | "2025-01-28T14:40:52Z" | 14,424 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-01-27T01:44:02Z" | ---
dataset_info:
features:
- name: Image
dtype: image
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 131956494917.654
num_examples: 484843
download_size: 0
dataset_size: 131956494917.654
---
# Dataset Card for "tib_slides"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JonasLange/DBD-research-group | JonasLange | "2023-06-27T11:26:14Z" | 14,419 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-06-27T08:44:41Z" | ---
dataset_info:
features:
- name: ID
dtype: string
- name: Audio
dtype: audio
- name: Start Time (s)
dtype: string
- name: End Time (s)
dtype: string
- name: Low Freq (Hz)
dtype: string
- name: High Freq (Hz)
dtype: string
- name: Species eBird Code
dtype: string
- name: Call Type
dtype: 'null'
- name: Sex
dtype: 'null'
- name: Latitude
dtype: float64
- name: Longitude
dtype: float64
- name: Uncertainty
dtype: 'null'
- name: Microphone
dtype: string
- name: License
dtype: string
- name: Source
dtype: string
- name: BirdNet Training Data
dtype: bool
splits:
- name: train
num_bytes: 156231869250.216
num_examples: 10976
download_size: 538872707
dataset_size: 156231869250.216
---
# Dataset Card for "DBD-research-group"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
facebook/xnli | facebook | "2024-01-05T08:30:52Z" | 14,406 | 56 | [
"language:ar",
"language:bg",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:ru",
"language:sw",
"language:th",
"language:tr",
"language:ur",
"language:vi",
"language:zh",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-03-02T23:29:22Z" | ---
language:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
paperswithcode_id: xnli
pretty_name: Cross-lingual Natural Language Inference
dataset_info:
- config_name: all_languages
features:
- name: premise
dtype:
translation:
languages:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
- name: hypothesis
dtype:
translation_variable_languages:
languages:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
num_languages: 15
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 1581471691
num_examples: 392702
- name: test
num_bytes: 19387432
num_examples: 5010
- name: validation
num_bytes: 9566179
num_examples: 2490
download_size: 963942271
dataset_size: 1610425302
- config_name: ar
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 107399614
num_examples: 392702
- name: test
num_bytes: 1294553
num_examples: 5010
- name: validation
num_bytes: 633001
num_examples: 2490
download_size: 59215902
dataset_size: 109327168
- config_name: bg
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 125973225
num_examples: 392702
- name: test
num_bytes: 1573034
num_examples: 5010
- name: validation
num_bytes: 774061
num_examples: 2490
download_size: 66117878
dataset_size: 128320320
- config_name: de
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 84684140
num_examples: 392702
- name: test
num_bytes: 996488
num_examples: 5010
- name: validation
num_bytes: 494604
num_examples: 2490
download_size: 55973883
dataset_size: 86175232
- config_name: el
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 139753358
num_examples: 392702
- name: test
num_bytes: 1704785
num_examples: 5010
- name: validation
num_bytes: 841226
num_examples: 2490
download_size: 74551247
dataset_size: 142299369
- config_name: en
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 74444026
num_examples: 392702
- name: test
num_bytes: 875134
num_examples: 5010
- name: validation
num_bytes: 433463
num_examples: 2490
download_size: 50627367
dataset_size: 75752623
- config_name: es
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 81383284
num_examples: 392702
- name: test
num_bytes: 969813
num_examples: 5010
- name: validation
num_bytes: 478422
num_examples: 2490
download_size: 53677157
dataset_size: 82831519
- config_name: fr
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 85808779
num_examples: 392702
- name: test
num_bytes: 1029239
num_examples: 5010
- name: validation
num_bytes: 510104
num_examples: 2490
download_size: 55968680
dataset_size: 87348122
- config_name: hi
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 170593964
num_examples: 392702
- name: test
num_bytes: 2073073
num_examples: 5010
- name: validation
num_bytes: 1023915
num_examples: 2490
download_size: 70908548
dataset_size: 173690952
- config_name: ru
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 129859615
num_examples: 392702
- name: test
num_bytes: 1603466
num_examples: 5010
- name: validation
num_bytes: 786442
num_examples: 2490
download_size: 70702606
dataset_size: 132249523
- config_name: sw
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 69285725
num_examples: 392702
- name: test
num_bytes: 871651
num_examples: 5010
- name: validation
num_bytes: 429850
num_examples: 2490
download_size: 45564152
dataset_size: 70587226
- config_name: th
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 176062892
num_examples: 392702
- name: test
num_bytes: 2147015
num_examples: 5010
- name: validation
num_bytes: 1061160
num_examples: 2490
download_size: 77222045
dataset_size: 179271067
- config_name: tr
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 71637140
num_examples: 392702
- name: test
num_bytes: 934934
num_examples: 5010
- name: validation
num_bytes: 459308
num_examples: 2490
download_size: 48509680
dataset_size: 73031382
- config_name: ur
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 96441486
num_examples: 392702
- name: test
num_bytes: 1416241
num_examples: 5010
- name: validation
num_bytes: 699952
num_examples: 2490
download_size: 46682785
dataset_size: 98557679
- config_name: vi
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 101417430
num_examples: 392702
- name: test
num_bytes: 1190217
num_examples: 5010
- name: validation
num_bytes: 590680
num_examples: 2490
download_size: 57690058
dataset_size: 103198327
- config_name: zh
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 72224841
num_examples: 392702
- name: test
num_bytes: 777929
num_examples: 5010
- name: validation
num_bytes: 384851
num_examples: 2490
download_size: 48269855
dataset_size: 73387621
configs:
- config_name: all_languages
data_files:
- split: train
path: all_languages/train-*
- split: test
path: all_languages/test-*
- split: validation
path: all_languages/validation-*
- config_name: ar
data_files:
- split: train
path: ar/train-*
- split: test
path: ar/test-*
- split: validation
path: ar/validation-*
- config_name: bg
data_files:
- split: train
path: bg/train-*
- split: test
path: bg/test-*
- split: validation
path: bg/validation-*
- config_name: de
data_files:
- split: train
path: de/train-*
- split: test
path: de/test-*
- split: validation
path: de/validation-*
- config_name: el
data_files:
- split: train
path: el/train-*
- split: test
path: el/test-*
- split: validation
path: el/validation-*
- config_name: en
data_files:
- split: train
path: en/train-*
- split: test
path: en/test-*
- split: validation
path: en/validation-*
- config_name: es
data_files:
- split: train
path: es/train-*
- split: test
path: es/test-*
- split: validation
path: es/validation-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- split: test
path: fr/test-*
- split: validation
path: fr/validation-*
- config_name: hi
data_files:
- split: train
path: hi/train-*
- split: test
path: hi/test-*
- split: validation
path: hi/validation-*
- config_name: ru
data_files:
- split: train
path: ru/train-*
- split: test
path: ru/test-*
- split: validation
path: ru/validation-*
- config_name: sw
data_files:
- split: train
path: sw/train-*
- split: test
path: sw/test-*
- split: validation
path: sw/validation-*
- config_name: th
data_files:
- split: train
path: th/train-*
- split: test
path: th/test-*
- split: validation
path: th/validation-*
- config_name: tr
data_files:
- split: train
path: tr/train-*
- split: test
path: tr/test-*
- split: validation
path: tr/validation-*
- config_name: ur
data_files:
- split: train
path: ur/train-*
- split: test
path: ur/test-*
- split: validation
path: ur/validation-*
- config_name: vi
data_files:
- split: train
path: vi/train-*
- split: test
path: vi/test-*
- split: validation
path: vi/validation-*
- config_name: zh
data_files:
- split: train
path: zh/train-*
- split: test
path: zh/test-*
- split: validation
path: zh/validation-*
---
# Dataset Card for "xnli"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nyu.edu/projects/bowman/xnli/](https://www.nyu.edu/projects/bowman/xnli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7.74 GB
- **Size of the generated dataset:** 3.23 GB
- **Total amount of disk used:** 10.97 GB
### Dataset Summary
XNLI is a subset of a few thousand examples from MNLI which has been translated
into 14 different languages (some low-ish resource). As with MNLI, the goal is
to predict textual entailment (does sentence A imply/contradict/neither sentence
B) and is a classification task (given two sentences, predict one of three
labels).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all_languages
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 1.61 GB
- **Total amount of disk used:** 2.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "{\"language\": [\"ar\", \"bg\", \"de\", \"el\", \"en\", \"es\", \"fr\", \"hi\", \"ru\", \"sw\", \"th\", \"tr\", \"ur\", \"vi\", \"zh\"], \"translation\": [\"احد اع...",
"label": 0,
"premise": "{\"ar\": \"واحدة من رقابنا ستقوم بتنفيذ تعليماتك كلها بكل دقة\", \"bg\": \"един от нашите номера ще ви даде инструкции .\", \"de\": \"Eine ..."
}
```
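For example, a minimal sketch (our own illustration, not part of the original card) of accessing the per-language fields of the `all_languages` config:

```python
# Sketch: read the multilingual premise/hypothesis fields of the all_languages config.
from datasets import load_dataset

xnli = load_dataset("facebook/xnli", "all_languages", split="validation")

row = xnli[0]
print(row["premise"]["en"])  # `premise` maps each language code to its translation

# `hypothesis` stores parallel lists of language codes and translations
hypothesis = dict(zip(row["hypothesis"]["language"], row["hypothesis"]["translation"]))
print(hypothesis["en"], "-> label:", row["label"])
```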
#### ar
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 109.32 MB
- **Total amount of disk used:** 593.29 MB
An example of 'validation' looks as follows.
```
{
"hypothesis": "اتصل بأمه حالما أوصلته حافلة المدرسية.",
"label": 1,
"premise": "وقال، ماما، لقد عدت للمنزل."
}
```
#### bg
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 128.32 MB
- **Total amount of disk used:** 612.28 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "\"губиш нещата на следното ниво , ако хората си припомнят .\"...",
"label": 0,
"premise": "\"по време на сезона и предполагам , че на твоето ниво ще ги загубиш на следващото ниво , ако те решат да си припомнят отбора на ..."
}
```
#### de
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 86.17 MB
- **Total amount of disk used:** 570.14 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "Man verliert die Dinge auf die folgende Ebene , wenn sich die Leute erinnern .",
"label": 0,
"premise": "\"Du weißt , während der Saison und ich schätze , auf deiner Ebene verlierst du sie auf die nächste Ebene , wenn sie sich entschl..."
}
```
#### el
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 142.30 MB
- **Total amount of disk used:** 626.26 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "\"Τηλεφώνησε στη μαμά του μόλις το σχολικό λεωφορείο τον άφησε.\"...",
"label": 1,
"premise": "Και είπε, Μαμά, έφτασα στο σπίτι."
}
```
### Data Fields
The data fields are the same among all splits.
#### all_languages
- `premise`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `hypothesis`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### ar
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### bg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### de
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### el
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
### Data Splits
| name |train |validation|test|
|-------------|-----:|---------:|---:|
|all_languages|392702| 2490|5010|
|ar |392702| 2490|5010|
|bg |392702| 2490|5010|
|de |392702| 2490|5010|
|el |392702| 2490|5010|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross-lingual Sentence Representations},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing},
year = {2018},
publisher = {Association for Computational Linguistics},
location = {Brussels, Belgium},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
open-rl-leaderboard/results_v2 | open-rl-leaderboard | "2024-12-07T20:19:13Z" | 14,357 | 1 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-14T15:05:26Z" | ---
dataset_info:
features:
- name: user_id
dtype: string
- name: model_id
dtype: string
- name: sha
dtype: string
- name: status
dtype: string
- name: env_id
dtype: string
- name: episodic_returns
sequence: float64
splits:
- name: train
num_bytes: 17423994045
num_examples: 46711176
download_size: 14153678919
dataset_size: 17423994045
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "results_v2"
[Leaderboard](https://huggingface.co/spaces/open-rl-leaderboard/leaderboard)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cboettig/fishbase | cboettig | "2024-08-16T20:35:09Z" | 14,325 | 3 | [
"license:cc-by-nc-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-01T21:36:53Z" | ---
license: cc-by-nc-4.0
---
# Dataset Card for FishBase
<!-- Provide a quick summary of the dataset. -->
Snapshots of FishBase data tables used by the [rOpenSci package `rfishbase`](https://github.com/ropensci/rfishbase) and the FishBase API.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Carl Boettiger
- **License:** CC-BY-NC, FishBase.org Team
### Dataset Sources
<https://fishbase.org>, <https://sealifebase.org>, via exclusive export of backend MySQL databases.
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
This data is provided to facilitate access to FishBase statistics through `rfishbase`.
Curation has evolved over the past decade and now provides essentially direct access to versioned snapshots of the FishBase backend data.
### Out-of-Scope Use
Reuse of this data is not licensed for commercial applications. Developers should contact the FishBase team directly. See <https://fishbase.org> for details.
## Dataset Structure
Data provided here are raw CSV and parquet exports from the MySQL database backend powering <https://fishbase.org> and its sister site, <https://sealifebase.org>.
Please note that many of the tables here have evolved over decades and were never designed to support direct access by end users.
The data structure can be highly unintuitive, and common tasks may require joins across multiple tables. These files represent a snapshot in time, as indicated
by the version numbers (e.g., v24.07 is the data snapshot from July 2024).
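As a rough illustration of working with these snapshots (a sketch assuming `huggingface_hub` and `polars` are installed; file names are discovered at run time rather than assumed):
```python
from huggingface_hub import HfApi, hf_hub_download
import polars as pl

# List the parquet snapshots shipped in this repository instead of guessing paths.
api = HfApi()
parquet_files = [
    f for f in api.list_repo_files("cboettig/fishbase", repo_type="dataset")
    if f.endswith(".parquet")
]
print(parquet_files[:5])

# Download one table locally and inspect it; joins across tables are often needed.
local_path = hf_hub_download("cboettig/fishbase", parquet_files[0], repo_type="dataset")
table = pl.read_parquet(local_path)
print(table.shape, table.columns[:10])
```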
## Dataset Creation
Dataset creation relies on ingesting a MySQL dump provided roughly annually by the FishBase team to this maintainer. The current process imports the dump into
a MySQL (MariaDB) server in Docker, connects via the `duckdb` `mysql` extension, and exports the tables as parquet and CSV, as shown in the `creation.py` script.
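A minimal sketch of that export step is below. The connection parameters and table names are placeholders; the authoritative version is the repository's `creation.py` script.
```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL mysql; LOAD mysql;")
# Placeholder credentials: the real values live in creation.py / the Docker setup.
con.execute(
    "ATTACH 'host=localhost user=root password=secret port=3306 database=fishbase' "
    "AS fb (TYPE mysql)"
)

for table in ["species", "ecology"]:  # illustrative table names only
    con.execute(f"COPY (SELECT * FROM fb.{table}) TO '{table}.parquet' (FORMAT parquet)")
    con.execute(f"COPY (SELECT * FROM fb.{table}) TO '{table}.csv' (HEADER, DELIMITER ',')")
```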
### Source Data
See <https://fishbase.org> for details.
|
eduagarcia-temp/OSCAR-2301_meta | eduagarcia-temp | "2023-08-28T14:07:22Z" | 14,312 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-08-27T20:24:54Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: categories
sequence: string
- name: dedup
struct:
- name: exact_norm
struct:
- name: cluster_main_idx
dtype: int64
- name: cluster_size
dtype: int64
- name: exact_hash_idx
dtype: int64
- name: is_duplicate
dtype: bool
- name: minhash
struct:
- name: cluster_main_idx
dtype: int64
- name: cluster_size
dtype: int64
- name: is_duplicate
dtype: bool
- name: minhash_idx
dtype: int64
- name: harmful_pp
dtype: float64
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float64
- name: quality_warnings
sequence: string
- name: sentence_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float64
- name: tlsh
dtype: string
- name: warc_headers
struct:
- name: content-length
dtype: int64
- name: content-type
dtype: string
- name: warc-block-digest
dtype: string
- name: warc-date
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-record-id
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-type
dtype: string
splits:
- name: train
num_bytes: 127702717461
num_examples: 18031400
download_size: 40317121912
dataset_size: 127702717461
---
# Dataset Card for "OSCAR-2301_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
facebook/wiki_dpr | facebook | "2024-03-12T18:06:56Z" | 14,297 | 35 | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10M<n<100M",
"arxiv:2004.04906",
"region:us",
"text-search"
] | [
"fill-mask",
"text-generation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license: cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: Wiki-DPR
tags:
- text-search
dataset_info:
- config_name: psgs_w100.nq.exact
features:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 78419248156
num_examples: 21015300
download_size: 85288677114
dataset_size: 78419248156
- config_name: psgs_w100.nq.compressed
features:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 78419248156
num_examples: 21015300
download_size: 85288677114
dataset_size: 78419248156
- config_name: psgs_w100.nq.no_index
features:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 78419248156
num_examples: 21015300
download_size: 85288677114
dataset_size: 78419248156
- config_name: psgs_w100.multiset.exact
features:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 78419248156
num_examples: 21015300
download_size: 85289275855
dataset_size: 78419248156
- config_name: psgs_w100.multiset.compressed
features:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 78419248156
num_examples: 21015300
download_size: 85289275855
dataset_size: 78419248156
- config_name: psgs_w100.multiset.no_index
features:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 78419248156
num_examples: 21015300
download_size: 85289275855
dataset_size: 78419248156
---
# Dataset Card for "wiki_dpr"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/facebookresearch/DPR
- **Paper:** https://arxiv.org/abs/2004.04906
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This is the Wikipedia split used to evaluate the Dense Passage Retrieval (DPR) model.
It contains 21M passages from Wikipedia along with their DPR embeddings.
The Wikipedia articles were split into multiple, disjoint text blocks of 100 words, which serve as the passages.
The Wikipedia dump is the one from Dec. 20, 2018.
There are two types of DPR embeddings based on two different models:
- `nq`: the model is trained on the Natural Questions dataset
- `multiset`: the model is trained on multiple datasets
Additionally, a FAISS index can be created from the embeddings:
- `exact`: with an exact FAISS index (high RAM usage)
- `compressed`: with a compressed FAISS index (approximate, but lower RAM usage)
- `no_index`: without FAISS index
Finally, there is the possibility of generating the dataset without the embeddings:
- `no_embeddings`
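As a minimal usage sketch (illustrative only, since the train split alone is roughly 78 GB), one of the configurations above can be loaded with the `datasets` library:
```python
from datasets import load_dataset

# "no_index" skips building a FAISS index; "exact"/"compressed" build one at load
# time and therefore need correspondingly more RAM.
ds = load_dataset(
    "facebook/wiki_dpr",
    "psgs_w100.nq.no_index",
    split="train",
    trust_remote_code=True,  # may be required on recent `datasets` versions
)

passage = ds[0]
print(passage["title"], len(passage["embeddings"]))  # passage title and its 768-d embedding
```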
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
Each instance contains a paragraph of at most 100 words, as well as the title of the Wikipedia page it comes from, and the DPR embedding (a 768-dimensional vector).
#### psgs_w100.multiset.compressed
- **Size of downloaded dataset files:** 70.97 GB
- **Size of the generated dataset:** 78.42 GB
- **Total amount of disk used:** 163.71 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{'id': '1',
'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'],
'title': 'Aaron',
'embeddings': [-0.07233893871307373,
0.48035329580307007,
0.18650995194911957,
-0.5287084579467773,
-0.37329429388046265,
0.37622880935668945,
0.25524479150772095,
...
-0.336689829826355,
0.6313082575798035,
-0.7025573253631592]}
```
#### psgs_w100.multiset.exact
- **Size of downloaded dataset files:** 85.23 GB
- **Size of the generated dataset:** 78.42 GB
- **Total amount of disk used:** 163.71 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{'id': '1',
'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'],
'title': 'Aaron',
'embeddings': [-0.07233893871307373,
0.48035329580307007,
0.18650995194911957,
-0.5287084579467773,
-0.37329429388046265,
0.37622880935668945,
0.25524479150772095,
...
-0.336689829826355,
0.6313082575798035,
-0.7025573253631592]}
```
#### psgs_w100.multiset.no_index
- **Size of downloaded dataset files:** 85.23 GB
- **Size of the generated dataset:** 78.42 GB
- **Total amount of disk used:** 163.71 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{'id': '1',
'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'],
'title': 'Aaron',
'embeddings': [-0.07233893871307373,
0.48035329580307007,
0.18650995194911957,
-0.5287084579467773,
-0.37329429388046265,
0.37622880935668945,
0.25524479150772095,
...
-0.336689829826355,
0.6313082575798035,
-0.7025573253631592]}
```
#### psgs_w100.nq.compressed
- **Size of downloaded dataset files:** 85.23 GB
- **Size of the generated dataset:** 78.42 GB
- **Total amount of disk used:** 163.71 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{'id': '1',
'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'],
'title': 'Aaron',
'embeddings': [0.013342111371457577,
0.582173764705658,
-0.31309744715690613,
-0.6991612911224365,
-0.5583199858665466,
0.5187504887580872,
0.7152731418609619,
...
-0.5385938286781311,
0.8093984127044678,
-0.4741983711719513]}
```
#### psgs_w100.nq.exact
- **Size of downloaded dataset files:** 85.23 GB
- **Size of the generated dataset:** 78.42 GB
- **Total amount of disk used:** 163.71 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{'id': '1',
'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'],
'title': 'Aaron',
'embeddings': [0.013342111371457577,
0.582173764705658,
-0.31309744715690613,
-0.6991612911224365,
-0.5583199858665466,
0.5187504887580872,
0.7152731418609619,
...
-0.5385938286781311,
0.8093984127044678,
-0.4741983711719513]}
```
### Data Fields
The data fields are the same among all splits.
#### psgs_w100.multiset.compressed
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
#### psgs_w100.multiset.exact
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
#### psgs_w100.multiset.no_index
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
#### psgs_w100.nq.compressed
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
#### psgs_w100.nq.exact
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
### Data Splits
| name | train |
|-----------------------------|-------:|
|psgs_w100.multiset.compressed|21015300|
|psgs_w100.multiset.exact |21015300|
|psgs_w100.multiset.no_index |21015300|
|psgs_w100.nq.compressed |21015300|
|psgs_w100.nq.exact |21015300|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
DPR is CC-BY-NC 4.0 licensed: https://github.com/facebookresearch/DPR/blob/main/LICENSE
### Citation Information
```
@inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
ncoop57/mmmlu | ncoop57 | "2023-02-01T07:02:32Z" | 14,252 | 2 | [
"license:mit",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-01-24T23:50:14Z" | ---
license: mit
---
|
shenyunhang/AISHELL-4 | shenyunhang | "2025-03-23T03:13:25Z" | 14,224 | 0 | [
"license:apache-2.0",
"modality:audio",
"modality:text",
"arxiv:2104.03603",
"region:us"
] | null | "2025-02-11T09:04:56Z" | ---
license: apache-2.0
---
---
Data source:
https://openslr.org/111/
Audio recordings and transcriptions are extracted to wav and txt files.
---
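A minimal sketch for reading one extracted session is below; the exact directory layout and file names are assumptions here (the archives simply pair multi-channel wav files with txt transcripts), and `soundfile` is used for the 8-channel audio.
```python
from pathlib import Path
import soundfile as sf

# Hypothetical paths: point these at wherever the archives were extracted.
wav_path = Path("extracted/some_session.wav")
txt_path = wav_path.with_suffix(".txt")

audio, sample_rate = sf.read(wav_path)   # audio shape: (num_samples, num_channels)
transcript = txt_path.read_text(encoding="utf-8")

print(sample_rate, audio.shape)
print(transcript.splitlines()[:3])
```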
## AISHELL-4

- **Identifier:** SLR111
- **Summary:** A free Mandarin multi-channel meeting speech corpus, provided by Beijing Shell Shell Technology Co., Ltd.
- **Category:** Speech
- **License:** CC BY-SA 4.0

**Downloads (use a mirror closer to you):**

- [train_L.tar.gz](https://www.openslr.org/resources/111/train_L.tar.gz) [7.0G]: training set of large room, 8-channel microphone array speech. Mirrors: [US](https://us.openslr.org/resources/111/train_L.tar.gz), [EU](https://openslr.elda.org/resources/111/train_L.tar.gz), [CN](https://openslr.magicdatatech.com/resources/111/train_L.tar.gz)
- [train_M.tar.gz](https://www.openslr.org/resources/111/train_M.tar.gz) [25G]: training set of medium room, 8-channel microphone array speech. Mirrors: [US](https://us.openslr.org/resources/111/train_M.tar.gz), [EU](https://openslr.elda.org/resources/111/train_M.tar.gz), [CN](https://openslr.magicdatatech.com/resources/111/train_M.tar.gz)
- [train_S.tar.gz](https://www.openslr.org/resources/111/train_S.tar.gz) [14G]: training set of small room, 8-channel microphone array speech. Mirrors: [US](https://us.openslr.org/resources/111/train_S.tar.gz), [EU](https://openslr.elda.org/resources/111/train_S.tar.gz), [CN](https://openslr.magicdatatech.com/resources/111/train_S.tar.gz)
- [test.tar.gz](https://www.openslr.org/resources/111/test.tar.gz) [5.2G]: test set. Mirrors: [US](https://us.openslr.org/resources/111/test.tar.gz), [EU](https://openslr.elda.org/resources/111/test.tar.gz), [CN](https://openslr.magicdatatech.com/resources/111/test.tar.gz)

**About this resource:**

AISHELL-4 is a sizable real-recorded Mandarin speech dataset collected by an 8-channel circular microphone array for speech processing in conference scenarios. The dataset consists of 211 recorded meeting sessions, each containing 4 to 8 speakers, with a total length of 120 hours. This dataset aims to bridge advanced research on multi-speaker processing and the practical application scenario in three aspects. With real recorded meetings, AISHELL-4 provides realistic acoustics and rich natural speech characteristics in conversation, such as short pauses, speech overlap, quick speaker turns, and noise. Meanwhile, accurate transcription and speaker voice activity are provided for each meeting in AISHELL-4. This allows researchers to explore different aspects of meeting processing, ranging from individual tasks such as speech front-end processing, speech recognition, and speaker diarization, to multi-modality modeling and joint optimization of relevant tasks. We also release a PyTorch-based training and evaluation framework as a baseline system to promote reproducible research in this field. The baseline system code and generated samples are available [here](https://github.com/felixfuyihui/AISHELL-4).

You can cite the data using the following BibTeX entry:

```
@inproceedings{AISHELL-4_2021,
  title={AISHELL-4: An Open Source Dataset for Speech Enhancement, Separation, Recognition and Speaker Diarization in Conference Scenario},
  author={Yihui Fu, Luyao Cheng, Shubo Lv, Yukai Jv, Yuxiang Kong, Zhuo Chen, Yanxin Hu, Lei Xie, Jian Wu, Hui Bu, Xin Xu, Jun Du, Jingdong Chen},
  booktitle={Interspeech},
  url={https://arxiv.org/abs/2104.03603},
  year={2021}
}
```

**External URL:** [http://www.aishelltech.com/aishell_4](http://www.aishelltech.com/aishell_4) (full description from the company website)
|
togethercomputer/RedPajama-Data-1T-Sample | togethercomputer | "2023-07-19T06:59:10Z" | 14,211 | 126 | [
"task_categories:text-generation",
"language:en",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2023-04-16T23:12:30Z" | ---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T Sample
---
# Dataset Card for Dataset Name
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
This HuggingFace repo contains a 1B-token sample of the RedPajama dataset.
The full dataset has the following token counts and is available for [download]( https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T):
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
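As a small usage sketch (not an official loading recipe), the sample can be streamed with the `datasets` library; the fields follow the structure above:
```python
from datasets import load_dataset

ds = load_dataset(
    "togethercomputer/RedPajama-Data-1T-Sample",
    split="train",
    streaming=True,            # avoids materialising the whole sample locally
    # trust_remote_code=True,  # may be required on recent `datasets` versions
)

for example in ds.take(2):
    print(example["meta"])
    print(example["text"][:200])
```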
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in a preprocessed format in which hyperlinks, comments, and other
formatting boilerplate have been removed.
#### Gutenberg and Books3
The PG19 subset of the Gutenberg Project and Books3 datasets are downloaded from Huggingface. After downloading, we use
simhash to remove near duplicates.
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep latex source files and
remove preambles, comments, macros and bibliographies.
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
rwkv-x-dev/lm-eval-data | rwkv-x-dev | "2024-08-12T08:43:34Z" | 14,184 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-21T07:06:42Z" | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: "DOES_NOT_EXIST_TRAIN.csv"
- split: test
path: "DOES_NOT_EXIST_TEST.csv"
---
# LM Eval data
This repository represents a large collection of lm-eval results built using the [GitHub scripts found here](https://github.com/RWKV/lm-evaluation-harness)
The goal at the point of creation was to collect as many lm-eval results as possible, for as many relevant models as possible, for analysis in the RWKV paper.
## Cloning the repo, without the jsonl dump
```
export GIT_LFS_SKIP_SMUDGE=1
git clone https://huggingface.co/datasets/rwkv-x-dev/lm-eval-data.git
```
## Cloning the repo, with the archives (large)
```
# Clone with archive files
git lfs clone https://huggingface.co/datasets/rwkv-x-dev/lm-eval-data.git
# Decompress all the archive files
./scripts/decompress-all-jsonl.sh
``` |
jmhessel/newyorker_caption_contest | jmhessel | "2023-12-22T19:13:58Z" | 14,093 | 65 | [
"task_categories:image-to-text",
"task_categories:multiple-choice",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:visual-question-answering",
"task_categories:other",
"task_categories:text2text-generation",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"task_ids:visual-question-answering",
"task_ids:explanation-generation",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2209.06293",
"region:us",
"humor",
"caption contest",
"new yorker"
] | [
"image-to-text",
"multiple-choice",
"text-classification",
"text-generation",
"visual-question-answering",
"other",
"text2text-generation"
] | "2022-09-29T17:28:05Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
- found
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-to-text
- multiple-choice
- text-classification
- text-generation
- visual-question-answering
- other
- text2text-generation
task_ids:
- multi-class-classification
- language-modeling
- visual-question-answering
- explanation-generation
pretty_name: newyorker_caption_contest
tags:
- humor
- caption contest
- new yorker
dataset_info:
- config_name: explanation
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 133827514.64
num_examples: 2340
- name: validation
num_bytes: 8039885.0
num_examples: 130
- name: test
num_bytes: 6863533.0
num_examples: 131
download_size: 139737042
dataset_size: 148730932.64
- config_name: explanation_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 136614332.45999998
num_examples: 2358
- name: validation
num_bytes: 7911995.0
num_examples: 128
- name: test
num_bytes: 8039885.0
num_examples: 130
download_size: 134637839
dataset_size: 152566212.45999998
- config_name: explanation_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 138337491.342
num_examples: 2346
- name: validation
num_bytes: 7460490.0
num_examples: 132
- name: test
num_bytes: 7911995.0
num_examples: 128
download_size: 138271185
dataset_size: 153709976.342
- config_name: explanation_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 138247435.342
num_examples: 2334
- name: validation
num_bytes: 7911920.0
num_examples: 130
- name: test
num_bytes: 7460490.0
num_examples: 132
download_size: 136862726
dataset_size: 153619845.342
- config_name: explanation_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 141175335.3
num_examples: 2340
- name: validation
num_bytes: 6863533.0
num_examples: 131
- name: test
num_bytes: 7911920.0
num_examples: 130
download_size: 140501251
dataset_size: 155950788.3
- config_name: explanation_from_pixels
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 23039316.0
num_examples: 390
- name: validation
num_bytes: 7956182.0
num_examples: 130
- name: test
num_bytes: 6778892.0
num_examples: 131
download_size: 37552582
dataset_size: 37774390.0
- config_name: explanation_from_pixels_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 21986652.0
num_examples: 393
- name: validation
num_bytes: 7831556.0
num_examples: 128
- name: test
num_bytes: 7956182.0
num_examples: 130
download_size: 37534409
dataset_size: 37774390.0
- config_name: explanation_from_pixels_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 22566608.0
num_examples: 391
- name: validation
num_bytes: 7376225.0
num_examples: 132
- name: test
num_bytes: 7831556.0
num_examples: 128
download_size: 37544724
dataset_size: 37774389.0
- config_name: explanation_from_pixels_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 22566629.0
num_examples: 389
- name: validation
num_bytes: 7831536.0
num_examples: 130
- name: test
num_bytes: 7376225.0
num_examples: 132
download_size: 37573931
dataset_size: 37774390.0
- config_name: explanation_from_pixels_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 23163962.0
num_examples: 390
- name: validation
num_bytes: 6778892.0
num_examples: 131
- name: test
num_bytes: 7831536.0
num_examples: 130
download_size: 37582524
dataset_size: 37774390.0
- config_name: matching
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 618272766.36
num_examples: 9792
- name: validation
num_bytes: 34157757.0
num_examples: 531
- name: test
num_bytes: 29813118.0
num_examples: 528
download_size: 594460072
dataset_size: 682243641.36
- config_name: matching_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 593200158.116
num_examples: 9684
- name: validation
num_bytes: 36712942.0
num_examples: 546
- name: test
num_bytes: 34157757.0
num_examples: 531
download_size: 563587231
dataset_size: 664070857.116
- config_name: matching_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 591676321.09
num_examples: 9630
- name: validation
num_bytes: 33697178.0
num_examples: 540
- name: test
num_bytes: 36712942.0
num_examples: 546
download_size: 571864348
dataset_size: 662086441.09
- config_name: matching_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 615620189.53
num_examples: 9630
- name: validation
num_bytes: 34829502.0
num_examples: 546
- name: test
num_bytes: 33697178.0
num_examples: 540
download_size: 571744845
dataset_size: 684146869.53
- config_name: matching_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 609696610.648
num_examples: 9702
- name: validation
num_bytes: 29813118.0
num_examples: 528
- name: test
num_bytes: 34829502.0
num_examples: 546
download_size: 592174904
dataset_size: 674339230.648
- config_name: matching_from_pixels
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 101439044.384
num_examples: 1632
- name: validation
num_bytes: 33714551.0
num_examples: 531
- name: test
num_bytes: 29368704.0
num_examples: 528
download_size: 139733134
dataset_size: 164522299.384
- config_name: matching_from_pixels_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 94090646.83
num_examples: 1614
- name: validation
num_bytes: 36257141.0
num_examples: 546
- name: test
num_bytes: 33714551.0
num_examples: 531
download_size: 137278691
dataset_size: 164062338.82999998
- config_name: matching_from_pixels_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 96253584.505
num_examples: 1605
- name: validation
num_bytes: 33236000.0
num_examples: 540
- name: test
num_bytes: 36257141.0
num_examples: 546
download_size: 137890850
dataset_size: 165746725.505
- config_name: matching_from_pixels_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 99928910.28
num_examples: 1605
- name: validation
num_bytes: 34380303.0
num_examples: 546
- name: test
num_bytes: 33236000.0
num_examples: 540
download_size: 139585876
dataset_size: 167545213.28
- config_name: matching_from_pixels_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 102509197.79
num_examples: 1617
- name: validation
num_bytes: 29368704.0
num_examples: 528
- name: test
num_bytes: 34380303.0
num_examples: 546
download_size: 138725891
dataset_size: 166258204.79000002
- config_name: ranking
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 594615535.632
num_examples: 9576
- name: validation
num_bytes: 32624105.0
num_examples: 507
- name: test
num_bytes: 28907567.0
num_examples: 513
download_size: 571604579
dataset_size: 656147207.632
- config_name: ranking_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 580099188.9
num_examples: 9450
- name: validation
num_bytes: 35332200.0
num_examples: 534
- name: test
num_bytes: 32624105.0
num_examples: 507
download_size: 546559254
dataset_size: 648055493.9
- config_name: ranking_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 566811450.504
num_examples: 9306
- name: validation
num_bytes: 32519173.0
num_examples: 531
- name: test
num_bytes: 35332200.0
num_examples: 534
download_size: 544444097
dataset_size: 634662823.504
- config_name: ranking_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 577828323.272
num_examples: 9324
- name: validation
num_bytes: 34072817.0
num_examples: 531
- name: test
num_bytes: 32519173.0
num_examples: 531
download_size: 548880699
dataset_size: 644420313.272
- config_name: ranking_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 593388719.232
num_examples: 9432
- name: validation
num_bytes: 28907567.0
num_examples: 513
- name: test
num_bytes: 34072817.0
num_examples: 531
download_size: 562902941
dataset_size: 656369103.232
- config_name: ranking_from_pixels
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 101282973.752
num_examples: 1596
- name: validation
num_bytes: 32072331.0
num_examples: 506
- name: test
num_bytes: 28550057.0
num_examples: 513
download_size: 134283256
dataset_size: 161905361.752
- config_name: ranking_from_pixels_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 93123370.15
num_examples: 1575
- name: validation
num_bytes: 34965110.0
num_examples: 534
- name: test
num_bytes: 32072331.0
num_examples: 506
download_size: 130879365
dataset_size: 160160811.15
- config_name: ranking_from_pixels_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 93496576.85
num_examples: 1550
- name: validation
num_bytes: 32145436.0
num_examples: 531
- name: test
num_bytes: 34965110.0
num_examples: 534
download_size: 131637359
dataset_size: 160607122.85
- config_name: ranking_from_pixels_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 93840620.26
num_examples: 1553
- name: validation
num_bytes: 33718821.0
num_examples: 531
- name: test
num_bytes: 32145436.0
num_examples: 531
download_size: 133214495
dataset_size: 159704877.26
- config_name: ranking_from_pixels_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 99008131.43
num_examples: 1571
- name: validation
num_bytes: 28550057.0
num_examples: 513
- name: test
num_bytes: 33718821.0
num_examples: 531
download_size: 136230399
dataset_size: 161277009.43
configs:
- config_name: explanation
data_files:
- split: train
path: explanation/train-*
- split: validation
path: explanation/validation-*
- split: test
path: explanation/test-*
- config_name: explanation_1
data_files:
- split: train
path: explanation_1/train-*
- split: validation
path: explanation_1/validation-*
- split: test
path: explanation_1/test-*
- config_name: explanation_2
data_files:
- split: train
path: explanation_2/train-*
- split: validation
path: explanation_2/validation-*
- split: test
path: explanation_2/test-*
- config_name: explanation_3
data_files:
- split: train
path: explanation_3/train-*
- split: validation
path: explanation_3/validation-*
- split: test
path: explanation_3/test-*
- config_name: explanation_4
data_files:
- split: train
path: explanation_4/train-*
- split: validation
path: explanation_4/validation-*
- split: test
path: explanation_4/test-*
- config_name: explanation_from_pixels
data_files:
- split: train
path: explanation_from_pixels/train-*
- split: validation
path: explanation_from_pixels/validation-*
- split: test
path: explanation_from_pixels/test-*
- config_name: explanation_from_pixels_1
data_files:
- split: train
path: explanation_from_pixels_1/train-*
- split: validation
path: explanation_from_pixels_1/validation-*
- split: test
path: explanation_from_pixels_1/test-*
- config_name: explanation_from_pixels_2
data_files:
- split: train
path: explanation_from_pixels_2/train-*
- split: validation
path: explanation_from_pixels_2/validation-*
- split: test
path: explanation_from_pixels_2/test-*
- config_name: explanation_from_pixels_3
data_files:
- split: train
path: explanation_from_pixels_3/train-*
- split: validation
path: explanation_from_pixels_3/validation-*
- split: test
path: explanation_from_pixels_3/test-*
- config_name: explanation_from_pixels_4
data_files:
- split: train
path: explanation_from_pixels_4/train-*
- split: validation
path: explanation_from_pixels_4/validation-*
- split: test
path: explanation_from_pixels_4/test-*
- config_name: matching
data_files:
- split: train
path: matching/train-*
- split: validation
path: matching/validation-*
- split: test
path: matching/test-*
- config_name: matching_1
data_files:
- split: train
path: matching_1/train-*
- split: validation
path: matching_1/validation-*
- split: test
path: matching_1/test-*
- config_name: matching_2
data_files:
- split: train
path: matching_2/train-*
- split: validation
path: matching_2/validation-*
- split: test
path: matching_2/test-*
- config_name: matching_3
data_files:
- split: train
path: matching_3/train-*
- split: validation
path: matching_3/validation-*
- split: test
path: matching_3/test-*
- config_name: matching_4
data_files:
- split: train
path: matching_4/train-*
- split: validation
path: matching_4/validation-*
- split: test
path: matching_4/test-*
- config_name: matching_from_pixels
data_files:
- split: train
path: matching_from_pixels/train-*
- split: validation
path: matching_from_pixels/validation-*
- split: test
path: matching_from_pixels/test-*
- config_name: matching_from_pixels_1
data_files:
- split: train
path: matching_from_pixels_1/train-*
- split: validation
path: matching_from_pixels_1/validation-*
- split: test
path: matching_from_pixels_1/test-*
- config_name: matching_from_pixels_2
data_files:
- split: train
path: matching_from_pixels_2/train-*
- split: validation
path: matching_from_pixels_2/validation-*
- split: test
path: matching_from_pixels_2/test-*
- config_name: matching_from_pixels_3
data_files:
- split: train
path: matching_from_pixels_3/train-*
- split: validation
path: matching_from_pixels_3/validation-*
- split: test
path: matching_from_pixels_3/test-*
- config_name: matching_from_pixels_4
data_files:
- split: train
path: matching_from_pixels_4/train-*
- split: validation
path: matching_from_pixels_4/validation-*
- split: test
path: matching_from_pixels_4/test-*
- config_name: ranking
data_files:
- split: train
path: ranking/train-*
- split: validation
path: ranking/validation-*
- split: test
path: ranking/test-*
- config_name: ranking_1
data_files:
- split: train
path: ranking_1/train-*
- split: validation
path: ranking_1/validation-*
- split: test
path: ranking_1/test-*
- config_name: ranking_2
data_files:
- split: train
path: ranking_2/train-*
- split: validation
path: ranking_2/validation-*
- split: test
path: ranking_2/test-*
- config_name: ranking_3
data_files:
- split: train
path: ranking_3/train-*
- split: validation
path: ranking_3/validation-*
- split: test
path: ranking_3/test-*
- config_name: ranking_4
data_files:
- split: train
path: ranking_4/train-*
- split: validation
path: ranking_4/validation-*
- split: test
path: ranking_4/test-*
- config_name: ranking_from_pixels
data_files:
- split: train
path: ranking_from_pixels/train-*
- split: validation
path: ranking_from_pixels/validation-*
- split: test
path: ranking_from_pixels/test-*
- config_name: ranking_from_pixels_1
data_files:
- split: train
path: ranking_from_pixels_1/train-*
- split: validation
path: ranking_from_pixels_1/validation-*
- split: test
path: ranking_from_pixels_1/test-*
- config_name: ranking_from_pixels_2
data_files:
- split: train
path: ranking_from_pixels_2/train-*
- split: validation
path: ranking_from_pixels_2/validation-*
- split: test
path: ranking_from_pixels_2/test-*
- config_name: ranking_from_pixels_3
data_files:
- split: train
path: ranking_from_pixels_3/train-*
- split: validation
path: ranking_from_pixels_3/validation-*
- split: test
path: ranking_from_pixels_3/test-*
- config_name: ranking_from_pixels_4
data_files:
- split: train
path: ranking_from_pixels_4/train-*
- split: validation
path: ranking_from_pixels_4/validation-*
- split: test
path: ranking_from_pixels_4/test-*
---
# Dataset Card for New Yorker Caption Contest Benchmarks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [capcon.dev](https://www.capcon.dev)
- **Repository:** [https://github.com/jmhessel/caption_contest_corpus](https://github.com/jmhessel/caption_contest_corpus)
- **Paper:** [Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293)
- **Leaderboard:** https://leaderboard.allenai.org/nycc-matching/
- **Point of Contact:** [email protected]
### Dataset Summary
See [capcon.dev](https://www.capcon.dev) for more!
Data from:
[Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293)
```
@inproceedings{hessel2023androids,
title={Do Androids Laugh at Electric Sheep? {Humor} ``Understanding''
Benchmarks from {The New Yorker Caption Contest}},
author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D. and Lee, Lillian
and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin},
booktitle={Proceedings of the ACL},
year={2023}
}
```
If you use this dataset, we would appreciate you citing our work, as well as several other papers that we build this corpus upon. See [Citation Information](#citation-information).
We challenge AI models to "demonstrate understanding" of the
sophisticated multimodal humor of The New Yorker Caption Contest.
Concretely, we develop three carefully circumscribed tasks for which
it suffices (but is not necessary) to grasp potentially complex and
unexpected relationships between image and caption, and similarly
complex and unexpected allusions to the wide varieties of human
experience.
### Supported Tasks and Leaderboards
Three tasks are supported:
- "Matching:" a model must recognize a caption written about a cartoon (vs. options that were not);
- "Quality ranking:" a model must evaluate the quality of a caption by scoring it more highly than a lower quality option from the same contest;
- "Explanation:" a model must explain why a given joke is funny.
There are no official leaderboards (yet).
### Languages
English
## Dataset Structure
Here's an example instance from Matching:
```
{'caption_choices': ['Tell me about your childhood very quickly.',
"Believe me . . . it's what's UNDER the ground that's "
'most interesting.',
"Stop me if you've heard this one.",
'I have trouble saying no.',
'Yes, I see the train but I think we can beat it.'],
'contest_number': 49,
'entities': ['https://en.wikipedia.org/wiki/Rule_of_three_(writing)',
'https://en.wikipedia.org/wiki/Bar_joke',
'https://en.wikipedia.org/wiki/Religious_institute'],
'from_description': 'scene: a bar description: Two priests and a rabbi are '
'walking into a bar, as the bartender and another patron '
'look on. The bartender talks on the phone while looking '
'skeptically at the incoming crew. uncanny: The scene '
'depicts a very stereotypical "bar joke" that would be '
'unlikely to be encountered in real life; the skepticism '
'of the bartender suggests that he is aware he is seeing '
'this trope, and is explaining it to someone on the '
'phone. entities: Rule_of_three_(writing), Bar_joke, '
'Religious_institute. choices A: Tell me about your '
"childhood very quickly. B: Believe me . . . it's what's "
"UNDER the ground that's most interesting. C: Stop me if "
"you've heard this one. D: I have trouble saying no. E: "
'Yes, I see the train but I think we can beat it.',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=323x231 at 0x7F34F283E9D0>,
'image_description': 'Two priests and a rabbi are walking into a bar, as the '
'bartender and another patron look on. The bartender '
'talks on the phone while looking skeptically at the '
'incoming crew.',
'image_location': 'a bar',
'image_uncanny_description': 'The scene depicts a very stereotypical "bar '
'joke" that would be unlikely to be encountered '
'in real life; the skepticism of the bartender '
'suggests that he is aware he is seeing this '
'trope, and is explaining it to someone on the '
'phone.',
'instance_id': '21125bb8787b4e7e82aa3b0a1cba1571',
'label': 'C',
'n_tokens_label': 1,
'questions': ['What is the bartender saying on the phone in response to the '
'living, breathing, stereotypical bar joke that is unfolding?']}
```
The label "C" indicates that the 3rd choice in the `caption_choices` list is correct.
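If needed, the letter label can be mapped back to the selected caption with a couple of lines of Python (a minimal sketch; `dset` is assumed to have been loaded with `load_dataset` as shown in [Data Splits](#data-splits)):
```
# Map the letter label ("A", "B", ...) to the corresponding entry in caption_choices.
instance = dset["train"][0]
correct_caption = instance["caption_choices"][ord(instance["label"]) - ord("A")]
print(correct_caption)
```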
Here's an example instance from Ranking (in the from-pixels setting; this task is also available in the from-description setting):
```
{'caption_choices': ['I guess I misunderstood when you said long bike ride.',
'Does your divorce lawyer have any other cool ideas?'],
'contest_number': 582,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=600x414 at 0x7F8FF9F96610>,
'instance_id': 'dd1c214a1ca3404aa4e582c9ce50795a',
'label': 'A',
'n_tokens_label': 1,
'winner_source': 'official_winner'}
```
The label indicates that the first caption choice ("A", here) in the `caption_choices` list was more highly rated.
Here's an example instance from Explanation:
```
{'caption_choices': 'The classics can be so intimidating.',
'contest_number': 752,
'entities': ['https://en.wikipedia.org/wiki/Literature',
'https://en.wikipedia.org/wiki/Solicitor'],
'from_description': 'scene: a road description: Two people are walking down a '
'path. A number of giant books have surrounded them. '
'uncanny: There are book people in this world. entities: '
'Literature, Solicitor. caption: The classics can be so '
'intimidating.',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=800x706 at 0x7F90003D0BB0>,
'image_description': 'Two people are walking down a path. A number of giant '
'books have surrounded them.',
'image_location': 'a road',
'image_uncanny_description': 'There are book people in this world.',
'instance_id': 'eef9baf450e2fab19b96facc128adf80',
'label': 'A play on the word intimidating --- usually if the classics (i.e., '
'classic novels) were to be intimidating, this would mean that they '
'are intimidating to read due to their length, complexity, etc. But '
'here, they are surrounded by anthropomorphic books which look '
'physically intimidating, i.e., they are intimidating because they '
'may try to beat up these people.',
'n_tokens_label': 59,
'questions': ['What do the books want?']}
```
The label is an explanation of the joke, which serves as the autoregressive target.
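For reference, one plausible way to form input/target pairs for this task in the from-description setting is sketched below; the exact prompting used in the paper may differ, so this only illustrates which fields play which role:
```
# Sketch: a seq2seq-style training pair for the explanation task (from-description setting).
instance = dset["train"][0]                 # assumes dset = load_dataset(..., "explanation")
source_text = instance["from_description"]  # scene / description / uncanny / entities + caption
target_text = instance["label"]             # free-form explanation of the joke
```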
### Data Instances
See above
### Data Fields
See above
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
dset = load_dataset("jmhessel/newyorker_caption_contest", "matching")
dset = load_dataset("jmhessel/newyorker_caption_contest", "ranking")
dset = load_dataset("jmhessel/newyorker_caption_contest", "explanation")
```
Or, in the from pixels setting, e.g.,
```
from datasets import load_dataset
dset = load_dataset("jmhessel/newyorker_caption_contest", "ranking_from_pixels")
```
Because the dataset is small, we initially reported results in a 5-fold cross-validation setting. The default configurations correspond to split 0. You can access the other splits, e.g.:
```
from datasets import load_dataset
# the 4th data split
dset = load_dataset("jmhessel/newyorker_caption_contest", "explanation_4")
```
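To evaluate over all five splits rather than just the default one, a loop like the following works (split 0 uses the bare config name; splits 1-4 use the suffixed names, e.g. `explanation_4` above):
```
from datasets import load_dataset

# Iterate over the five cross-validation splits of the explanation task.
for suffix in ["", "_1", "_2", "_3", "_4"]:
    dset = load_dataset("jmhessel/newyorker_caption_contest", f"explanation{suffix}")
```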
## Dataset Creation
Full details are in the paper.
### Curation Rationale
See the paper for rationale/motivation.
### Source Data
See citation below. We combined 3 sources of data, and added significant annotations of our own.
#### Initial Data Collection and Normalization
Full details are in the paper.
#### Who are the source language producers?
We paid crowdworkers $15/hr to annotate the corpus.
In addition, significant annotation efforts were conducted by the authors of this work.
### Annotations
Full details are in the paper.
#### Annotation process
Full details are in the paper.
#### Who are the annotators?
A mix of crowdworkers and authors of this paper.
### Personal and Sensitive Information
Has been redacted from the dataset. Images are published in the New Yorker already.
## Considerations for Using the Data
### Social Impact of Dataset
It's plausible that humor could perpetuate negative stereotypes. The jokes in this corpus are a mix of crowdsourced entries that are highly rated, and ones published in The New Yorker.
### Discussion of Biases
Humor is subjective, and some of the jokes may be considered offensive. The images may contain adult themes and minor cartoon nudity.
### Other Known Limitations
More details are in the paper.
## Additional Information
### Dataset Curators
The dataset was curated by researchers at AI2.
### Licensing Information
The annotations we provide are CC-BY-4.0. See www.capcon.dev for more info.
### Citation Information
```
@article{hessel2022androids,
title={Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest},
author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D and Lee, Lillian and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin},
journal={arXiv preprint arXiv:2209.06293},
year={2022}
}
```
Our data contributions are:
- The cartoon-level annotations;
- The joke explanations;
- and the framing of the tasks
We release the data we contribute under CC-BY (see DATASET_LICENSE). If you find this data useful in your work, in addition to citing our contributions, please also cite the following, from which the cartoons/captions in our corpus are derived:
```
@misc{newyorkernextmldataset,
author={Jain, Lalit and Jamieson, Kevin and Mankoff, Robert and Nowak, Robert and Sievert, Scott},
title={The {N}ew {Y}orker Cartoon Caption Contest Dataset},
year={2020},
url={https://nextml.github.io/caption-contest-data/}
}
@inproceedings{radev-etal-2016-humor,
title = "Humor in Collective Discourse: Unsupervised Funniness Detection in The {New Yorker} Cartoon Caption Contest",
author = "Radev, Dragomir and
Stent, Amanda and
Tetreault, Joel and
Pappu, Aasish and
Iliakopoulou, Aikaterini and
Chanfreau, Agustin and
de Juan, Paloma and
Vallmitjana, Jordi and
Jaimes, Alejandro and
Jha, Rahul and
Mankoff, Robert",
booktitle = "LREC",
year = "2016",
}
@inproceedings{shahaf2015inside,
title={Inside jokes: Identifying humorous cartoon captions},
author={Shahaf, Dafna and Horvitz, Eric and Mankoff, Robert},
booktitle={KDD},
year={2015},
}
``` |
Open-Orca/FLAN | Open-Orca | "2023-08-02T15:08:01Z" | 14,084 | 176 | [
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2301.13688",
"arxiv:2109.01652",
"arxiv:2110.08207",
"arxiv:2204.07705",
"region:us"
] | null | "2023-07-21T13:45:12Z" | ---
license: cc-by-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- Open-Orca/OpenOrca
size_categories:
- 1B<n<10B
---
<p><h1>🍮 The WHOLE FLAN Collection! 🍮</h1></p>

# Overview
This repository includes the full dataset from the [FLAN Collection](https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html), totalling ~300GB as parquets.
Generated using the official seqio templating from the [Google FLAN Collection GitHub repo](https://github.com/google-research/FLAN/tree/main/flan/v2).
The data is subject to all the same licensing of the component datasets.
To keep up with our continued work on OpenOrca and other exciting research, find our Discord here:
https://AlignmentLab.ai
# Motivation
This work was done as part of the requirements for the OpenOrca project.
There was not a large enough subset of the FLAN Collection generated publicly to subsample from to complete the work.
So, we opted to process the entire collection ourselves.
Generating this requires an understanding of seqio and a Linux server with 512GB of CPU RAM, as well as fast drives and custom limits for many parameters beyond the defaults on Linux server distributions (e.g., requiring up to 45,000 threads running at once).
It takes downloading over 400GB of datasets, working around tfds bugs, and then processing the datasets over the course of several days.
We provide this repo as a resource to other ML researchers, as it saves the time-consuming and laborious steps of getting the data into a more accessible format for further consumption.
# Data
## Organization
* JSON files at top level are used for subsampling in OpenOrca
* Parquets in subdirectories contain the entire FLAN collection in Dask-sharded folders by submix fractions
## Zero-Shot vs Few-Shot and Options vs No-Options
The core sub-collections of FLAN are `CoT`, `Dialog`, `NIv2`, `T0`, and `flan2021`.
Within those sub-collections are four "remixes" of the data that are templated differently:
* `Zero-Shot` and `Few-Shot`
  * `Zero-Shot` provides a prompt, question, or challenge without any prior exemplars
  * `Few-Shot` provides exemplars first
* `Options` and `No-Options`
  * `Options` provides a question or challenge with multiple-choice (e.g. A/B/C/D) answer options to select from
* `No-Options` requires a free-form answer
For every sub-collection, only some of the "remixes" may officially be provided. All available remixes have been generated in full without any redaction or sub-sampling.
An example: the `t0_fsopt_data` folder contains the sub-collection `T0`'s Few-Shot (FS), Options (OPT) remix set.
Notably, this is the largest "remix" and the one that necessitates 512GB of CPU RAM to generate. The raw JSON output is nearly 200GB.
## Parquet Sizes
Each sub-collection's individual remixes are provided as [Parquet](https://huggingface.co/docs/datasets/loading#parquet) files which have been sharded by [Dask](https://huggingface.co/docs/datasets/main/en/filesystems#dask) into ~160MB chunks (starting from 256MB blocks of the source jsonl files).
The folder structure along with size sums is provided below.
```
$ du -h --max-depth=1 ./
9.1G ./niv2_fsopt_data
2.4G ./niv2_zsopt_data
59G ./flan_fsopt_data
984M ./dialog_zsopt_data
11G ./flan_zsopt_data
8.6G ./dialog_fsopt_data
16G ./t0_zsnoopt_data
149M ./cot_fsopt_data
20M ./cot_zsopt_data
17G ./t0_zsopt_data
11G ./flan_zsnoopt_data
101G ./t0_fsopt_data
25G ./flan_fsnoopt_data
39G ./t0_fsnoopt_data
296G ./
```
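If you only need a single remix, its sharded parquets can be read directly with the generic parquet loader. The snippet below is a minimal sketch that assumes a local clone of this repository; `cot_zsopt_data` is simply the smallest remix folder from the listing above, not a special entry point:
```
from datasets import load_dataset

# Load every parquet shard of one remix folder (here: CoT, zero-shot, options).
ds = load_dataset(
    "parquet",
    data_files="cot_zsopt_data/*.parquet",
    split="train",
)
print(ds[0])
```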
# Citations
```bibtex
@misc{goodson2023huggyflan,
title={Fine FLAN: Seqio to Parquet So You Don't Have To},
author={Bleys Goodson},
year={2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
      howpublished = {\url{https://huggingface.co/datasets/Open-Orca/FLAN}},
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{wei2022finetuned,
title={Finetuned Language Models Are Zero-Shot Learners},
author={Jason Wei and Maarten Bosma and Vincent Y. Zhao and Kelvin Guu and Adams Wei Yu and Brian Lester and Nan Du and Andrew M. Dai and Quoc V. Le},
year={2022},
eprint={2109.01652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{sanh2022multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Tali Bers and Stella Biderman and Leo Gao and Thomas Wolf and Alexander M. Rush},
year={2022},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
```bibtex
@misc{wang2022supernaturalinstructions,
title={Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and Anjana Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and Mehrad Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddhartha Mishra and Sujan Reddy and Sumanta Patro and Tanay Dixit and Xudong Shen and Chitta Baral and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi and Daniel Khashabi},
year={2022},
eprint={2204.07705},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
OpenLLM-France/Lucie-Training-Dataset | OpenLLM-France | "2025-03-19T13:02:08Z" | 14,056 | 20 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_ids:language-modeling",
"multilinguality:multilingual",
"language:en",
"language:fr",
"language:de",
"language:es",
"language:it",
"language:code",
"license:cc-by-nc-sa-4.0",
"size_categories:10B<n<100B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2308.12477",
"arxiv:2311.16840",
"arxiv:2402.00786",
"arxiv:1905.10892",
"arxiv:1906.02192",
"arxiv:2108.01139",
"arxiv:2010.12871",
"arxiv:2406.17557",
"arxiv:2312.17120",
"arxiv:2201.07311",
"arxiv:1904.01557",
"arxiv:2101.00027",
"arxiv:2211.15533",
"arxiv:2503.12294",
"region:us",
"text-generation",
"conditional-text-generation"
] | [
"text-generation",
"text2text-generation"
] | "2024-10-16T10:46:27Z" | ---
pretty_name: Lucie Training Dataset
license: cc-by-nc-sa-4.0
language:
- en
- fr
- de
- es
- it
- code
multilinguality:
- multilingual
task_categories:
- text-generation
- text2text-generation
task_ids:
- language-modeling
tags:
- text-generation
- conditional-text-generation
size_categories:
- n>1T
viewer: true
configs:
- config_name: default
data_files:
- path: data/v*/*/*/*/*parquet
split: train
- config_name: en
data_files:
- path: data/v*/natural/en/*/*parquet
split: train
- config_name: fr
data_files:
- path: data/v*/natural/fr/*/*parquet
split: train
- config_name: de
data_files:
- path: data/v*/natural/de/*/*parquet
split: train
- config_name: es
data_files:
- path: data/v*/natural/es/*/*parquet
split: train
- config_name: it
data_files:
- path: data/v*/natural/it/*/*parquet
split: train
- config_name: de,fr
data_files:
- path: data/v*/natural/de-fr/*/*.parquet
split: train
- config_name: es,en
data_files:
- path: data/v*/natural/es-en/*/*.parquet
split: train
- config_name: fr,en
data_files:
- path: data/v*/natural/fr-en/*/*.parquet
split: train
- config_name: it,en
data_files:
- path: data/v*/natural/it-en/*/*.parquet
split: train
- config_name: natural
data_files:
- path: data/v*/natural/*/*/*.parquet
split: train
- config_name: code
data_files:
- path: data/v*/code/*/*/*parquet
split: train
- config_name: code-assembly
data_files:
- path: data/v*/code/assembly/*/*.parquet
split: train
- config_name: code-c
data_files:
- path: data/v*/code/c/*/*.parquet
split: train
- config_name: code-c#
data_files:
- path: data/v*/code/c#/*/*.parquet
split: train
- config_name: code-c++
data_files:
- path: data/v*/code/c++/*/*.parquet
split: train
- config_name: code-clojure
data_files:
- path: data/v*/code/clojure/*/*.parquet
split: train
- config_name: code-dart
data_files:
- path: data/v*/code/dart/*/*.parquet
split: train
- config_name: code-elixir
data_files:
- path: data/v*/code/elixir/*/*.parquet
split: train
- config_name: code-erlang
data_files:
- path: data/v*/code/erlang/*/*.parquet
split: train
- config_name: code-fortran
data_files:
- path: data/v*/code/fortran/*/*.parquet
split: train
- config_name: code-go
data_files:
- path: data/v*/code/go/*/*.parquet
split: train
- config_name: code-haskell
data_files:
- path: data/v*/code/haskell/*/*.parquet
split: train
- config_name: code-java
data_files:
- path: data/v*/code/java/*/*.parquet
split: train
- config_name: code-javascript
data_files:
- path: data/v*/code/javascript/*/*.parquet
split: train
- config_name: code-julia
data_files:
- path: data/v*/code/julia/*/*.parquet
split: train
- config_name: code-kotlin
data_files:
- path: data/v*/code/kotlin/*/*.parquet
split: train
- config_name: code-lua
data_files:
- path: data/v*/code/lua/*/*.parquet
split: train
- config_name: code-mathematica
data_files:
- path: data/v*/code/mathematica/*/*.parquet
split: train
- config_name: code-matlab
data_files:
- path: data/v*/code/matlab/*/*.parquet
split: train
- config_name: code-ocaml
data_files:
- path: data/v*/code/ocaml/*/*.parquet
split: train
- config_name: code-perl
data_files:
- path: data/v*/code/perl/*/*.parquet
split: train
- config_name: code-php
data_files:
- path: data/v*/code/php/*/*.parquet
split: train
- config_name: code-python
data_files:
- path: data/v*/code/python/*/*.parquet
split: train
- config_name: code-r
data_files:
- path: data/v*/code/r/*/*.parquet
split: train
- config_name: code-racket
data_files:
- path: data/v*/code/racket/*/*.parquet
split: train
- config_name: code-ruby
data_files:
- path: data/v*/code/ruby/*/*.parquet
split: train
- config_name: code-rust
data_files:
- path: data/v*/code/rust/*/*.parquet
split: train
- config_name: code-scala
data_files:
- path: data/v*/code/scala/*/*.parquet
split: train
- config_name: code-swift
data_files:
- path: data/v*/code/swift/*/*.parquet
split: train
- config_name: code-tex
data_files:
- path: data/v*/code/tex/*/*.parquet
split: train
- config_name: code-typescript
data_files:
- path: data/v*/code/typescript/*/*.parquet
split: train
- config_name: AmendementsParlement
data_files:
- path: data/v*/natural/*/AmendementsParlement/*.parquet
split: train
- config_name: AmericanStories
data_files:
- path: data/v*/natural/*/AmericanStories/*.parquet
split: train
- config_name: Claire
data_files:
- path: data/v*/natural/*/Claire/*.parquet
split: train
- config_name: Claire-en
data_files:
- path: data/v*/natural/en/Claire/*.parquet
split: train
- config_name: Claire-fr
data_files:
- path: data/v*/natural/fr/Claire/*.parquet
split: train
- config_name: CroissantAligned
data_files:
- path: data/v*/natural/*/CroissantAligned/*.parquet
split: train
- config_name: DiscoursPublics
data_files:
- path: data/v*/natural/*/DiscoursPublics/*.parquet
split: train
- config_name: Europarl
data_files:
- path: data/v*/natural/*/Europarl/*.parquet
split: train
- config_name: Europarl-de
data_files:
- path: data/v*/natural/de/Europarl/*.parquet
split: train
- config_name: Europarl-en
data_files:
- path: data/v*/natural/en/Europarl/*.parquet
split: train
- config_name: Europarl-es
data_files:
- path: data/v*/natural/es/Europarl/*.parquet
split: train
- config_name: Europarl-fr
data_files:
- path: data/v*/natural/fr/Europarl/*.parquet
split: train
- config_name: EuroparlAligned
data_files:
- path: data/v*/natural/*/EuroparlAligned/*.parquet
split: train
- config_name: EuroparlAligned-de,fr
data_files:
- path: data/v*/natural/de-fr/EuroparlAligned/*.parquet
split: train
- config_name: EuroparlAligned-es,en
data_files:
- path: data/v*/natural/es-en/EuroparlAligned/*.parquet
split: train
- config_name: EuroparlAligned-fr,en
data_files:
- path: data/v*/natural/fr-en/EuroparlAligned/*.parquet
split: train
- config_name: EuroparlAligned-it,en
data_files:
- path: data/v*/natural/it-en/EuroparlAligned/*.parquet
split: train
- config_name: Eurovoc
data_files:
- path: data/v*/natural/*/Eurovoc/*.parquet
split: train
- config_name: Eurovoc-de
data_files:
- path: data/v*/natural/de/Eurovoc/*.parquet
split: train
- config_name: Eurovoc-en
data_files:
- path: data/v*/natural/en/Eurovoc/*.parquet
split: train
- config_name: Eurovoc-es
data_files:
- path: data/v*/natural/es/Eurovoc/*.parquet
split: train
- config_name: Eurovoc-it
data_files:
- path: data/v*/natural/it/Eurovoc/*.parquet
split: train
- config_name: FineWebEdu
data_files:
- path: data/v*/natural/*/FineWebEdu/*.parquet
split: train
- config_name: GallicaMonographies
data_files:
- path: data/v*/natural/*/GallicaMonographies/*.parquet
split: train
- config_name: GallicaPress
data_files:
- path: data/v*/natural/*/GallicaPress/*.parquet
split: train
- config_name: Gutenberg
data_files:
- path: data/v*/natural/*/Gutenberg/*.parquet
split: train
- config_name: Gutenberg-de
data_files:
- path: data/v*/natural/de/Gutenberg/*.parquet
split: train
- config_name: Gutenberg-en
data_files:
- path: data/v*/natural/en/Gutenberg/*.parquet
split: train
- config_name: Gutenberg-es
data_files:
- path: data/v*/natural/es/Gutenberg/*.parquet
split: train
- config_name: Gutenberg-fr
data_files:
- path: data/v*/natural/fr/Gutenberg/*.parquet
split: train
- config_name: Gutenberg-it
data_files:
- path: data/v*/natural/it/Gutenberg/*.parquet
split: train
- config_name: HAL
data_files:
- path: data/v*/natural/*/HAL/*.parquet
split: train
- config_name: InterventionsParlement
data_files:
- path: data/v*/natural/*/InterventionsParlement/*.parquet
split: train
- config_name: LEGI
data_files:
- path: data/v*/natural/*/LEGI/*.parquet
split: train
- config_name: MathPile
data_files:
- path: data/v*/natural/*/MathPile/*.parquet
split: train
- config_name: OpenData
data_files:
- path: data/v*/natural/*/OpenData/*.parquet
split: train
- config_name: OpenEdition
data_files:
- path: data/v*/natural/*/OpenEdition/*.parquet
split: train
- config_name: PeS2o
data_files:
- path: data/v*/natural/*/PeS2o/*.parquet
split: train
- config_name: PeS2o-s2ag
data_files:
- path: data/v*/natural/*/PeS2o/*s2ag.parquet
split: train
- config_name: PeS2o-s2orc
data_files:
- path: data/v*/natural/*/PeS2o/*s2orc.parquet
split: train
- config_name: Pile
data_files:
- path: data/v*/natural/*/Pile/*.parquet
split: train
- config_name: Pile-DM_Mathematics
data_files:
- path: data/v*/natural/*/Pile/*DM_Mathematics.parquet
split: train
- config_name: Pile-FreeLaw
data_files:
- path: data/v*/natural/*/Pile/*FreeLaw.parquet
split: train
- config_name: Pile-NIH_ExPorter
data_files:
- path: data/v*/natural/*/Pile/*NIH_ExPorter.parquet
split: train
- config_name: Pile-PhilPapers
data_files:
- path: data/v*/natural/*/Pile/*PhilPapers.parquet
split: train
- config_name: Pile-StackExchange
data_files:
- path: data/v*/natural/*/Pile/*StackExchange.parquet
split: train
- config_name: Pile-USPTO_Backgrounds
data_files:
- path: data/v*/natural/*/Pile/*USPTO_Backgrounds.parquet
split: train
- config_name: Pile-Ubuntu_IRC
data_files:
- path: data/v*/natural/*/Pile/*Ubuntu_IRC.parquet
split: train
- config_name: QuestionsEcritesParlement
data_files:
- path: data/v*/natural/*/QuestionsEcritesParlement/*.parquet
split: train
- config_name: RedPajama
data_files:
- path: data/v*/natural/*/RedPajama/*.parquet
split: train
- config_name: RedPajama-de
data_files:
- path: data/v*/natural/de/RedPajama/*.parquet
split: train
- config_name: RedPajama-es
data_files:
- path: data/v*/natural/es/RedPajama/*.parquet
split: train
- config_name: RedPajama-fr
data_files:
- path: data/v*/natural/fr/RedPajama/*.parquet
split: train
- config_name: RedPajama-it
data_files:
- path: data/v*/natural/it/RedPajama/*.parquet
split: train
- config_name: Stac
data_files:
- path: data/v*/natural/*/Stac/*.parquet
split: train
- config_name: TheStack
data_files:
- path: data/v*/code/*/TheStack/*.parquet
split: train
- config_name: Theses
data_files:
- path: data/v*/natural/*/Theses/*.parquet
split: train
- config_name: Wikipedia
data_files:
- path: data/v*/natural/*/Wikipedia/*.parquet
split: train
- config_name: Wikipedia-de
data_files:
- path: data/v*/natural/de/Wikipedia/*.parquet
split: train
- config_name: Wikipedia-en
data_files:
- path: data/v*/natural/en/Wikipedia/*.parquet
split: train
- config_name: Wikipedia-es
data_files:
- path: data/v*/natural/es/Wikipedia/*.parquet
split: train
- config_name: Wikipedia-fr
data_files:
- path: data/v*/natural/fr/Wikipedia/*.parquet
split: train
- config_name: Wikipedia-it
data_files:
- path: data/v*/natural/it/Wikipedia/*.parquet
split: train
- config_name: Wikisource
data_files:
- path: data/v*/natural/*/Wikisource/*.parquet
split: train
- config_name: Wiktionary
data_files:
- path: data/v*/natural/*/Wiktionary/*.parquet
split: train
- config_name: YouTube
data_files:
- path: data/v*/natural/*/YouTube/*.parquet
split: train
---
# Lucie Training Dataset Card
The Lucie Training Dataset is a curated collection of text data
in English, French, German, Spanish and Italian culled from a variety of sources including: web data, video subtitles, academic papers,
digital books, newspapers, and magazines, some of which were processed by Optical Character Recognition (OCR). It also contains samples of diverse programming languages.
The Lucie Training Dataset was used to pretrain [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B),
a foundation LLM with strong capabilities in French and English. Code for data preparation can be found in the [training repository](https://github.com/OpenLLM-France/Lucie-Training/tree/7f1f7efa1288f709662a9067bf2c3db856b850f8) for Lucie-7B. Due to the licenses of a few subcorpora, the Lucie Training Dataset is released under a [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. A subset available for commercial use will be released soon.
We note that one subcorpus used for training could not be released with the Lucie Training Dataset due to copyright conflicts discovered after training had begun. This data came from the [Persée platform](https://www.persee.fr/). The full list of URLs used to create the dataset can be recreated from the file [persee_metadata_documents.csv](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/metadata/persee_metadata_documents.csv), where the corresponding URL is `https://www.persee.fr/doc/{ID}` for each `ID` in the column `file_id`. The file [persee_metadata_collections.csv](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/metadata/persee_metadata_collections.csv) gives statistics on document, word and character counts for the data, grouped by collection. In all, this Persée subcorpus contains 3.25 billion words and 5.75 billion tokens, making up around 0.25% of the raw corpus and 0.37% of the tokens seen during training.
Table of Contents:
<ul>
<li><a href="#dataset-description">Dataset Description</a>
<ul>
<li><a href="#sample-metadata">Sample Metadata</a></li>
<li><a href="#dataset-composition">Dataset Composition</a>
<table>
<tr>
<td style="vertical-align: top;">
<ul>
<li><a href="#category-web"> Web</a></li>
<li><a href="#category-newspaper"> Newspaper</a></li>
<li><a href="#category-technical"> Technical</a></li>
<li><a href="#category-book"> Book</a></li>
</ul>
</td>
<td style="vertical-align: top;">
<ul>
<li><a href="#category-legislative-texts"> Legislative Texts</a></li>
<li><a href="#category-legislative-transcripts"> Legislative Transcripts</a></li>
<li><a href="#category-wiki"> Wiki</a></li>
<li><a href="#category-math"> Math</a></li>
</ul>
</td>
<td style="vertical-align: top;">
<ul>
<li><a href="#category-forum"> Forum</a></li>
<li><a href="#category-dialogue"> Dialogue</a></li>
<li><a href="#category-multilingual-parallel-corpora">Multilingual Parallel Corpora</a></li>
<li><a href="#category-programming"> Programming</a></li>
</ul>
</td>
</tr>
</table>
</li>
<li><a href="#configurable-subsets-and-versions">Configurable Subsets and Versions</a></li>
<li><a href="#details-on-data-sources">Details on Data Sources</a>
<table>
<tr>
<td style="vertical-align: top;">
<ul>
<li><a href="#amendementsparlement"> AmendementsParlement</a></li>
<li><a href="#americanstories"> AmericanStories</a></li>
<li><a href="#claire-french-and-english"> Claire (French and English)</a></li>
<li><a href="#croissantaligned"> CroissantAligned</a></li>
<li><a href="#discourspublics"> DiscoursPublics</a></li>
<li><a href="#europarl-and-europarlaligned"> Europarl and EuroparlAligned</a></li>
<li><a href="#eurovoc"> Eurovoc</a></li>
<li><a href="#finewebedu"> FineWebEdu</a></li>
<li><a href="#gallicamonographies"> GallicaMonographies</a></li>
</ul>
</td>
<td style="vertical-align: top;">
<ul>
<li><a href="#gallicapress"> GallicaPress</a></li>
<li><a href="#gutenberg"> Gutenberg</a></li>
<li><a href="#hal"> HAL</a></li>
<li><a href="#interventionsparlement"> InterventionsParlement</a></li>
<li><a href="#legi"> LEGI</a></li>
<li><a href="#mathpile-commercial"> MathPile (Commercial)</a></li>
<li><a href="#opendata"> OpenData</a></li>
<li><a href="#openedition"> OpenEdition</a></li>
<li><a href="#pes2o-v2"> PeS2o (v2)</a></li>
</ul>
</td>
<td style="vertical-align: top;">
<ul>
<li><a href="#pile-uncopyrighted"> Pile (Uncopyrighted)</a></li>
<li><a href="#questionsecritesparlement"> QuestionsEcritesParlement</a></li>
<li><a href="#redpajama-v2"> RedPajama (v2)</a></li>
<li><a href="#stac"> Stac</a></li>
<li><a href="#thestack-v12"> TheStack (v1.2)</a></li>
<li><a href="#theses"> Theses</a></li>
<li><a href="#wikipedia-wikisource-wiktionary"> Wikipedia, Wikisource, Wiktionary</a></li>
<li><a href="#youtube"> YouTube</a></li>
</ul>
</td>
</tr>
</table>
</li>
</ul>
</li>
<li><a href="#example-use-in-python">Example use in Python</a></li>
<ul>
<li><a href="#load-the-dataset">Load the dataset</a></li>
<li><a href="#iterate-over-a-subset">Iterate over a subset</a></li>
<li><a href="#load-a-specific-version">Load a specific version</a></li>
</ul>
</li>
<li><a href="#citation">Citation</a></li>
<li><a href="#acknowledgements">Acknowledgements</a></li>
<li><a href="#contact">Contact</a></li>
</ul>
## Dataset Description
This dataset is intended to provide extensive and diverse multilingual data for training Large Language Models (LLMs). Here are some of the principal features of the corpus:
* Data mix:
* The dataset contains more French than English data -- it is in fact one of the biggest collections of French text data that has been preprocessed for LLM training -- with the aim of minimizing anglo-centric cultural biases.
* German, Spanish and Italian are also represented in small amounts.
* Code is included to boost the reasoning capabilities of LLMs.
* Data filtering and deduplication:
* The dataset has been cleaned in an effort to remove very low-quality data.
* Duplicate data samples have been removed to some extent, following best practices.
* Web data has been filtered to minimize potentially toxic content and personally identifying information.
* Ethics:
* Special care has been taken to respect copyright laws and individual privacy.
All newspapers, monographies, magazines and legislative documents, as well as most books, are in the public domain
(which depends on the author's date of death and the country of publication). Other data are published with permissive licenses (e.g., CC BY or CC BY-SA) or, in very rare cases, CC BY-NC-SA.
* All web data in the dataset come from sites with robots.txt files that do not forbid crawling.
### Sample Metadata
In addition to the `text` field, which provides the content of the sample, each training sample in the corpus contains the following metadata when available:
* [`language`](metadata/metadata_examples.json#L3): the language of the text sample (note that this information is taken from the original data source and may be incorrect).
<br>Possible values:
- the ISO 639-1 code for a given natural language ("en", "fr", "de", "es", or "it"),
- the name of a programming language prefixed by "code:" ("code:python", "code:c++", …), or
- a list of ISO 639-1 codes separated by commas for data containing parallel translations ("fr,en", "de,fr", "es,en", "it,en",
or one of those pairs in the opposite order if the languages appear in the opposite order in the text).
* [`source`](metadata/metadata_examples.json#L4): an identifier for the source(s) of the text sample (Wikipedia, RedPajama, Gutenberg, …).
All sources are described in detail [below](#details-on-data-sources).
* [`id`](metadata/metadata_examples.json#L13): an identifier that is unique among documents from the same source.
* [`url`](metadata/metadata_examples.json#L35) (optional): the URL of the original text sample on the web, if available.
* [`title`](metadata/metadata_examples.json#L36) (optional): the title of the original text sample, if available.
* [`author`](metadata/metadata_examples.json#L81) (optional): the author of the original text sample, if available.
<details><summary>Note:</summary>
The author name is given in plain text, except in the case of <a href="metadata/metadata_examples.json#L91">Gutenberg books</a>, where it is the JSON serialized object of the author metadata.
</details>
* [`date`](metadata/metadata_examples.json#L6) (optional): the publication date of the original text sample, if available.
<details><summary>Note:</summary>
The text format of the date depends on the source.
</details>
* [`quality_signals`](metadata/metadata_examples.json#L17) (optional): a list of quality signals for the text sample in JSON format (which could be used for further filtering or sample weighting).
<details><summary>Note:</summary>
It can include indicators computed by `fasttext` and `CCNet`, statistics about occurrences of characters, words, special characters, etc.
</details>
* [`extra`](metadata/metadata_examples.json#L16) (optional): extra information about the text sample, in JSON format.
This can include metadata about the source subset, the rights, etc.
The list of metadata available for each source is provided (without the `text` field) in [metadata_examples.json](metadata/metadata_examples.json).
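As an illustration, the snippet below streams a few samples from one subset and prints some of the metadata fields described above (a minimal sketch relying only on the configuration names from the YAML header and standard `datasets` streaming):
```
from datasets import load_dataset

# Stream a few samples from the French Wikipedia subset and inspect their metadata.
ds = load_dataset(
    "OpenLLM-France/Lucie-Training-Dataset",
    "Wikipedia-fr",
    split="train",
    streaming=True,
)
for i, sample in enumerate(ds):
    print(sample["source"], sample["language"], sample["id"], sample.get("url"))
    if i >= 2:
        break
```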
### Dataset Composition
The following figure shows the distribution of the dataset by language (colors) and category (hatch patterns).

The following table provides an overview of the dataset composition,
broken down by source and language.
Sources are grouped by category.
The table provides the numbers of documents, words, tokens, and characters for each subset.
All numbers in this table are available in the CSV file [dataset_composition.csv](metadata/dataset_composition.csv).
Token counts are computed using the tokenizer for [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B).
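The token counts in the table can be reproduced with that tokenizer; here is a minimal sketch, assuming the tokenizer is distributed with the Lucie-7B model repository and loadable via `transformers`:
```
from transformers import AutoTokenizer

# Count tokens the same way as in the table below (Lucie-7B tokenizer).
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-France/Lucie-7B")
n_tokens = len(tokenizer("Bonjour tout le monde !")["input_ids"])
print(n_tokens)
```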
<!-- The following is automatically generated. Do not update manually. -->
<!-- TABLE START -->
<table>
<thead>
<tr>
<th><strong>Subset</strong></th>
<th><strong>Language</strong></th>
<th><strong>M docs</strong></th>
<th><strong>B words</strong></th>
<th><strong>B tokens</strong></th>
<th><strong>B chars</strong></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="11" style="vertical-align: top;"><strong>TOTAL</strong></td>
<td></td>
<td>2186.562</td>
<td>1356.021</td>
<td>2314.862</td>
<td>8842.200</td>
<td></td>
</tr>
<tr>
<td><strong>French (fr)</strong></td>
<td>653.812</td>
<td>583.687</td>
<td>928.618</td>
<td>3619.672</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_french_pie.png">composition details</a></td>
</tr>
<tr>
<td><strong>English (en)</strong></td>
<td>554.289</td>
<td>412.202</td>
<td>611.894</td>
<td>2553.541</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_english_pie.png">composition details</a></td>
</tr>
<tr>
<td><strong>code</strong></td>
<td>125.769</td>
<td>51.306</td>
<td>228.954</td>
<td>630.749</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_code_pie.png">composition details</a></td>
</tr>
<tr>
<td><strong>German (de)</strong></td>
<td>165.915</td>
<td>105.609</td>
<td>206.610</td>
<td>764.779</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_german_pie.png">composition details</a></td>
</tr>
<tr>
<td><strong>Spanish (es)</strong></td>
<td>171.651</td>
<td>123.857</td>
<td>200.825</td>
<td>759.457</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_spanish_pie.png">composition details</a></td>
</tr>
<tr>
<td><strong>Italian (it)</strong></td>
<td>99.440</td>
<td>62.051</td>
<td>112.031</td>
<td>404.454</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_italian_pie.png">composition details</a></td>
</tr>
<tr>
<td><strong>fr-en</strong></td>
<td>410.032</td>
<td>17.016</td>
<td>25.494</td>
<td>107.658</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_fr-en_pie.png">composition details</a></td>
</tr>
<tr>
<td><strong>it-en</strong></td>
<td>1.901</td>
<td>0.100</td>
<td>0.151</td>
<td>0.638</td>
<td></td>
</tr>
<tr>
<td><strong>es-en</strong></td>
<td>1.961</td>
<td>0.103</td>
<td>0.143</td>
<td>0.631</td>
<td></td>
</tr>
<tr>
<td><strong>de-fr</strong></td>
<td>1.792</td>
<td>0.0908</td>
<td>0.141</td>
<td>0.621</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-web">Category: Web</h4></td></tr>
<tr>
<td rowspan="4" style="vertical-align: top;"><a href="#redpajama-v2"><strong>RedPajama</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>640.770</td>
<td>477.758</td>
<td>741.023</td>
<td>2974.596</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-french_histogram.png">composition details</a></td>
</tr>
<tr>
<td><strong>German (de)</strong></td>
<td>162.779</td>
<td>103.078</td>
<td>201.371</td>
<td>747.631</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-german_histogram.png">composition details</a></td>
</tr>
<tr>
<td><strong>Spanish (es)</strong></td>
<td>169.447</td>
<td>121.751</td>
<td>197.125</td>
<td>746.984</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-spanish_histogram.png">composition details</a></td>
</tr>
<tr>
<td><strong>Italian (it)</strong></td>
<td>97.324</td>
<td>60.194</td>
<td>108.416</td>
<td>393.012</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-italian_histogram.png">composition details</a></td>
</tr>
<tr>
<td><a href="#finewebedu"><strong>FineWebEdu</strong></a></td>
<td><strong>English (en)</strong></td>
<td>421.209</td>
<td>327.453</td>
<td>467.837</td>
<td>2018.215</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_finewebedu-english_histogram.png">composition details</a></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-newspaper">Category: Newspaper</h4></td></tr>
<tr>
<td><a href="#gallicapress"><strong>GallicaPress</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>3.205</td>
<td>67.496</td>
<td>121.606</td>
<td>408.882</td>
<td></td>
</tr>
<tr>
<td><a href="#americanstories"><strong>AmericanStories</strong></a></td>
<td><strong>English (en)</strong></td>
<td>59.420</td>
<td>8.902</td>
<td>14.313</td>
<td>50.844</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_americanstories-english_histogram.png">composition details</a></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-technical">Category: Technical</h4></td></tr>
<tr>
<td><a href="#pes2o-v2"><strong>PeS2o</strong></a></td>
<td><strong>English (en)</strong></td>
<td>38.972</td>
<td>42.296</td>
<td>65.365</td>
<td>268.963</td>
<td></td>
</tr>
<tr>
<td><a href="#hal"><strong>HAL</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.349</td>
<td>9.356</td>
<td>16.224</td>
<td>58.308</td>
<td></td>
</tr>
<tr>
<td><a href="#theses"><strong>Theses</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.102</td>
<td>7.547</td>
<td>14.060</td>
<td>47.758</td>
<td></td>
</tr>
<tr>
<td><a href="#pile-uncopyrighted"><strong>Pile (USPTO_Backgrounds)</strong></a></td>
<td><strong>English (en)</strong></td>
<td>5.139</td>
<td>3.492</td>
<td>5.105</td>
<td>22.309</td>
<td></td>
</tr>
<tr>
<td><a href="#openedition"><strong>OpenEdition</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.939</td>
<td>2.225</td>
<td>3.604</td>
<td>14.459</td>
<td></td>
</tr>
<tr>
<td><a href="#pile-uncopyrighted"><strong>Pile (PhilPapers)</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.0308</td>
<td>0.363</td>
<td>0.618</td>
<td>2.304</td>
<td></td>
</tr>
<tr>
<td><a href="#pile-uncopyrighted"><strong>Pile (NIH_ExPorter)</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.914</td>
<td>0.288</td>
<td>0.431</td>
<td>1.979</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-book">Category: Book</h4></td></tr>
<tr>
<td><a href="#gallicamonographies"><strong>GallicaMonographies</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.278</td>
<td>15.106</td>
<td>25.169</td>
<td>90.456</td>
<td></td>
</tr>
<tr>
<td rowspan="5" style="vertical-align: top;"><a href="#gutenberg"><strong>Gutenberg</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.0563</td>
<td>3.544</td>
<td>5.516</td>
<td>20.579</td>
<td></td>
</tr>
<tr>
<td><strong>French (fr)</strong></td>
<td>0.00345</td>
<td>0.227</td>
<td>0.383</td>
<td>1.392</td>
<td></td>
</tr>
<tr>
<td><strong>German (de)</strong></td>
<td>0.00188</td>
<td>0.0987</td>
<td>0.193</td>
<td>0.654</td>
<td></td>
</tr>
<tr>
<td><strong>Italian (it)</strong></td>
<td>0.000958</td>
<td>0.0657</td>
<td>0.129</td>
<td>0.414</td>
<td></td>
</tr>
<tr>
<td><strong>Spanish (es)</strong></td>
<td>0.000735</td>
<td>0.0512</td>
<td>0.0920</td>
<td>0.303</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-legislative-texts">Category: Legislative Texts</h4></td></tr>
<tr>
<td><a href="#pile-uncopyrighted"><strong>Pile (FreeLaw)</strong></a></td>
<td><strong>English (en)</strong></td>
<td>3.415</td>
<td>8.204</td>
<td>14.011</td>
<td>52.580</td>
<td></td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top;"><a href="#eurovoc"><strong>Eurovoc</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.272</td>
<td>1.523</td>
<td>2.571</td>
<td>9.468</td>
<td></td>
</tr>
<tr>
<td><strong>Italian (it)</strong></td>
<td>0.245</td>
<td>0.731</td>
<td>1.527</td>
<td>4.867</td>
<td></td>
</tr>
<tr>
<td><strong>German (de)</strong></td>
<td>0.247</td>
<td>0.678</td>
<td>1.497</td>
<td>4.915</td>
<td></td>
</tr>
<tr>
<td><strong>Spanish (es)</strong></td>
<td>0.246</td>
<td>0.757</td>
<td>1.411</td>
<td>4.684</td>
<td></td>
</tr>
<tr>
<td><a href="#opendata"><strong>OpenData</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>1.169</td>
<td>0.755</td>
<td>1.209</td>
<td>4.638</td>
<td></td>
</tr>
<tr>
<td><a href="#questionsecritesparlement"><strong>QuestionsEcritesParlement</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.189</td>
<td>0.108</td>
<td>0.156</td>
<td>0.705</td>
<td></td>
</tr>
<tr>
<td><a href="#legi"><strong>LEGI</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.621</td>
<td>0.0878</td>
<td>0.145</td>
<td>0.563</td>
<td></td>
</tr>
<tr>
<td><a href="#amendementsparlement"><strong>AmendementsParlement</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.673</td>
<td>0.0452</td>
<td>0.0738</td>
<td>0.274</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-legislative-transcripts">Category: Legislative Transcripts</h4></td></tr>
<tr>
<td rowspan="4" style="vertical-align: top;"><a href="#europarl-and-europarlaligned"><strong>Europarl</strong></a></td>
<td><strong>German (de)</strong></td>
<td>0.0102</td>
<td>0.0451</td>
<td>0.0734</td>
<td>0.327</td>
<td></td>
</tr>
<tr>
<td><strong>Spanish (es)</strong></td>
<td>0.0103</td>
<td>0.0524</td>
<td>0.0733</td>
<td>0.325</td>
<td></td>
</tr>
<tr>
<td><strong>French (fr)</strong></td>
<td>0.0103</td>
<td>0.0528</td>
<td>0.0717</td>
<td>0.339</td>
<td></td>
</tr>
<tr>
<td><strong>English (en)</strong></td>
<td>0.0111</td>
<td>0.0563</td>
<td>0.0690</td>
<td>0.339</td>
<td></td>
</tr>
<tr>
<td><a href="#discourspublics"><strong>DiscoursPublics</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.110</td>
<td>0.163</td>
<td>0.238</td>
<td>1.025</td>
<td></td>
</tr>
<tr>
<td><a href="#interventionsparlement"><strong>InterventionsParlement</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>1.832</td>
<td>0.104</td>
<td>0.157</td>
<td>0.654</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-wiki">Category: Wiki</h4></td></tr>
<tr>
<td rowspan="5" style="vertical-align: top;"><a href="#wikipedia-wikisource-wiktionary"><strong>Wikipedia</strong></a></td>
<td><strong>English (en)</strong></td>
<td>6.893</td>
<td>4.708</td>
<td>7.898</td>
<td>26.616</td>
<td></td>
</tr>
<tr>
<td><strong>German (de)</strong></td>
<td>2.877</td>
<td>1.709</td>
<td>3.476</td>
<td>11.252</td>
<td></td>
</tr>
<tr>
<td><strong>French (fr)</strong></td>
<td>2.648</td>
<td>1.726</td>
<td>2.940</td>
<td>9.879</td>
<td></td>
</tr>
<tr>
<td><strong>Spanish (es)</strong></td>
<td>1.947</td>
<td>1.245</td>
<td>2.124</td>
<td>7.161</td>
<td></td>
</tr>
<tr>
<td><strong>Italian (it)</strong></td>
<td>1.870</td>
<td>1.060</td>
<td>1.959</td>
<td>6.161</td>
<td></td>
</tr>
<tr>
<td><a href="#wikipedia-wikisource-wiktionary"><strong>wikisource</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.186</td>
<td>0.523</td>
<td>0.795</td>
<td>3.080</td>
<td></td>
</tr>
<tr>
<td><a href="#wikipedia-wikisource-wiktionary"><strong>wiktionary</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.650</td>
<td>0.0531</td>
<td>0.117</td>
<td>0.347</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-math">Category: Math</h4></td></tr>
<tr>
<td><a href="#mathpile-commercial"><strong>MathPile</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.737</td>
<td>3.408</td>
<td>9.637</td>
<td>27.290</td>
<td></td>
</tr>
<tr>
<td><a href="#pile-uncopyrighted"><strong>Pile (DM_Mathematics)</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.992</td>
<td>1.746</td>
<td>4.928</td>
<td>8.127</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-forum">Category: Forum</h4></td></tr>
<tr>
<td><a href="#pile-uncopyrighted"><strong>Pile (StackExchange)</strong></a></td>
<td><strong>English (en)</strong></td>
<td>15.269</td>
<td>4.534</td>
<td>10.275</td>
<td>33.609</td>
<td></td>
</tr>
<tr>
<td><a href="#pile-uncopyrighted"><strong>Pile (Ubuntu_IRC)</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.0104</td>
<td>0.867</td>
<td>2.159</td>
<td>5.610</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-dialogue">Category: Dialogue</h4></td></tr>
<tr>
<td rowspan="2" style="vertical-align: top;"><a href="#claire-french-and-english"><strong>Claire</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.949</td>
<td>0.818</td>
<td>1.161</td>
<td>4.709</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_claire-english_pie.png">composition details</a></td>
</tr>
<tr>
<td><strong>French (fr)</strong></td>
<td>0.0393</td>
<td>0.210</td>
<td>0.311</td>
<td>1.314</td>
<td><a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_claire-french_pie.png">composition details</a></td>
</tr>
<tr>
<td><a href="#youtube"><strong>YouTube</strong></a></td>
<td><strong>French (fr)</strong></td>
<td>0.0375</td>
<td>0.145</td>
<td>0.336</td>
<td>1.003</td>
<td></td>
</tr>
<tr>
<td><a href="#stac"><strong>STAC</strong></a></td>
<td><strong>English (en)</strong></td>
<td>0.0000450</td>
<td>0.0000529</td>
<td>0.000121</td>
<td>0.000327</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-multilingual-parallel-corpora">Category: Multilingual Parallel Corpora</h4></td></tr>
<tr>
<td><a href="#croissantaligned"><strong>CroissantAligned</strong></a></td>
<td><strong>fr-en</strong></td>
<td>408.029</td>
<td>16.911</td>
<td>25.351</td>
<td>107.003</td>
<td></td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top;"><a href="#europarl-and-europarlaligned"><strong>EuroparlAligned</strong></a></td>
<td><strong>it-en</strong></td>
<td>1.901</td>
<td>0.100</td>
<td>0.151</td>
<td>0.638</td>
<td></td>
</tr>
<tr>
<td><strong>fr-en</strong></td>
<td>2.003</td>
<td>0.105</td>
<td>0.143</td>
<td>0.655</td>
<td></td>
</tr>
<tr>
<td><strong>es-en</strong></td>
<td>1.961</td>
<td>0.103</td>
<td>0.143</td>
<td>0.631</td>
<td></td>
</tr>
<tr>
<td><strong>de-fr</strong></td>
<td>1.792</td>
<td>0.0908</td>
<td>0.141</td>
<td>0.621</td>
<td></td>
</tr>
<tr>
<td colspan="7"><h4 id="category-programming">Category: Programming</h4></td></tr>
<tr>
<td rowspan="30" style="vertical-align: top;"><a href="#thestack-v12"><strong>TheStack</strong></a></td>
<td><strong>JAVASCRIPT</strong></td>
<td>21.109</td>
<td>8.526</td>
<td>58.609</td>
<td>141.647</td>
<td></td>
</tr>
<tr>
<td><strong>JAVA</strong></td>
<td>20.152</td>
<td>7.421</td>
<td>27.680</td>
<td>89.297</td>
<td></td>
</tr>
<tr>
<td><strong>C</strong></td>
<td>8.626</td>
<td>5.916</td>
<td>24.092</td>
<td>57.428</td>
<td></td>
</tr>
<tr>
<td><strong>PHP</strong></td>
<td>15.905</td>
<td>4.865</td>
<td>22.883</td>
<td>66.844</td>
<td></td>
</tr>
<tr>
<td><strong>PYTHON</strong></td>
<td>12.962</td>
<td>5.434</td>
<td>21.683</td>
<td>64.304</td>
<td></td>
</tr>
<tr>
<td><strong>C++</strong></td>
<td>6.378</td>
<td>4.584</td>
<td>18.835</td>
<td>50.892</td>
<td></td>
</tr>
<tr>
<td><strong>C#</strong></td>
<td>10.839</td>
<td>3.574</td>
<td>13.381</td>
<td>46.286</td>
<td></td>
</tr>
<tr>
<td><strong>GO</strong></td>
<td>4.730</td>
<td>2.735</td>
<td>10.262</td>
<td>25.738</td>
<td></td>
</tr>
<tr>
<td><strong>TYPESCRIPT</strong></td>
<td>10.637</td>
<td>2.617</td>
<td>9.836</td>
<td>28.815</td>
<td></td>
</tr>
<tr>
<td><strong>RUST</strong></td>
<td>1.387</td>
<td>0.872</td>
<td>3.241</td>
<td>9.529</td>
<td></td>
</tr>
<tr>
<td><strong>RUBY</strong></td>
<td>3.405</td>
<td>0.646</td>
<td>2.392</td>
<td>7.139</td>
<td></td>
</tr>
<tr>
<td><strong>SWIFT</strong></td>
<td>1.756</td>
<td>0.553</td>
<td>1.876</td>
<td>6.134</td>
<td></td>
</tr>
<tr>
<td><strong>KOTLIN</strong></td>
<td>2.243</td>
<td>0.454</td>
<td>1.758</td>
<td>5.769</td>
<td></td>
</tr>
<tr>
<td><strong>SCALA</strong></td>
<td>1.362</td>
<td>0.457</td>
<td>1.587</td>
<td>4.862</td>
<td></td>
</tr>
<tr>
<td><strong>TEX</strong></td>
<td>0.398</td>
<td>0.394</td>
<td>1.507</td>
<td>3.805</td>
<td></td>
</tr>
<tr>
<td><strong>LUA</strong></td>
<td>0.559</td>
<td>0.318</td>
<td>1.367</td>
<td>3.279</td>
<td></td>
</tr>
<tr>
<td><strong>DART</strong></td>
<td>0.933</td>
<td>0.308</td>
<td>1.242</td>
<td>3.864</td>
<td></td>
</tr>
<tr>
<td><strong>PERL</strong></td>
<td>0.392</td>
<td>0.297</td>
<td>1.149</td>
<td>2.634</td>
<td></td>
</tr>
<tr>
<td><strong>MATHEMATICA</strong></td>
<td>0.0269</td>
<td>0.120</td>
<td>1.117</td>
<td>1.720</td>
<td></td>
</tr>
<tr>
<td><strong>ASSEMBLY</strong></td>
<td>0.248</td>
<td>0.209</td>
<td>0.867</td>
<td>1.575</td>
<td></td>
</tr>
<tr>
<td><strong>HASKELL</strong></td>
<td>0.545</td>
<td>0.307</td>
<td>0.807</td>
<td>2.364</td>
<td></td>
</tr>
<tr>
<td><strong>FORTRAN</strong></td>
<td>0.165</td>
<td>0.192</td>
<td>0.780</td>
<td>1.843</td>
<td></td>
</tr>
<tr>
<td><strong>JULIA</strong></td>
<td>0.299</td>
<td>0.152</td>
<td>0.660</td>
<td>1.539</td>
<td></td>
</tr>
<tr>
<td><strong>OCAML</strong></td>
<td>0.160</td>
<td>0.130</td>
<td>0.430</td>
<td>1.107</td>
<td></td>
</tr>
<tr>
<td><strong>ERLANG</strong></td>
<td>0.0994</td>
<td>0.0657</td>
<td>0.260</td>
<td>0.726</td>
<td></td>
</tr>
<tr>
<td><strong>ELIXIR</strong></td>
<td>0.282</td>
<td>0.0731</td>
<td>0.258</td>
<td>0.737</td>
<td></td>
</tr>
<tr>
<td><strong>CLOJURE</strong></td>
<td>0.126</td>
<td>0.0448</td>
<td>0.179</td>
<td>0.492</td>
<td></td>
</tr>
<tr>
<td><strong>R</strong></td>
<td>0.0392</td>
<td>0.0278</td>
<td>0.158</td>
<td>0.305</td>
<td></td>
</tr>
<tr>
<td><strong>MATLAB</strong></td>
<td>0.000967</td>
<td>0.00865</td>
<td>0.0427</td>
<td>0.0372</td>
<td></td>
</tr>
<tr>
<td><strong>RACKET</strong></td>
<td>0.00420</td>
<td>0.00479</td>
<td>0.0153</td>
<td>0.0378</td>
<td></td>
</tr>
</tbody>
</table>
<!-- TABLE END -->
### Configurable Subsets and Versions
As the Lucie Training Dataset is a collection of multilingual corpora from different sources, it can be divided into subsets based on the source and language of its constituent corpora.
<br> The list of possible configurations is available [in the YAML header of this README file](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/v1.2/README.md?code=true#L24).
Each configuration corresponds to a pathname pattern in the [data subdirectory](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data).
The dataset is also available in the following versions:
- **v1.1** / [**main**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/main/data) (default):
The data used for the first (main) pretraining phase of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), which contains approximately 2.3T tokens. The statistics above apply to this version.
- [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data): An improved version of the main dataset, where
  - GallicaMonographies and GallicaPress have been filtered aggressively to remove documents with low OCR quality.
- The `Ubuntu_IRC` and `PhilPapers` subsets of Pile have been refined by fixing encoding issues and removing documents in languages other than English, French, Spanish, German and Italian.
- [**v1.2-recent-web**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2-recent-web/data) : The data used for the second pretraining phase (context extension) of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B#2-context-extension).
This version is identical to `v1.2` with the exception that older snapshots of web data (before 2023 for RedPajama and before 2024 for FineWebEdu) have been excluded.
All data from `v1.1` that were not filtered out remain unchanged in `v1.2` and `v1.2-recent-web`.
Apart from **v1.1**, which is a git tag, all versions are git branches in the dataset repository
(e.g. [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data)).
The <a href="#example-use-in-python">Example use in Python</a> section contains example Python code for loading and iterating over the dataset with different configurations, including source, language and version.
### Details on Data Sources
#### AmendementsParlement
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4). License: [CC BY-SA](https://www.regardscitoyens.org/mentions-legales/).
* <u>Description</u>: A collection of proposed amendments by the French parliament. Documents contain the text of the proposed amendment, the name of the associated law as well as information on who voted on the amendment and what was decided.
#### AmericanStories
* <u>Source</u>: [dell-research-harvard/AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories). License: [CC BY 4.0](https://huggingface.co/datasets/dell-research-harvard/AmericanStories).
* <u>Extracted from</u>: [Chronicling America](https://www.loc.gov/collections/chronicling-america/about-this-collection/). License: [Open](https://www.loc.gov/collections/chronicling-america/about-this-collection/rights-and-access/).
* <u>Description</u>: "The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets" (from the [dataset card](https://huggingface.co/datasets/dell-research-harvard/AmericanStories)). See the dataset <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_americanstories-english_histogram.png">composition details</a> for statistics on documents by year. Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
* <u>Filtering</u>:
To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 2310,
measured using a CCNET model in English (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L2106)).
  The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering). A simplified sketch of this kind of perplexity filter is given at the end of this entry.
* <u>Citation</u>: Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora, Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin and Leander Heldring (2023). "American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers," [arxiv:2308.12477](https://arxiv.org/abs/2308.12477v1).
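As a rough illustration of the perplexity-based filtering described above (a hedged sketch, not the linked Lucie pipeline code), the snippet below assumes a local English KenLM model file (`en.arpa.bin`) and a parquet shard with a `text` column; the CCNet tokenization step is omitted.
```python
# Illustrative perplexity filter (not the actual Lucie pipeline code).
import kenlm          # pip install kenlm
import pandas as pd

MAX_PERPLEXITY = 2310                # threshold quoted above for AmericanStories
lm = kenlm.Model("en.arpa.bin")      # hypothetical path to an English KenLM model

def keep(text: str) -> bool:
    lines = [line for line in text.split("\n") if line.strip()]
    if not lines:
        return False
    # Average line-level perplexity; the real pipeline tokenizes text first (CCNet).
    ppl = sum(lm.perplexity(line) for line in lines) / len(lines)
    return ppl <= MAX_PERPLEXITY

df = pd.read_parquet("american_stories_shard.parquet")   # hypothetical input shard
df[df["text"].map(keep)].to_parquet("american_stories_filtered.parquet")
```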
#### Claire (French and English)
* <u>Sources</u>:
* French dataset: [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).
* English dataset: [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1). License: [CC BY-NC-SA 4.0](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1).
* <u>Extracted from</u>: see the datacards for the [French](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1) and [English](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1) datasets.
* <u>Description</u>: The Claire datasets are composed of transcripts of spoken conversations -- including parliamentary proceedings, interviews, debates, meetings, and free conversations -- as well as some written conversations from theater plays and written chats. The dataset is designed to help downstream performance of models fine-tuned for tasks requiring the comprehension of spontaneous spoken conversation, such as meeting summarization. Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker or a unique identifier. See the composition details for the <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_claire-french_pie.png">French dataset</a> and the <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_claire-english_pie.png">English dataset</a> for a high-level view of the distribution of different types of documents in each dataset.
* <u>Citation</u>: Julie Hunter, Jérôme Louradour, Virgile Rennard, Ismaïl Harrando, Guokan Shang, Jean-Pierre Lorré (2023). The Claire French Dialogue Dataset. [arXiv:2311.16840](https://arxiv.org/abs/2311.16840).
#### CroissantAligned
* <u>Source</u>: [croissantllm/croissant_dataset_no_web_data](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data/tree/main/aligned_36b) (subset: `aligned_36b`). License: not specified.
* <u>Extracted from</u>:
* Translation pairs: [OPUS](https://opus.nlpl.eu/) (99.6% of the data in CroissantAligned). Pairs extracted from OPUS are labeled as "UnbabelFrEn".
* Thesis abstracts: French thesis abstract pairs. License: [ETALAB-Licence-Ouverte-v2.0](https://www.etalab.gouv.fr/wp-content/uploads/2017/04/ETALAB-Licence-Ouverte-v2.0.pdf).
* Song lyrics: [lacoccinelle](https://www.lacoccinelle.net).
* <u>Description</u>: CroissantAligned contains samples of parallel French/English (or English/French) data. Data extracted from OPUS takes the form of sentences pairs, where one sentence is in French and the other is in English. OPUS pairs were passed through a custom pipeline designed to select the highest quality translation examples. Selected pairs are labeled "UnbabelFrEn" in the CroissantAligned dataset. The thesis abstract subset contains thesis abstracts paired with translations written by the thesis authors. The song lyrics are translated by contributors to www.lacoccinelle.net. Parallel data are used to boost the multilingual capabilities of models trained on them ([Faysse et al.,2024](https://arxiv.org/pdf/2402.00786)).
* <u>Pre-processing</u>:
  * <u>Language separation and tagging</u>: The original text field of [the Croissant dataset](https://huggingface.co/datasets/croissantllm/croissant_dataset_no_web_data) contains a sentence or passage in French or English immediately followed by its translation without any indication of which passage is in which language. The first step was thus to split each text into separate, monolingual passages and tag each passage with the appropriate language code, identified automatically using the [langid library](https://pypi.org/project/langid/) (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/data.py#L1407)). In the Lucie Training Dataset, the `extra` metadata field for CroissantAligned contains separate keys, `text_fr` for French and `text_en` for English, that store the texts separately.
  * <u>Random combination of texts prefixed by language</u>: To create the text values, each monolingual text was re-paired with its translation, with random separators and various methods of prefixing the text with the language (name or code).
  This was done as a precaution to prevent models trained on this data from switching languages when generating text and can be seen as a very basic instruction to translate the source (first) text into the target (second) text (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/data.py#L1458)). An illustrative sketch of these two steps is given at the end of this entry.
* <u>Citation</u>: Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo (2024). "CroissantLLM: A Truly Bilingual French-English Language Model," [arXiv:2402.00786](https://arxiv.org/abs/2402.00786).
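To make these two pre-processing steps more concrete, here is a minimal sketch (not the linked pipeline code) of tagging a passage pair with the langid library and recombining it with a random separator and language prefix; the separators and prefix styles shown are illustrative choices, not the exact ones used.
```python
# Hedged sketch of the CroissantAligned pre-processing described above.
import random
import langid

def tag_language(passage: str) -> str:
    lang, _score = langid.classify(passage)   # e.g. ("fr", -123.4)
    return lang

def combine(text_fr: str, text_en: str) -> str:
    sep = random.choice(["\n", "\n\n", " --- "])        # illustrative separators
    if random.random() < 0.5:                           # prefix with code or name
        parts = [f"fr: {text_fr}", f"en: {text_en}"]
    else:
        parts = [f"French: {text_fr}", f"English: {text_en}"]
    if random.random() < 0.5:                           # random language order
        parts.reverse()
    return sep.join(parts)

print(tag_language("Bonjour tout le monde, comment allez-vous ?"))  # expected: "fr"
print(combine("Bonjour le monde.", "Hello world."))
```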
#### DiscoursPublics
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Vie Publique](https://www.vie-publique.fr/collection-discours-publics). License: [ETALAB-Licence-Ouverte-v2.0](https://www.vie-publique.fr/mentions-legales).
* <u>Description</u>: A collection of public speeches from the principal public actors in France including speeches from the French President starting from 1974 and from the Prime Minister and members of the government starting from 1980.
* <u>Pre-processing</u>:
* <u>Text cleaning</u>: the mention of the source url and the number of views were removed from the text.
#### Europarl and EuroparlAligned
* <u>Sources</u>:
* `fr-en`, `es-en`, `it-en` parallel data: [Europarl v7](https://www.statmt.org/europarl/v7/). License: [Open](https://www.statmt.org/europarl/).
* `fr`, `en`, `de`, `es` monolingual data and `de-fr` parallel data: [Europarl v10](https://www.statmt.org/europarl/v10/training-monolingual/). License: [Open](https://www.statmt.org/europarl/).
* <u>Description</u>: "The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. The goal of the extraction and processing was to generate sentence aligned text for statistical machine translation systems" ([www.statmt.org](https://www.statmt.org/europarl/)).
* <u>Pre-processing</u>:
* <u>Random combination of aligned texts prefixed by language</u>: The same process as used for the [CroissantAligned](#croissantaligned) dataset was applied to the EuroparlAligned dataset (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/data.py#L1350)).
In the Lucie Training Dataset, the `extra` field in the metadata for EuroparlAligned provides texts in the two languages under the sub-fields `text_1` and `text_2`, and the corresponding language codes under `lang_1` and `lang_2`.
* <u>Citation</u>: Philipp Koehn (2005). "Europarl: A Parallel Corpus for Statistical Machine Translation," MT Summit.
#### Eurovoc
* <u>Source</u>: [EuropeanParliament/Eurovoc](https://huggingface.co/datasets/EuropeanParliament/Eurovoc). License: [EUPL 1.1](https://huggingface.co/datasets/EuropeanParliament/Eurovoc).
* <u>Extracted from</u>: [Cellar](https://op.europa.eu/en/web/cellar). License: [CC BY-4.0](https://op.europa.eu/en/web/about-us/legal-notices/publications-office-of-the-european-union-copyright).
* <u>Description</u>: A collection of multilingual documents from the data repository of the Publications Office of the European Union annotated with Eurovoc labels. The corpus contains legal, policy-related, historical and organizational information about the EU. Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
* <u>Filtering</u>:
To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 1500,
measured using a CCNET model in English (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1590)).
The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
* <u>Text cleaning</u>:
  Mentions of Credit Institutions Directives (CID) that appear in the raw texts, such as `(cid:146)`, were removed (a minimal regex sketch of this cleanup is given at the end of this entry).
* <u>Citations</u>:
* Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos (2019). "[Extreme Multi-Label Legal Text Classification: A Case Study in EU Legislation](https://arxiv.org/pdf/1905.10892)," Proceedings of the Natural Legal Language Processing Workshop 2019, pages 78–87, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos (2019). "[Large-Scale Multi-Label Text Classification on EU Legislation](https://arxiv.org/pdf/1906.02192)," Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy, (short papers).
* Andrei-Marius Avram, Vasile Pais, and Dan Ioan Tufis (2021). "[PyEuroVoc: A Tool for Multilingual Legal Document Classification with EuroVoc Descriptors](https://arxiv.org/pdf/2108.01139)," Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 92–101, Held Online. INCOMA Ltd.
* Zein Shaheen, Gerhard Wohlgenannt and Erwin Filtz (2020). "Large scale legal text classification using transformer models," [arXiv:2010.12871](https://arxiv.org/abs/2010.12871v1).
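The `(cid:NNN)` cleanup mentioned above amounts to a one-line regular expression; the snippet below is a minimal sketch rather than the pipeline code.
```python
# Minimal sketch: strip "(cid:NNN)" artifacts left over from PDF/OCR extraction.
import re

CID_PATTERN = re.compile(r"\(cid:\d+\)")

def strip_cid(text: str) -> str:
    return CID_PATTERN.sub("", text)

print(strip_cid("Directive text with artifacts (cid:146)like this(cid:147)."))
```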
#### FineWebEdu
* <u>Source</u>: [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
* <u>Extracted from</u>: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb). License: [ODC-BY](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
* <u>Description</u>: A 1.3 trillion token selection from [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), which contains 15 trillion tokens of curated data from 96 Common Crawl dumps. Content in FineWebEdu has been selected by a custom designed classifier for its high-quality, educational content. Most recent crawl: 2024-10 (see <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_finewebedu-english_histogram.png">composition details</a> for information about the crawls included in this dataset.)
* <u>Pre-processing</u>:
* <u>Removing duplicate urls</u>: urls were removed if their base domain overlapped with a dataset already in the Lucie Training Dataset (e.g., "philpapers.org") in order to increase diversity of content (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/text.py#L843))
  * <u>Filtering by robots.txt files</u>: robots.txt files were collected and all documents for which CCBot was disallowed, or for which we failed to collect information as of July 2024, were removed, in an effort to select data free from opt-out evidence according to Article 4 of the 2019 European copyright directive. A rough sketch of this check is given at the end of this entry.
* <u>Citation</u>: Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf (2024). "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale," [ arXiv:2406.17557](https://arxiv.org/abs/2406.17557).
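The robots.txt rule described above can be approximated with the standard library's `urllib.robotparser`; the function below is a rough per-URL illustration, not the bulk collection actually performed in July 2024.
```python
# Rough sketch of the robots.txt / CCBot check (not the actual Lucie code).
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def ccbot_allowed(doc_url: str) -> bool:
    root = "{0.scheme}://{0.netloc}".format(urlparse(doc_url))
    rp = RobotFileParser(root + "/robots.txt")
    try:
        rp.read()          # fetch and parse the site's robots.txt
    except Exception:
        return False       # "failed to collect information" -> drop the document
    return rp.can_fetch("CCBot", doc_url)

print(ccbot_allowed("https://example.com/some/page.html"))
```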
#### GallicaMonographies
* <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Books](https://huggingface.co/datasets/PleIAs/French-PD-Books). License: Public domain.
* <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
* <u>Description</u>: A large collection of French monographies in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
* <u>Text cleaning for v1.1</u>:
  To filter out documents with excessive OCR errors, the dataset was split into chunks; a chunk was kept if its language was detected as French by [FastText](https://github.com/facebookresearch/fastText) with a confidence score of 0.65 or above, and if its perplexity, as measured using a CCNET model in French, was between 10 and 1000 (a simplified sketch of the language check is given at the end of this entry).
The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
* <u>Filtering for v1.2</u>: Using OCR scores provided in the metadata of the source corpus, documents with an OCR score of less than 90 out of 100 were filtered out.
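The chunk-level language check (the first half of the v1.1 cleaning) can be sketched as follows, assuming a local copy of the fastText language-identification model `lid.176.bin`; the perplexity-range check would use a French CCNET/KenLM model along the lines of the AmericanStories sketch above.
```python
# Illustrative fastText language filter (not the actual Lucie pipeline code).
import fasttext

LID_MODEL = fasttext.load_model("lid.176.bin")   # hypothetical local model path
MIN_CONFIDENCE = 0.65

def chunk_is_french(chunk: str) -> bool:
    labels, probs = LID_MODEL.predict(chunk.replace("\n", " "))
    return labels[0] == "__label__fr" and float(probs[0]) >= MIN_CONFIDENCE

print(chunk_is_french("Ceci est un extrait de monographie rédigé en français."))
```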
#### GallicaPress
* <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [PleIAs/French-PD-Newspapers](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers). License: Public domain.
* <u>Extracted from</u>: [Gallicagram](https://shiny.ens-paris-saclay.fr/app/gallicagram).
* <u>Description</u>: A large collection of French newspapers and periodicals in the public domain made available through the French National Library ([Gallica](https://gallica.bnf.fr/accueil/fr/content/accueil-fr?mode=desktop)). Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
* <u>Text cleaning for v1.1</u>:
  To filter out documents with excessive OCR errors, the dataset was split into chunks; a chunk was kept if its language was detected as French by [FastText](https://github.com/facebookresearch/fastText) with a confidence score of 0.65 or above, and if its perplexity, as measured using a CCNET model in French, was between 10 and 1000 (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1840)).
The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
* <u>Filtering for v1.2</u>: Using OCR scores provided in the metadata of the source corpus, documents with an OCR score of less than 90 out of 100 were filtered out.
#### Gutenberg
* <u>Source</u>: Corpus compiled by OpenLLM partners.
* <u>Extracted from</u>:
* [aleph.gutenberg.org](http://aleph.gutenberg.org/) via [Project Gutenberg](https://www.gutenberg.org/). License: [Open](https://www.gutenberg.org/policy/terms_of_use.html).
* [pgcorpus](https://github.com/pgcorpus/gutenberg). License: [CC BY-4.0](https://zenodo.org/records/2422561).
* <u>Description</u>: A collection of free eBooks, manually prepared by human annotators.
* <u>Pre-processing</u>:
  * <u>Filtering</u>: The dataset was filtered based on the author's date of death, so that only texts from authors who died more than 70 years ago are included (80 years for French authors). See [code details here](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1136). This filtering was done to ensure that the texts are in the public domain.
* <u>Text cleaning</u>: Headers and footers containing information about Project Gutenberg were removed (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/text.py#L93)).
#### HAL
* <u>Source</u>: [bigscience-data/roots_fr_hal_archives_ouvertes](https://huggingface.co/datasets/bigscience-data/roots_fr_hal_archives_ouvertes). License: Roots dataset.
* <u>Extracted from</u>: [HAL](https://hal.science/) ([Open access](https://about.hal.science/)).
* <u>Description</u>: A collection of scientific papers and manuscripts distributed through the open science platform HAL. Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
* <u>Filtering</u>:
To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 930,
measured using a CCNET model in French (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1929)).
The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
* <u>Citation</u>: Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gerard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Alexandra Luccioni, Yacine Jernite (2022). "[The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset](https://proceedings.neurips.cc/paper_files/paper/2022/hash/ce9e92e3de2372a4b93353eb7f3dc0bd-Abstract-Datasets_and_Benchmarks.html)," Advances in Neural Information Processing Systems (NeurIPS), 35, 31809-31826.
#### InterventionsParlement
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4). License: [CC BY-SA](https://www.regardscitoyens.org/mentions-legales/).
* <u>Description</u>: Transcripts of remarks made during French parliamentary debates. Each text contains a continuous remark by a single speaker.
#### LEGI
* <u>Source</u>: Corpus contributed by OpenLLM partners. A version is also published here: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main).
* <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
* <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
#### MathPile (Commercial)
* <u>Source</u>: [GAIR/MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). License: [CC BY-SA 4.0](https://huggingface.co/datasets/GAIR/MathPile_Commercial).
* <u>Extracted from</u>: [MathPile](https://huggingface.co/datasets/GAIR/MathPile). License: [CC BY-SA-NC 4.0](https://huggingface.co/datasets/GAIR/MathPile).
* <u>Description</u>: A preprocessed collection of documents focused on math, including Textbooks, arXiv, Wikipedia, ProofWiki, StackExchange, and web pages from Common Crawl. The content targets a range of levels, from kindergarten through postgraduate level. MathPile_Commercial was obtained by removing documents from MathPile that do not allow commercial use.
* <u>Pre-processing</u>:
* <u>Formatting</u>: Converted the content of StackExchange questions and answers to match the {"text": value} format, using the following formula:
```python
text = sample["question"]["Body"] + "\n\n".join([answer["Body"] for answer in sample["answers"]])
```
* <u>Citation</u>: Zengzhi Wang, Rui Xia and Pengfei Liu (2023). "Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math," [ arXiv:2312.17120](https://export.arxiv.org/abs/2312.17120).
#### OpenData
* <u>Source</u>: [Nicolas-BZRD/DILA_OPENDATA_FR_2023](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main) (balo, dole, inca, kali, and sarde subsets). License: [ODC-BY](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main).
* <u>Extracted from</u>: [OpenData](https://echanges.dila.gouv.fr/OPENDATA/) (Data collection date: October, 2023).
* <u>Description</u>: "The French Government Open Data (DILA) Dataset is a collection of text data extracted from various sources provided by the French government, specifically the Direction de l'information légale et administrative (DILA). This dataset contains a wide range of legal, administrative, and legislative documents. The data has been organized into several categories for easy access and analysis" (from the [dataset card](https://huggingface.co/datasets/Nicolas-BZRD/DILA_OPENDATA_FR_2023/tree/main)).
<!-- * <u>Citation</u>: No paper found. -->
#### OpenEdition
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Open Edition](https://www.openedition.org/). License: [Open Edition Books](https://www.openedition.org/12554).
* <u>Description</u>: A collection of scientific books, journal articles, blog entries and event descriptions.
<!-- * <u>Citation</u>: No paper found. -->
#### PeS2o (v2)
* <u>Source</u>: [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o) version [v2](https://huggingface.co/datasets/allenai/peS2o/tree/main/data/v2). License: [ODC BY-v1.0](https://github.com/allenai/s2orc/).
* <u>Extracted from</u>: [S2ORC](https://github.com/allenai/s2orc) (see [aclanthology](https://aclanthology.org/2020.acl-main.447/)). License: [ODC BY-v1.0](https://github.com/allenai/s2orc/).
* <u>Description</u>: A preprocessed collection of academic papers designed for pre-training of language models. PeS2o is composed of two subsets: one containing full papers and one containing only paper titles and abstracts. Dataset containing (some) text retrieved through OCR. Knowledge cutoff: 2023-01-03.
* <u>Citation</u>: Luca Soldaini and Kyle Lo (2023). "peS2o (Pretraining Efficiently on S2ORC) Dataset," Allen Institute for AI. [GitHub](https://github.com/allenai/pes2o).
#### Pile (Uncopyrighted)
* <u>Source</u>: [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted). License: [Other](https://huggingface.co/datasets/monology/pile-uncopyrighted).
* <u>Extracted from</u>: [FreeLaw](https://free.law/), [StackExchange](https://stackexchange.com/), [USPTO Backgrounds](https://bulkdata.uspto.gov/), [DM Mathematics](https://github.com/google-deepmind/mathematics_dataset), [Ubuntu IRC](https://irclogs.ubuntu.com/), [PhilPapers](https://philpapers.org/), NIH ExPorter from [The Pile](https://huggingface.co/datasets/EleutherAI/pile). License: [MIT](https://arxiv.org/pdf/2201.07311).
* <u>Description</u> (from the [Datasheet](https://arxiv.org/abs/2201.07311)):
  * FreeLaw: "The Free Law Project is a US-registered non-profit that provides access to millions of legal opinions and analytical tools for academic studies in the legal realm."
* StackExchange: "The StackExchange dataset is a dump of anonymized user-contributed content on the Stack Exchange network, a popular collection of websites centered around user-contributed questions and answers."
* USPTO Backgrounds: "The USPTO Backgrounds dataset is a set of background sections from patents granted by the United States Patent and Trademark Office, derived from its published bulk archives."
* DM Mathematics: "The DeepMind Mathematics dataset consists of a collection of mathematical problems such as algebra, arithmetic, calculus, number theory, and probability, formatted as natural language prompts [Saxton et al., 2019](https://arxiv.org/abs/1904.01557)."
  * Ubuntu IRC: "The Ubuntu IRC dataset is derived from the publicly available chatlogs of all Ubuntu-related channels on the Freenode IRC chat server."
* PhilPapers: a dataset of open access philosophy publications from an international database maintained by the Center for Digital Philosophy at the University of Western Ontario.
  * NIH ExPORTER: "The NIH Grant abstracts provides a bulk-data repository for awarded applications through the ExPORTER service covering the fiscal years 1985-present."
* <u>Pre-processing (v1.2 only)</u>:
* <u>Filtering of PhilPapers</u>: Papers were removed if their language, detected using [Stanza](https://github.com/stanfordnlp/stanza), was not classified as English, French, German, Spanish or Italian.
* <u>Filtering and text cleaning of Ubuntu IRC</u>: Texts from some channels were excluded to avoid data from languages other than English, French, German, Spanish or Italian and certain encoding errors were fixed (see [code details here](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/text.py#L190)).
* <u>Citations</u>:
* Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," [ arXiv:2101.00027](https://arxiv.org/abs/2101.00027).
* Stella Biderman, Kieran Bicheno, Leo Gao (2022). "Datasheet for the Pile," [arXiv:2201.07311](https://arxiv.org/abs/2201.07311).
#### QuestionsEcritesParlement
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [Regards citoyens](https://www.regardscitoyens.org/#&panel1-4). License: [CC BY-SA](https://www.regardscitoyens.org/mentions-legales/).
* <u>Description</u>: Collection of long written questions, read during a session at the French National Assembly. Questions are asked by a member of the French parliament and addressed to a minister (who is given two months to respond).
#### RedPajama (v2)
* <u>Source</u>: [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). License: [Apache 2.0](https://github.com/togethercomputer/RedPajama-Data) (data preparation code), Not specified (data) but see [Common Crawl terms of use](https://commoncrawl.org/terms-of-use).
* <u>Extracted from</u>: [Common Crawl](https://commoncrawl.org/).
* <u>Description</u>: "RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text documents coming from 84 CommonCrawl snapshots and processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are 30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated" (from [GitHub](https://github.com/togethercomputer/RedPajama-Data)). Most recent crawl for French data in the Lucie Training Dataset v1.1: 2023-14. (For more details on the time periods covered by crawls in this dataset see the composition details for <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-french_histogram.png">French</a>, <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-german_histogram.png">German</a>, <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-italian_histogram.png">Italian</a> and <a href="https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/blob/main/figures/fig_distribution_redpajama-spanish_histogram.png">Spanish</a>.)
* <u>Pre-processing and deduplication</u>:
* <u> Url filtering: </u>
* <u>Removing duplicate urls</u>: urls were removed if their base domain overlapped with a dataset already in the Lucie Training Dataset (e.g., "theses.fr") in order to increase diversity of content (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/webdata_processing/base.py#L154)).
* <u>Filtering certain toxic content</u>: urls from a list of blacklisted content were removed (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/webdata_processing/base.py#L177)).
  * <u>Filtering by robots.txt files</u>: robots.txt files were collected and all documents for which CCBot was disallowed, or for which we failed to collect information as of July 2024, were removed, in an effort to select data free from opt-out evidence according to Article 4 of the 2019 European copyright directive (the same check sketched in the FineWebEdu entry above).
* <u>Filtering</u>: A series of filters were applied using [quality signals](https://github.com/togethercomputer/RedPajama-Data?tab=readme-ov-file#quality-annotations) already available in the dataset. This includes (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/d9cccb7bfac37b8c8285f9c04aa67d907ce475f0/webdata_processing/base.py#L36)):
* CCnet perplexity below 10 or above 1000
* C4 filtering (including removal of documents that contain toxic words)
* Gopher filtering and repetition removal
* Redpajama document deduplication
* <u>Removal of personally identifying information (PII)</u>: email addresses and ip addresses were replaced with random addresses (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/webdata_processing/base.py#L301)).
  * <u>MinHash deduplication</u> was performed on each snapshot and language independently, as proposed in FineWeb. For the MinHash configuration, [see code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/webdata_processing/minhash.py#L63). A toy illustration of MinHash deduplication is given at the end of this entry.
The [Datatrove](https://github.com/huggingface/datatrove) library was used to perform both filtering and deduplication stages.
* <u>Citation</u>: Together Computer (2023). "RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models," [GitHub](https://github.com/togethercomputer/RedPajama-Data).
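For readers unfamiliar with MinHash deduplication, the toy example below shows the idea using the `datasketch` library; the actual pipeline uses Datatrove with the configuration linked above, so the shingle size and similarity threshold here are purely illustrative.
```python
# Toy MinHash near-duplicate removal with datasketch (the real pipeline uses Datatrove).
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    tokens = text.lower().split()
    for i in range(max(1, len(tokens) - 4)):          # 5-gram word shingles
        m.update(" ".join(tokens[i:i + 5]).encode("utf-8"))
    return m

docs = {
    "a": "the cat sat on the mat and looked around the quiet room",
    "b": "the cat sat on the mat and looked around the quiet house",   # near-duplicate of "a"
    "c": "completely unrelated text about training language models",
}

lsh = MinHashLSH(threshold=0.7, num_perm=128)
kept = {}
for key, text in docs.items():
    sig = minhash(text)
    if lsh.query(sig):        # similar to an already kept document -> skip
        continue
    lsh.insert(key, sig)
    kept[key] = text

print(sorted(kept))           # "b" is typically dropped as a near-duplicate of "a"
```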
#### STAC
* <u>Source</u>: [STAC](https://www.irit.fr/STAC/corpus.html). License: [CC BY-SA-NC 4.0](https://www.irit.fr/STAC/corpus.html).
* <u>Description</u>: A collection of multiparty chats from an online version of the game Settlers of Catan. The full STAC corpus contains annotations for discourse structure. We use only the text of the chats.
* <u>Citation</u>: Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara and Stergos Afantenos (2016). "[Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus](https://hal.science/hal-02124399/file/asher_22646.pdf)," The Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association, pp. 2721-2727.
#### TheStack (v1.2)
* <u>Source</u>: [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup). License: [Other](https://huggingface.co/datasets/bigcode/the-stack-dedup) (mixture of copyleft licenses).
* <u>Extracted from</u>: [GitHub](https://github.com/) via [GHarchive](https://www.gharchive.org/). Mixed licenses for source.
* <u>Description</u>: "The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as other from code snippets. This is the near-deduplicated version with 3TB data" (from the [dataset card](https://huggingface.co/datasets/bigcode/the-stack-dedup)).
* <u>Citation</u>: Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra and Harm de Vries (2022). "The Stack: 3 TB of permissively licensed source code," [arxiv:2211.15533](https://arxiv.org/abs/2211.15533).
#### Theses
* <u>Source</u>: Corpus contributed by OpenLLM partners.
* <u>Extracted from</u>: [theses.fr](https://theses.fr/?domaine=theses) (License: [Licence Ouverte / Open Licence version 2.0](https://www.data.gouv.fr/fr/datasets/theses-soutenues-en-france-depuis-1985/)) and [HAL](https://hal.science/) ([Open access](https://about.hal.science/)).
* <u>Description</u>: A collection of doctoral theses published in France. Dataset containing text retrieved through OCR.
* <u>Pre-processing</u>:
* <u>Text cleaning</u>:
* Title pages about HAL, pages containing a significant fraction of control characters, and duplicate lines were removed (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/cdec8fd6369385455829ab39c2f04bcb1a8a475a/tokenization/text.py#L277)).
* Because the results of OCR on tables and graphics can give rise to garbage text, the text was cleaned by removing the most suspicious chunks.
In particular, a chunk was removed if it was not detected as being written in French, English, Spanish, German or Italian, or if the perplexity of a CCNet Language Model on the chunk was higher than 2000 (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1946)).
The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
* <u>Filtering</u>: Texts with fewer than 1000 words or 10000 characters were removed (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1975)).
<!-- * <u>Citation</u>: No paper found. -->
#### Wikipedia, Wikisource, Wiktionary
* <u>Source</u>: Corpus contributed by LINAGORA Labs (OpenLLM-France).
Also published here:
* [OpenLLM-France/wikipedia](https://huggingface.co/datasets/OpenLLM-France/wikipedia)
* [OpenLLM-France/wikisource](https://huggingface.co/datasets/OpenLLM-France/wikisource)
* [OpenLLM-France/wiktionary](https://huggingface.co/datasets/OpenLLM-France/wiktionary)
* <u>Extracted from</u>: [Wikimedia dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/). License: [GFDL/CC BY-SA](https://dumps.wikimedia.org/legal.html).
<!-- * <u>Description</u>: TODO -->
<!-- * <u>Pre-processing</u>: TODO -->
<!-- * <u>Citation</u>: No paper found. -->
#### YouTube
* <u>Source</u>: Corpus contributed by LINAGORA Labs and [LeVoiceLab](https://www.levoicelab.org/).
* <u>Extracted from</u>: [YouTube](https://www.youtube.com/). <!-- License: TODO? -->
* <u>Description</u>: French subtitles from videos published with permissive licenses on YouTube. <!-- TODO -->
* <u>Extraction pipeline description</u>:
* **Searching for YouTube videos likely in French:** Based on searches generated automatically from random sequences of words extracted from a corpus of French journalistic articles (initially obtained through a web-crawling tool applied to publicly accessible news and media sites such as Huffington Post, 20 Minutes, Le Parisien, Actu, Numerama, Slate, etc.).
Selection of videos with subtitles labeled as "French," excluding those marked as "automatically generated."
*At this stage: 52,778 videos selected, corresponding to 10,654 hours of audio.*
* **Selection of videos whose subtitle language classification confirms French with a certain confidence index:**
*At this stage: 51,934 videos selected, corresponding to 10,425 hours of audio.*
* **Selection of videos whose subtitles contain uppercase, lowercase, and punctuation marks:**
This step filters out automatically generated subtitles created with speech recognition tools.
*At this stage: 45,488 videos selected, corresponding to 8,904 hours of audio.*
* **Extraction of audio tracks from the selected videos.**
  * **Automatic formatting of transcripts obtained from subtitles:** Removal of emojis, sound event annotations in brackets (like "[Music]") and extra text such as "subtitled by XXX" appearing in the last seconds of the video. A rough sketch of this cleanup is given at the end of this section.
* **Selection of videos where an automatic speech recognition tool correctly transcribes the first 30 seconds with a minimum recall and precision rate:**
*At this stage: 37,513 videos selected, corresponding to 7,541 hours of audio.*
* **Realignment of the transcript:** Ensuring accurate timestamps in the transcriptions based on the subtitles and excluding audios where alignment fails.
*At this stage: 36,618 videos selected, corresponding to 6,729 hours of audio.*
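The automatic formatting step above (removing emojis and bracketed sound-event annotations) can be approximated as follows; the emoji ranges used here are a common approximation, not the exact rule applied in the pipeline.
```python
# Rough sketch of subtitle cleanup (illustrative, not the actual pipeline code).
import re

BRACKETED = re.compile(r"\[[^\]]*\]")                          # e.g. "[Musique]"
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")     # approximate emoji ranges

def clean_subtitle(text: str) -> str:
    text = BRACKETED.sub(" ", text)
    text = EMOJI.sub("", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean_subtitle("[Musique] Bonjour à tous 😀 et bienvenue"))
```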
## Example use in Python
### Load the dataset
Load and iterate over the full dataset using the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", split="train", streaming=True)
for sample in dataset:
text = sample["text"]
# … do something with the text
```
### Iterate over a subset
Several configurations are available to select a language, a source, or both, illustrated in the following examples.
The list of possible configurations can be obtained programmatically:
```python
from datasets import load_dataset_builder
config_names = list(load_dataset_builder("OpenLLM-France/Lucie-Training-Dataset").builder_configs)
print(config_names)
```
```plaintext
['default', 'en', 'fr', 'de', 'es', 'it', 'de,fr', 'es,en', 'fr,en', 'it,en', 'natural', 'code', 'code-assembly', 'code-c', 'code-c#', 'code-c++', 'code-clojure', 'code-dart', 'code-elixir', 'code-erlang', 'code-fortran', 'code-go', 'code-haskell', 'code-java', 'code-javascript', 'code-julia', 'code-kotlin', 'code-lua', 'code-mathematica', 'code-matlab', 'code-ocaml', 'code-perl', 'code-php', 'code-python', 'code-r', 'code-racket', 'code-ruby', 'code-rust', 'code-scala', 'code-swift', 'code-tex', 'code-typescript', 'AmendementsParlement', 'AmericanStories', 'Claire', 'Claire-en', 'Claire-fr', 'CroissantAligned', 'DiscoursPublics', 'Europarl', 'Europarl-de', 'Europarl-en', 'Europarl-es', 'Europarl-fr', 'EuroparlAligned', 'EuroparlAligned-de,fr', 'EuroparlAligned-es,en', 'EuroparlAligned-fr,en', 'EuroparlAligned-it,en', 'Eurovoc', 'Eurovoc-de', 'Eurovoc-en', 'Eurovoc-es', 'Eurovoc-it', 'FineWebEdu', 'GallicaMonographies', 'GallicaPress', 'Gutenberg', 'Gutenberg-de', 'Gutenberg-en', 'Gutenberg-es', 'Gutenberg-fr', 'Gutenberg-it', 'HAL', 'InterventionsParlement', 'LEGI', 'MathPile', 'OpenData', 'OpenEdition', 'PeS2o', 'PeS2o-s2ag', 'PeS2o-s2orc', 'Pile', 'Pile-DM_Mathematics', 'Pile-FreeLaw', 'Pile-NIH_ExPorter', 'Pile-PhilPapers', 'Pile-StackExchange', 'Pile-USPTO_Backgrounds', 'Pile-Ubuntu_IRC', 'QuestionsEcritesParlement', 'RedPajama', 'RedPajama-de', 'RedPajama-es', 'RedPajama-fr', 'RedPajama-it', 'Stac', 'TheStack', 'Theses', 'Wikipedia', 'Wikipedia-de', 'Wikipedia-en', 'Wikipedia-es', 'Wikipedia-fr', 'Wikipedia-it', 'Wikisource', 'Wiktionary', 'YouTube']
```
Below are some examples of how to load data from different sources and in different languages.
Load data in French:
```python
from datasets import load_dataset
kwargs = dict(split="train", streaming=True)
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr", **kwargs)
```
Load data where French and English are aligned:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "fr,en", **kwargs)
```
Load data corresponding to files with programming languages:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code", **kwargs)
```
Load data in Python:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "code-python", **kwargs)
```
Load data from Wikipedia (in all available languages):
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia", **kwargs)
```
Load data from French pages of Wikipedia ([wikipedia.fr](https://www.wikipedia.fr/)):
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Wikipedia-fr", **kwargs)
```
Load the Pile dataset:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Pile", **kwargs)
```
Load the subset "`PhilPapers`" from the Pile dataset:
```python
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", "Pile-PhilPapers", **kwargs)
```
### Load a specific version
You can load a specific version with the `datasets` Python package using the `revision` parameter of `load_dataset(…)`:
```python
from datasets import load_dataset
kwargs = dict(split="train", streaming=True)
name = None # or a configuration (e.g. "fr", "code-python", "Wikipedia-fr", "Pile-PhilPapers")
dataset = load_dataset("OpenLLM-France/Lucie-Training-Dataset", name, revision="v1.2", **kwargs)
```
## Citation
When using the Lucie Training Dataset, please cite the following paper:
✍ Olivier Gouvert, Julie Hunter, Jérôme Louradour,
Christophe Cérisara, Evan Dufraisse, Yaya Sy,
Laura Rivière, Jean-Pierre Lorré (2025).
[The Lucie-7B LLM and the Lucie Training Dataset:
Open resources for multilingual language generation](https://arxiv.org/abs/2503.12294). arxiv:2503.12294.
```bibtex
@misc{openllm2025lucie,
title={The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation},
author={Olivier Gouvert and Julie Hunter and Jérôme Louradour and Christophe Cerisara and Evan Dufraisse and Yaya Sy and Laura Rivière and Jean-Pierre Lorré and OpenLLM-France community},
year={2025},
eprint={2503.12294},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.12294},
}
```
## Acknowledgements
The Lucie Training Dataset was created by members of [LINAGORA](https://labs.linagora.com/) (Olivier Gouvert, Julie Hunter, Jérôme Louradour, Jean-Pierre Lorré) and the [OpenLLM-France](https://www.openllm-france.fr/) community.
We thank in particular Rachel Bawden (INRIA), Clément Bénesse (Opsci), Christophe Cérisara (LORIA), Evan Dufraisse (CEA List), Olivier Ferret (CEA List), Joël Gombin (Opsci), Ismaïl Harrando (LINAGORA), Jordan Ricker (Opsci), Guokan Shang (MBZUAI), and Yaya Sy (LORIA) for their helpful input.
Data storage and significant parts of the data processing were made possible through the HPC resources from GENCI–IDRIS (Grant 2024-GC011015444).
## Contact
<pre>[email protected]</pre>
|
stanfordnlp/sst2 | stanfordnlp | "2024-01-04T16:31:07Z" | 13,980 | 114 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2022-06-13T14:01:47Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: sst
pretty_name: Stanford Sentiment Treebank v2
dataset_info:
features:
- name: idx
dtype: int32
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 4681603
num_examples: 67349
- name: validation
num_bytes: 106252
num_examples: 872
- name: test
num_bytes: 216640
num_examples: 1821
download_size: 3331058
dataset_size: 5004495
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for the Stanford Sentiment Treebank v2 (SST-2)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/sentiment/
- **Repository:**
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://www.aclweb.org/anthology/D13-1170/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the
compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005)
and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and
includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.
Binary classification experiments on full sentences (negative or somewhat negative vs somewhat positive or positive
with neutral sentences discarded) refer to the dataset as SST-2 or SST binary.
### Supported Tasks and Leaderboards
- `sentiment-classification`
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
```
{'idx': 0,
'sentence': 'hide new secretions from the parental units ',
'label': 0}
```
### Data Fields
- `idx`: Monotonically increasing index ID.
- `sentence`: Complete sentence expressing an opinion about a film.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1). The test set labels are hidden (-1). A minimal loading example is given after the data splits table below.
### Data Splits
| | train | validation | test |
|--------------------|---------:|-----------:|-----:|
| Number of examples | 67349 | 872 | 1821 |
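A minimal example of loading the dataset with the `datasets` library and inspecting one training example:
```python
from datasets import load_dataset

sst2 = load_dataset("stanfordnlp/sst2")
print(sst2)                            # DatasetDict with train/validation/test splits
example = sst2["train"][0]
print(example["sentence"], example["label"])
```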
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Rotten Tomatoes reviewers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```bibtex
@inproceedings{socher-etal-2013-recursive,
title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
author = "Socher, Richard and
Perelygin, Alex and
Wu, Jean and
Chuang, Jason and
Manning, Christopher D. and
Ng, Andrew and
Potts, Christopher",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1170",
pages = "1631--1642",
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
RMT-team/babilong | RMT-team | "2024-06-17T09:49:52Z" | 13,938 | 14 | [
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.10149",
"arxiv:2402.10790",
"arxiv:1502.05698",
"region:us"
] | null | "2024-03-07T14:15:14Z" | ---
language:
- en
configs:
- config_name: 0k
data_files:
- split: qa1
path: data/qa1/0k.json
- split: qa2
path: data/qa2/0k.json
- split: qa3
path: data/qa3/0k.json
- split: qa4
path: data/qa4/0k.json
- split: qa5
path: data/qa5/0k.json
- split: qa6
path: data/qa6/0k.json
- split: qa7
path: data/qa7/0k.json
- split: qa8
path: data/qa8/0k.json
- split: qa9
path: data/qa9/0k.json
- split: qa10
path: data/qa10/0k.json
- split: qa11
path: data/qa11/0k.json
- split: qa12
path: data/qa12/0k.json
- split: qa13
path: data/qa13/0k.json
- split: qa14
path: data/qa14/0k.json
- split: qa15
path: data/qa15/0k.json
- split: qa16
path: data/qa16/0k.json
- split: qa17
path: data/qa17/0k.json
- split: qa18
path: data/qa18/0k.json
- split: qa19
path: data/qa19/0k.json
- split: qa20
path: data/qa20/0k.json
- config_name: 1k
data_files:
- split: qa1
path: data/qa1/1k.json
- split: qa2
path: data/qa2/1k.json
- split: qa3
path: data/qa3/1k.json
- split: qa4
path: data/qa4/1k.json
- split: qa5
path: data/qa5/1k.json
- split: qa6
path: data/qa6/1k.json
- split: qa7
path: data/qa7/1k.json
- split: qa8
path: data/qa8/1k.json
- split: qa9
path: data/qa9/1k.json
- split: qa10
path: data/qa10/1k.json
- config_name: 2k
data_files:
- split: qa1
path: data/qa1/2k.json
- split: qa2
path: data/qa2/2k.json
- split: qa3
path: data/qa3/2k.json
- split: qa4
path: data/qa4/2k.json
- split: qa5
path: data/qa5/2k.json
- split: qa6
path: data/qa6/2k.json
- split: qa7
path: data/qa7/2k.json
- split: qa8
path: data/qa8/2k.json
- split: qa9
path: data/qa9/2k.json
- split: qa10
path: data/qa10/2k.json
- config_name: 4k
data_files:
- split: qa1
path: data/qa1/4k.json
- split: qa2
path: data/qa2/4k.json
- split: qa3
path: data/qa3/4k.json
- split: qa4
path: data/qa4/4k.json
- split: qa5
path: data/qa5/4k.json
- split: qa6
path: data/qa6/4k.json
- split: qa7
path: data/qa7/4k.json
- split: qa8
path: data/qa8/4k.json
- split: qa9
path: data/qa9/4k.json
- split: qa10
path: data/qa10/4k.json
- config_name: 8k
data_files:
- split: qa1
path: data/qa1/8k.json
- split: qa2
path: data/qa2/8k.json
- split: qa3
path: data/qa3/8k.json
- split: qa4
path: data/qa4/8k.json
- split: qa5
path: data/qa5/8k.json
- split: qa6
path: data/qa6/8k.json
- split: qa7
path: data/qa7/8k.json
- split: qa8
path: data/qa8/8k.json
- split: qa9
path: data/qa9/8k.json
- split: qa10
path: data/qa10/8k.json
- config_name: 16k
data_files:
- split: qa1
path: data/qa1/16k.json
- split: qa2
path: data/qa2/16k.json
- split: qa3
path: data/qa3/16k.json
- split: qa4
path: data/qa4/16k.json
- split: qa5
path: data/qa5/16k.json
- split: qa6
path: data/qa6/16k.json
- split: qa7
path: data/qa7/16k.json
- split: qa8
path: data/qa8/16k.json
- split: qa9
path: data/qa9/16k.json
- split: qa10
path: data/qa10/16k.json
- config_name: 32k
data_files:
- split: qa1
path: data/qa1/32k.json
- split: qa2
path: data/qa2/32k.json
- split: qa3
path: data/qa3/32k.json
- split: qa4
path: data/qa4/32k.json
- split: qa5
path: data/qa5/32k.json
- split: qa6
path: data/qa6/32k.json
- split: qa7
path: data/qa7/32k.json
- split: qa8
path: data/qa8/32k.json
- split: qa9
path: data/qa9/32k.json
- split: qa10
path: data/qa10/32k.json
- config_name: 64k
data_files:
- split: qa1
path: data/qa1/64k.json
- split: qa2
path: data/qa2/64k.json
- split: qa3
path: data/qa3/64k.json
- split: qa4
path: data/qa4/64k.json
- split: qa5
path: data/qa5/64k.json
- split: qa6
path: data/qa6/64k.json
- split: qa7
path: data/qa7/64k.json
- split: qa8
path: data/qa8/64k.json
- split: qa9
path: data/qa9/64k.json
- split: qa10
path: data/qa10/64k.json
- config_name: 128k
data_files:
- split: qa1
path: data/qa1/128k.json
- split: qa2
path: data/qa2/128k.json
- split: qa3
path: data/qa3/128k.json
- split: qa4
path: data/qa4/128k.json
- split: qa5
path: data/qa5/128k.json
- split: qa6
path: data/qa6/128k.json
- split: qa7
path: data/qa7/128k.json
- split: qa8
path: data/qa8/128k.json
- split: qa9
path: data/qa9/128k.json
- split: qa10
path: data/qa10/128k.json
- config_name: 256k
data_files:
- split: qa1
path: data/qa1/256k.json
- split: qa2
path: data/qa2/256k.json
- split: qa3
path: data/qa3/256k.json
- split: qa4
path: data/qa4/256k.json
- split: qa5
path: data/qa5/256k.json
- split: qa6
path: data/qa6/256k.json
- split: qa7
path: data/qa7/256k.json
- split: qa8
path: data/qa8/256k.json
- split: qa9
path: data/qa9/256k.json
- split: qa10
path: data/qa10/256k.json
- config_name: 512k
data_files:
- split: qa1
path: data/qa1/512k.json
- split: qa2
path: data/qa2/512k.json
- split: qa3
path: data/qa3/512k.json
- split: qa4
path: data/qa4/512k.json
- split: qa5
path: data/qa5/512k.json
- split: qa6
path: data/qa6/512k.json
- split: qa7
path: data/qa7/512k.json
- split: qa8
path: data/qa8/512k.json
- split: qa9
path: data/qa9/512k.json
- split: qa10
path: data/qa10/512k.json
- config_name: 1M
data_files:
- split: qa1
path: data/qa1/1M.json
- split: qa2
path: data/qa2/1M.json
- split: qa3
path: data/qa3/1M.json
- split: qa4
path: data/qa4/1M.json
- split: qa5
path: data/qa5/1M.json
- split: qa6
path: data/qa6/1M.json
- split: qa7
path: data/qa7/1M.json
- split: qa8
path: data/qa8/1M.json
- split: qa9
path: data/qa9/1M.json
- split: qa10
path: data/qa10/1M.json
- config_name: 10M
data_files:
- split: qa1
path: data/qa1/10M.json
- split: qa2
path: data/qa2/10M.json
- split: qa3
path: data/qa3/10M.json
- split: qa4
path: data/qa4/10M.json
- split: qa5
path: data/qa5/10M.json
- config_name: qa1
data_files:
- split: 0k
path: data/qa1/0k.json
- split: 1k
path: data/qa1/1k.json
- split: 2k
path: data/qa1/2k.json
- split: 4k
path: data/qa1/4k.json
- split: 8k
path: data/qa1/8k.json
- split: 16k
path: data/qa1/16k.json
- split: 32k
path: data/qa1/32k.json
- split: 64k
path: data/qa1/64k.json
- split: 128k
path: data/qa1/128k.json
- split: 256k
path: data/qa1/256k.json
- split: 512k
path: data/qa1/512k.json
- split: 1M
path: data/qa1/1M.json
- config_name: qa2
data_files:
- split: 0k
path: data/qa2/0k.json
- split: 1k
path: data/qa2/1k.json
- split: 2k
path: data/qa2/2k.json
- split: 4k
path: data/qa2/4k.json
- split: 8k
path: data/qa2/8k.json
- split: 16k
path: data/qa2/16k.json
- split: 32k
path: data/qa2/32k.json
- split: 64k
path: data/qa2/64k.json
- split: 128k
path: data/qa2/128k.json
- split: 256k
path: data/qa2/256k.json
- split: 512k
path: data/qa2/512k.json
- split: 1M
path: data/qa2/1M.json
- config_name: qa3
data_files:
- split: 0k
path: data/qa3/0k.json
- split: 1k
path: data/qa3/1k.json
- split: 2k
path: data/qa3/2k.json
- split: 4k
path: data/qa3/4k.json
- split: 8k
path: data/qa3/8k.json
- split: 16k
path: data/qa3/16k.json
- split: 32k
path: data/qa3/32k.json
- split: 64k
path: data/qa3/64k.json
- split: 128k
path: data/qa3/128k.json
- split: 256k
path: data/qa3/256k.json
- split: 512k
path: data/qa3/512k.json
- split: 1M
path: data/qa3/1M.json
- config_name: qa4
data_files:
- split: 0k
path: data/qa4/0k.json
- split: 1k
path: data/qa4/1k.json
- split: 2k
path: data/qa4/2k.json
- split: 4k
path: data/qa4/4k.json
- split: 8k
path: data/qa4/8k.json
- split: 16k
path: data/qa4/16k.json
- split: 32k
path: data/qa4/32k.json
- split: 64k
path: data/qa4/64k.json
- split: 128k
path: data/qa4/128k.json
- split: 256k
path: data/qa4/256k.json
- split: 512k
path: data/qa4/512k.json
- split: 1M
path: data/qa4/1M.json
- config_name: qa5
data_files:
- split: 0k
path: data/qa5/0k.json
- split: 1k
path: data/qa5/1k.json
- split: 2k
path: data/qa5/2k.json
- split: 4k
path: data/qa5/4k.json
- split: 8k
path: data/qa5/8k.json
- split: 16k
path: data/qa5/16k.json
- split: 32k
path: data/qa5/32k.json
- split: 64k
path: data/qa5/64k.json
- split: 128k
path: data/qa5/128k.json
- split: 256k
path: data/qa5/256k.json
- split: 512k
path: data/qa5/512k.json
- split: 1M
path: data/qa5/1M.json
- config_name: qa6
data_files:
- split: 0k
path: data/qa6/0k.json
- split: 1k
path: data/qa6/1k.json
- split: 2k
path: data/qa6/2k.json
- split: 4k
path: data/qa6/4k.json
- split: 8k
path: data/qa6/8k.json
- split: 16k
path: data/qa6/16k.json
- split: 32k
path: data/qa6/32k.json
- split: 64k
path: data/qa6/64k.json
- split: 128k
path: data/qa6/128k.json
- split: 256k
path: data/qa6/256k.json
- split: 512k
path: data/qa6/512k.json
- split: 1M
path: data/qa6/1M.json
- config_name: qa7
data_files:
- split: 0k
path: data/qa7/0k.json
- split: 1k
path: data/qa7/1k.json
- split: 2k
path: data/qa7/2k.json
- split: 4k
path: data/qa7/4k.json
- split: 8k
path: data/qa7/8k.json
- split: 16k
path: data/qa7/16k.json
- split: 32k
path: data/qa7/32k.json
- split: 64k
path: data/qa7/64k.json
- split: 128k
path: data/qa7/128k.json
- split: 256k
path: data/qa7/256k.json
- split: 512k
path: data/qa7/512k.json
- split: 1M
path: data/qa7/1M.json
- config_name: qa8
data_files:
- split: 0k
path: data/qa8/0k.json
- split: 1k
path: data/qa8/1k.json
- split: 2k
path: data/qa8/2k.json
- split: 4k
path: data/qa8/4k.json
- split: 8k
path: data/qa8/8k.json
- split: 16k
path: data/qa8/16k.json
- split: 32k
path: data/qa8/32k.json
- split: 64k
path: data/qa8/64k.json
- split: 128k
path: data/qa8/128k.json
- split: 256k
path: data/qa8/256k.json
- split: 512k
path: data/qa8/512k.json
- split: 1M
path: data/qa8/1M.json
- config_name: qa9
data_files:
- split: 0k
path: data/qa9/0k.json
- split: 1k
path: data/qa9/1k.json
- split: 2k
path: data/qa9/2k.json
- split: 4k
path: data/qa9/4k.json
- split: 8k
path: data/qa9/8k.json
- split: 16k
path: data/qa9/16k.json
- split: 32k
path: data/qa9/32k.json
- split: 64k
path: data/qa9/64k.json
- split: 128k
path: data/qa9/128k.json
- split: 256k
path: data/qa9/256k.json
- split: 512k
path: data/qa9/512k.json
- split: 1M
path: data/qa9/1M.json
- config_name: qa10
data_files:
- split: 0k
path: data/qa10/0k.json
- split: 1k
path: data/qa10/1k.json
- split: 2k
path: data/qa10/2k.json
- split: 4k
path: data/qa10/4k.json
- split: 8k
path: data/qa10/8k.json
- split: 16k
path: data/qa10/16k.json
- split: 32k
path: data/qa10/32k.json
- split: 64k
path: data/qa10/64k.json
- split: 128k
path: data/qa10/128k.json
- split: 256k
path: data/qa10/256k.json
- split: 512k
path: data/qa10/512k.json
- split: 1M
path: data/qa10/1M.json
---
# BABILong (100 samples): a long-context needle-in-a-haystack benchmark for LLMs
The preprint is available on [arXiv](https://arxiv.org/abs/2406.10149), and the code for LLM evaluation is available on [GitHub](https://github.com/booydar/babilong).
The [BABILong Leaderboard](https://huggingface.co/spaces/RMT-team/babilong) tracks top-performing long-context models.
## bAbI + Books = BABILong
**BABILong** is a novel generative benchmark for evaluating the performance of NLP models in
processing arbitrarily long documents with distributed facts.
It contains configs corresponding to different sequence lengths in tokens — 0k, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, 1M and 10M (the 10M config covers tasks qa1–qa5 only) — as well as per-task configs qa1–qa10, whose splits are the sequence lengths.
```
from datasets import load_dataset
babilong = load_dataset("RMT-team/babilong", "128k")["qa1"]
```
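The same data can also be loaded per task, with the sequence lengths as splits (see the `qa1`–`qa10` configs above). A minimal sketch of this equivalent view:
```
from datasets import load_dataset

# One config per task, one split per context length (0k ... 1M).
qa1 = load_dataset("RMT-team/babilong", "qa1")
sample = qa1["128k"][0]            # first 128k-token example of task qa1
print(qa1["128k"].column_names)    # inspect the available fields
```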
Solving tasks with a long context size requires the model to distinguish important information from large amounts of irrelevant details. To simulate this behavior, we "hide" the sentences of the original task between the sentences of irrelevant text. We use the [bAbI](https://huggingface.co/datasets/facebook/babi_qa) dataset [1] as facts and [PG19](https://huggingface.co/datasets/pg19) as background text. The resulting test samples might have lengths of **millions of tokens**.
BABILong consists of 10 tasks designed for evaluation of basic aspects of reasoning. The bAbI tasks are generated by simulating a set of characters and objects engaged in various movements and interactions with each other in multiple locations. Each interaction is represented by a fact, e.g. **"Mary travelled to the office"**, and the task is to answer a question using the facts from the current simulation, for instance, **"Where is Mary?"**. The bAbI tasks vary based on the number of facts, question complexity and the aspects of reasoning.
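A toy sketch of the "needle hiding" step described above (illustrative only — not the exact generation pipeline used to build BABILong): task facts are scattered through background sentences while their relative order is preserved.
```
import random

def hide_facts(facts, background_sentences, seed=0):
    """Interleave task facts into irrelevant background text, keeping fact order."""
    rng = random.Random(seed)
    haystack = list(background_sentences)
    # Pick insertion points left-to-right so the facts keep their original order.
    positions = sorted(rng.sample(range(len(haystack) + 1), k=len(facts)))
    for offset, (pos, fact) in enumerate(zip(positions, facts)):
        haystack.insert(pos + offset, fact)
    return " ".join(haystack)

facts = ["Mary travelled to the office.", "John went to the kitchen."]
background = ["It was a quiet morning in the valley."] * 20  # stand-in for PG19 text
print(hide_facts(facts, background))
```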
### First ten tasks of BABILong
| Task | Name | facts per task | supporting facts per task |
|------|--------------------------|-----------------|---------------------------|
| qa1 | single supporting fact | 2 - 10 | 1 |
| qa2 | two supporting facts | 2 - 68 | 2 |
| qa3 | three supporting facts | 4 - 32 | 3 |
| qa4 | two arg relations | 2 | 1 |
| qa5 | three arg relations | 2 - 126 | 1 |
| qa6 | yes-no questions | 2 - 26 | 1 |
| qa7  | counting                 | 2 - 52          | 1 - 10                    |
| qa8  | lists-sets               | 2 - 50          | 1 - 8                     |
| qa9 | simple negation | 2 - 10 | 1 |
| qa10 | indefinite knowledge | 2 - 10 | 1 |
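As a reference, a small sketch of sweeping the tasks and context lengths above when evaluating a model (the loop only loads the data and reports its size; plugging in an actual model call is left to the reader):
```
from datasets import load_dataset

tasks = [f"qa{i}" for i in range(1, 11)]
lengths = ["0k", "1k", "2k", "4k", "8k", "16k", "32k", "64k", "128k", "256k", "512k", "1M"]

for length in lengths:
    splits = load_dataset("RMT-team/babilong", length)   # downloads the whole config
    for task in tasks:
        if task not in splits:
            continue  # some configs cover only a subset of tasks (e.g. 10M -> qa1-qa5)
        ds = splits[task]
        print(f"{task} @ {length}: {len(ds)} samples, columns={ds.column_names}")
```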
Join us in this exciting endeavor and let's push the boundaries of what's possible together!
## Citation
```
@misc{kuratov2024babilong,
title={BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack},
author={Yuri Kuratov and Aydar Bulatov and Petr Anokhin and Ivan Rodkin and Dmitry Sorokin and Artyom Sorokin and Mikhail Burtsev},
year={2024},
eprint={2406.10149},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```
@misc{kuratov2024search,
title={In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss},
author={Yuri Kuratov and Aydar Bulatov and Petr Anokhin and Dmitry Sorokin and Artyom Sorokin and Mikhail Burtsev},
year={2024},
eprint={2402.10790},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## References
[1] Weston, Jason, et al. "Towards ai-complete question answering: A set of prerequisite toy tasks." arXiv preprint [arXiv:1502.05698](https://arxiv.org/abs/1502.05698) (2015).
## License
Our code is released under the Apache 2.0 License. We use data from the PG-19 corpora (Rae et al., 2020) ([Apache 2.0 License](https://github.com/google-deepmind/pg19/blob/master/LICENSE)) and the bAbI dataset (Weston et al., 2016) ([BSD License](https://github.com/facebookarchive/bAbI-tasks/blob/master/LICENSE.md)). |
introvoyz041/Perforator | introvoyz041 | "2025-01-31T12:29:38Z" | 13,910 | 0 | [
"license:mit",
"region:us"
] | null | "2025-01-31T12:19:36Z" | ---
license: mit
---
|
gigant/tib_slides_wip | gigant | "2023-03-26T16:22:49Z" | 13,860 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-03-26T00:20:40Z" | ---
dataset_info:
features:
- name: Image
dtype: image
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 161850916866.84
num_examples: 595458
download_size: 29396407498
dataset_size: 161850916866.84
---
# Dataset Card for "tib_slides_wip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
legacy-datasets/common_voice | legacy-datasets | "2024-08-22T08:27:23Z" | 13,854 | 137 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"language:ab",
"language:ar",
"language:as",
"language:br",
"language:ca",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:hi",
"language:hsb",
"language:hu",
"language:ia",
"language:id",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:ky",
"language:lg",
"language:lt",
"language:lv",
"language:mn",
"language:mt",
"language:nl",
"language:or",
"language:pa",
"language:pl",
"language:pt",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sl",
"language:sv",
"language:ta",
"language:th",
"language:tr",
"language:tt",
"language:uk",
"language:vi",
"language:vot",
"language:zh",
"license:cc0-1.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"automatic-speech-recognition"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: Common Voice
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sl
- sv
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh
language_bcp47:
- fy-NL
- ga-IE
- pa-IN
- rm-sursilv
- rm-vallader
- sv-SE
- zh-CN
- zh-HK
- zh-TW
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: common-voice
viewer: false
dataset_info:
- config_name: ab
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 1295622
num_examples: 22
- name: test
num_bytes: 411844
num_examples: 9
- name: validation
- name: other
num_bytes: 40023390
num_examples: 752
- name: validated
num_bytes: 1707426
num_examples: 31
- name: invalidated
num_bytes: 361626
num_examples: 8
download_size: 41038412
dataset_size: 43799908
- config_name: ar
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 359335168
num_examples: 14227
- name: test
num_bytes: 237546641
num_examples: 7622
- name: validation
num_bytes: 209606861
num_examples: 7517
- name: other
num_bytes: 515822404
num_examples: 18283
- name: validated
num_bytes: 1182522872
num_examples: 43291
- name: invalidated
num_bytes: 194805036
num_examples: 6333
download_size: 1756264615
dataset_size: 2699638982
- config_name: as
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 11442279
num_examples: 270
- name: test
num_bytes: 5071343
num_examples: 110
- name: validation
num_bytes: 5480156
num_examples: 124
- name: other
- name: validated
num_bytes: 21993698
num_examples: 504
- name: invalidated
num_bytes: 886145
num_examples: 31
download_size: 22226465
dataset_size: 44873621
- config_name: br
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 62238289
num_examples: 2780
- name: test
num_bytes: 54461339
num_examples: 2087
- name: validation
num_bytes: 46995570
num_examples: 1997
- name: other
num_bytes: 269858143
num_examples: 10912
- name: validated
num_bytes: 203503622
num_examples: 8560
- name: invalidated
num_bytes: 20861017
num_examples: 623
download_size: 465276982
dataset_size: 657917980
- config_name: ca
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 12966939466
num_examples: 285584
- name: test
num_bytes: 745761890
num_examples: 15724
- name: validation
num_bytes: 716442038
num_examples: 15724
- name: other
num_bytes: 2693542910
num_examples: 64446
- name: validated
num_bytes: 18115833966
num_examples: 416701
- name: invalidated
num_bytes: 850402888
num_examples: 18846
download_size: 20743110341
dataset_size: 36088923158
- config_name: cnh
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 18866674
num_examples: 807
- name: test
num_bytes: 24675321
num_examples: 752
- name: validation
num_bytes: 22162315
num_examples: 756
- name: other
num_bytes: 84878963
num_examples: 2934
- name: validated
num_bytes: 69330148
num_examples: 2432
- name: invalidated
num_bytes: 13642724
num_examples: 433
download_size: 161331331
dataset_size: 233556145
- config_name: cs
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 215205282
num_examples: 5655
- name: test
num_bytes: 148499476
num_examples: 4144
- name: validation
num_bytes: 148312130
num_examples: 4118
- name: other
num_bytes: 282225475
num_examples: 7475
- name: validated
num_bytes: 1019817024
num_examples: 30431
- name: invalidated
num_bytes: 24717823
num_examples: 685
download_size: 1271909933
dataset_size: 1838777210
- config_name: cv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 31649510
num_examples: 931
- name: test
num_bytes: 32513061
num_examples: 788
- name: validation
num_bytes: 28429779
num_examples: 818
- name: other
num_bytes: 288294623
num_examples: 6927
- name: validated
num_bytes: 126717875
num_examples: 3496
- name: invalidated
num_bytes: 57923138
num_examples: 1282
download_size: 439329081
dataset_size: 565527986
- config_name: cy
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 271642649
num_examples: 6839
- name: test
num_bytes: 206865596
num_examples: 4820
- name: validation
num_bytes: 201813388
num_examples: 4776
- name: other
num_bytes: 688469886
num_examples: 17919
- name: validated
num_bytes: 2763112391
num_examples: 72984
- name: invalidated
num_bytes: 146874576
num_examples: 3648
download_size: 3434474658
dataset_size: 4278778486
- config_name: de
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 11463160619
num_examples: 246525
- name: test
num_bytes: 744617681
num_examples: 15588
- name: validation
num_bytes: 729559862
num_examples: 15588
- name: other
num_bytes: 464513461
num_examples: 10095
- name: validated
num_bytes: 22402489041
num_examples: 565186
- name: invalidated
num_bytes: 1440604803
num_examples: 32789
download_size: 23283812097
dataset_size: 37244945467
- config_name: dv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 118576140
num_examples: 2680
- name: test
num_bytes: 94281409
num_examples: 2202
- name: validation
num_bytes: 94117088
num_examples: 2077
- name: other
- name: validated
num_bytes: 528571107
num_examples: 11866
- name: invalidated
num_bytes: 37694847
num_examples: 840
download_size: 540488041
dataset_size: 873240591
- config_name: el
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 80759076
num_examples: 2316
- name: test
num_bytes: 53820491
num_examples: 1522
- name: validation
num_bytes: 44818565
num_examples: 1401
- name: other
num_bytes: 186861175
num_examples: 5659
- name: validated
num_bytes: 204446790
num_examples: 5996
- name: invalidated
num_bytes: 6023769
num_examples: 185
download_size: 381570611
dataset_size: 576729866
- config_name: en
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 26088826658
num_examples: 564337
- name: test
num_bytes: 758718688
num_examples: 16164
- name: validation
num_bytes: 795638801
num_examples: 16164
- name: other
num_bytes: 5796244022
num_examples: 169895
- name: validated
num_bytes: 48425872575
num_examples: 1224864
- name: invalidated
num_bytes: 9122973965
num_examples: 189562
download_size: 60613063630
dataset_size: 90988274709
- config_name: eo
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 993655930
num_examples: 19587
- name: test
num_bytes: 420153812
num_examples: 8969
- name: validation
num_bytes: 391427586
num_examples: 8987
- name: other
num_bytes: 142476819
num_examples: 2946
- name: validated
num_bytes: 2603249289
num_examples: 58094
- name: invalidated
num_bytes: 238105462
num_examples: 4736
download_size: 2883560869
dataset_size: 4789068898
- config_name: es
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 6918333205
num_examples: 161813
- name: test
num_bytes: 754049291
num_examples: 15089
- name: validation
num_bytes: 735558084
num_examples: 15089
- name: other
num_bytes: 5528972205
num_examples: 144791
- name: validated
num_bytes: 9623788388
num_examples: 236314
- name: invalidated
num_bytes: 1664876264
num_examples: 40640
download_size: 16188844718
dataset_size: 25225577437
- config_name: et
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 161124199
num_examples: 2966
- name: test
num_bytes: 133183135
num_examples: 2509
- name: validation
num_bytes: 137604813
num_examples: 2507
- name: other
num_bytes: 30339130
num_examples: 569
- name: validated
num_bytes: 573417188
num_examples: 10683
- name: invalidated
num_bytes: 193019544
num_examples: 3557
download_size: 767174465
dataset_size: 1228688009
- config_name: eu
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 317322801
num_examples: 7505
- name: test
num_bytes: 238866501
num_examples: 5172
- name: validation
num_bytes: 228150083
num_examples: 5172
- name: other
num_bytes: 988079897
num_examples: 23570
- name: validated
num_bytes: 2621488299
num_examples: 63009
- name: invalidated
num_bytes: 208553909
num_examples: 5387
download_size: 3664586106
dataset_size: 4602461490
- config_name: fa
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 239255087
num_examples: 7593
- name: test
num_bytes: 217939210
num_examples: 5213
- name: validation
num_bytes: 196558067
num_examples: 5213
- name: other
num_bytes: 737017546
num_examples: 22510
- name: validated
num_bytes: 8120181903
num_examples: 251659
- name: invalidated
num_bytes: 499570226
num_examples: 11698
download_size: 8884585819
dataset_size: 10010522039
- config_name: fi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 16017393
num_examples: 460
- name: test
num_bytes: 16117529
num_examples: 428
- name: validation
num_bytes: 15471757
num_examples: 415
- name: other
num_bytes: 5836400
num_examples: 149
- name: validated
num_bytes: 47669391
num_examples: 1305
- name: invalidated
num_bytes: 2228215
num_examples: 59
download_size: 49882909
dataset_size: 103340685
- config_name: fr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 12439892070
num_examples: 298982
- name: test
num_bytes: 733943163
num_examples: 15763
- name: validation
num_bytes: 703801114
num_examples: 15763
- name: other
num_bytes: 117998889
num_examples: 3222
- name: validated
num_bytes: 17921836252
num_examples: 461004
- name: invalidated
num_bytes: 1794149368
num_examples: 40351
download_size: 19130141984
dataset_size: 33711620856
- config_name: fy-NL
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 159116360
num_examples: 3927
- name: test
num_bytes: 126913262
num_examples: 3020
- name: validation
num_bytes: 112288554
num_examples: 2790
- name: other
num_bytes: 893887467
num_examples: 21569
- name: validated
num_bytes: 429651922
num_examples: 10495
- name: invalidated
num_bytes: 38985422
num_examples: 1031
download_size: 1237743070
dataset_size: 1760842987
- config_name: ga-IE
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 15396820
num_examples: 541
- name: test
num_bytes: 16611739
num_examples: 506
- name: validation
num_bytes: 14897739
num_examples: 497
- name: other
num_bytes: 61948768
num_examples: 2130
- name: validated
num_bytes: 93371649
num_examples: 3352
- name: invalidated
num_bytes: 10993268
num_examples: 409
download_size: 156553447
dataset_size: 213219983
- config_name: hi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 4860737
num_examples: 157
- name: test
num_bytes: 4728043
num_examples: 127
- name: validation
num_bytes: 5569352
num_examples: 135
- name: other
num_bytes: 4176110
num_examples: 139
- name: validated
num_bytes: 15158052
num_examples: 419
- name: invalidated
num_bytes: 2801051
num_examples: 60
download_size: 21424045
dataset_size: 37293345
- config_name: hsb
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 43049910
num_examples: 808
- name: test
num_bytes: 20929094
num_examples: 387
- name: validation
num_bytes: 8769458
num_examples: 172
- name: other
num_bytes: 3173841
num_examples: 62
- name: validated
num_bytes: 72748422
num_examples: 1367
- name: invalidated
num_bytes: 5589972
num_examples: 227
download_size: 79362060
dataset_size: 154260697
- config_name: hu
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 126163153
num_examples: 3348
- name: test
num_bytes: 57056435
num_examples: 1649
- name: validation
num_bytes: 50306925
num_examples: 1434
- name: other
num_bytes: 12051094
num_examples: 295
- name: validated
num_bytes: 234307671
num_examples: 6457
- name: invalidated
num_bytes: 5881521
num_examples: 169
download_size: 242758708
dataset_size: 485766799
- config_name: ia
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 96577153
num_examples: 3477
- name: test
num_bytes: 33204678
num_examples: 899
- name: validation
num_bytes: 67436779
num_examples: 1601
- name: other
num_bytes: 30937041
num_examples: 1095
- name: validated
num_bytes: 197248304
num_examples: 5978
- name: invalidated
num_bytes: 6769573
num_examples: 192
download_size: 226499645
dataset_size: 432173528
- config_name: id
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 63515863
num_examples: 2130
- name: test
num_bytes: 60711104
num_examples: 1844
- name: validation
num_bytes: 56963520
num_examples: 1835
- name: other
num_bytes: 206578628
num_examples: 6782
- name: validated
num_bytes: 272570942
num_examples: 8696
- name: invalidated
num_bytes: 16566129
num_examples: 470
download_size: 475918233
dataset_size: 676906186
- config_name: it
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 2555546829
num_examples: 58015
- name: test
num_bytes: 656285877
num_examples: 12928
- name: validation
num_bytes: 621955330
num_examples: 12928
- name: other
num_bytes: 671213467
num_examples: 14549
- name: validated
num_bytes: 4552252754
num_examples: 102579
- name: invalidated
num_bytes: 564610354
num_examples: 12189
download_size: 5585781573
dataset_size: 9621864611
- config_name: ja
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 27600264
num_examples: 722
- name: test
num_bytes: 26475556
num_examples: 632
- name: validation
num_bytes: 22098940
num_examples: 586
- name: other
num_bytes: 34588931
num_examples: 885
- name: validated
num_bytes: 106916400
num_examples: 3072
- name: invalidated
num_bytes: 17819020
num_examples: 504
download_size: 152879796
dataset_size: 235499111
- config_name: ka
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 47790695
num_examples: 1058
- name: test
num_bytes: 30301524
num_examples: 656
- name: validation
num_bytes: 24951079
num_examples: 527
- name: other
num_bytes: 2144603
num_examples: 44
- name: validated
num_bytes: 104135978
num_examples: 2275
- name: invalidated
num_bytes: 7004160
num_examples: 139
download_size: 104280554
dataset_size: 216328039
- config_name: kab
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 3219289101
num_examples: 120530
- name: test
num_bytes: 446453041
num_examples: 14622
- name: validation
num_bytes: 414159937
num_examples: 14622
- name: other
num_bytes: 2282481767
num_examples: 88021
- name: validated
num_bytes: 15310455176
num_examples: 573718
- name: invalidated
num_bytes: 581587104
num_examples: 18134
download_size: 17171606918
dataset_size: 22254426126
- config_name: ky
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 75460488
num_examples: 1955
- name: test
num_bytes: 57116561
num_examples: 1503
- name: validation
num_bytes: 61393867
num_examples: 1511
- name: other
num_bytes: 258081579
num_examples: 7223
- name: validated
num_bytes: 355742823
num_examples: 9236
- name: invalidated
num_bytes: 41007711
num_examples: 926
download_size: 579440853
dataset_size: 848803029
- config_name: lg
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 46910479
num_examples: 1250
- name: test
num_bytes: 26951803
num_examples: 584
- name: validation
num_bytes: 16709367
num_examples: 384
- name: other
num_bytes: 111180838
num_examples: 3110
- name: validated
num_bytes: 90606863
num_examples: 2220
- name: invalidated
num_bytes: 14069959
num_examples: 290
download_size: 208197149
dataset_size: 306429309
- config_name: lt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 34605356
num_examples: 931
- name: test
num_bytes: 19940391
num_examples: 466
- name: validation
num_bytes: 10462851
num_examples: 244
- name: other
num_bytes: 71150206
num_examples: 1629
- name: validated
num_bytes: 65138550
num_examples: 1644
- name: invalidated
num_bytes: 4414780
num_examples: 102
download_size: 135299706
dataset_size: 205712134
- config_name: lv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 67269173
num_examples: 2552
- name: test
num_bytes: 56937435
num_examples: 1882
- name: validation
num_bytes: 55289058
num_examples: 2002
- name: other
num_bytes: 40259801
num_examples: 1560
- name: validated
num_bytes: 179726893
num_examples: 6444
- name: invalidated
num_bytes: 4383319
num_examples: 143
download_size: 208307691
dataset_size: 403865679
- config_name: mn
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 89913910
num_examples: 2183
- name: test
num_bytes: 86737041
num_examples: 1862
- name: validation
num_bytes: 82343275
num_examples: 1837
- name: other
num_bytes: 146365394
num_examples: 3272
- name: validated
num_bytes: 327264827
num_examples: 7487
- name: invalidated
num_bytes: 31764232
num_examples: 667
download_size: 486369317
dataset_size: 764388679
- config_name: mt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 73850815
num_examples: 2036
- name: test
num_bytes: 66520195
num_examples: 1617
- name: validation
num_bytes: 56412066
num_examples: 1516
- name: other
num_bytes: 220666971
num_examples: 5714
- name: validated
num_bytes: 218212969
num_examples: 5747
- name: invalidated
num_bytes: 12328068
num_examples: 314
download_size: 425114242
dataset_size: 647991084
- config_name: nl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 321946148
num_examples: 9460
- name: test
num_bytes: 205287443
num_examples: 5708
- name: validation
num_bytes: 186095353
num_examples: 4938
- name: other
num_bytes: 801418
num_examples: 27
- name: validated
num_bytes: 1710636990
num_examples: 52488
- name: invalidated
num_bytes: 115133112
num_examples: 3308
download_size: 1741827548
dataset_size: 2539900464
- config_name: or
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 16067910
num_examples: 388
- name: test
num_bytes: 4270651
num_examples: 98
- name: validation
num_bytes: 5485937
num_examples: 129
- name: other
num_bytes: 177775963
num_examples: 4302
- name: validated
num_bytes: 25824418
num_examples: 615
- name: invalidated
num_bytes: 2701922
num_examples: 62
download_size: 199077358
dataset_size: 232126801
- config_name: pa-IN
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 7572499
num_examples: 211
- name: test
num_bytes: 4375532
num_examples: 116
- name: validation
num_bytes: 1702492
num_examples: 44
- name: other
num_bytes: 56683312
num_examples: 1411
- name: validated
num_bytes: 13650443
num_examples: 371
- name: invalidated
num_bytes: 1690766
num_examples: 43
download_size: 69748265
dataset_size: 85675044
- config_name: pl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 273394509
num_examples: 7468
- name: test
num_bytes: 205047541
num_examples: 5153
- name: validation
num_bytes: 195917307
num_examples: 5153
- name: other
num_bytes: 442144781
num_examples: 12848
- name: validated
num_bytes: 3150860197
num_examples: 90791
- name: invalidated
num_bytes: 180801918
num_examples: 4601
download_size: 3537012341
dataset_size: 4448166253
- config_name: pt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 231451724
num_examples: 6514
- name: test
num_bytes: 180108694
num_examples: 4641
- name: validation
num_bytes: 165966139
num_examples: 4592
- name: other
num_bytes: 283497435
num_examples: 8390
- name: validated
num_bytes: 1480529669
num_examples: 41584
- name: invalidated
num_bytes: 67948392
num_examples: 1740
download_size: 1704252567
dataset_size: 2409502053
- config_name: rm-sursilv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 62396326
num_examples: 1384
- name: test
num_bytes: 51707733
num_examples: 1194
- name: validation
num_bytes: 52114252
num_examples: 1205
- name: other
num_bytes: 93351293
num_examples: 2102
- name: validated
num_bytes: 166218231
num_examples: 3783
- name: invalidated
num_bytes: 30593270
num_examples: 639
download_size: 275950479
dataset_size: 456381105
- config_name: rm-vallader
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 29528457
num_examples: 574
- name: test
num_bytes: 18805466
num_examples: 378
- name: validation
num_bytes: 17012341
num_examples: 357
- name: other
num_bytes: 36890435
num_examples: 727
- name: validated
num_bytes: 65711922
num_examples: 1316
- name: invalidated
num_bytes: 9356204
num_examples: 374
download_size: 108113989
dataset_size: 177304825
- config_name: ro
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 107235430
num_examples: 3399
- name: test
num_bytes: 60106568
num_examples: 1778
- name: validation
num_bytes: 30358457
num_examples: 858
- name: other
num_bytes: 65805210
num_examples: 1945
- name: validated
num_bytes: 197820619
num_examples: 6039
- name: invalidated
num_bytes: 11108104
num_examples: 485
download_size: 261978702
dataset_size: 472434388
- config_name: ru
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 686168722
num_examples: 15481
- name: test
num_bytes: 385349488
num_examples: 8007
- name: validation
num_bytes: 361164462
num_examples: 7963
- name: other
num_bytes: 450644862
num_examples: 10247
- name: validated
num_bytes: 3212213931
num_examples: 74256
- name: invalidated
num_bytes: 145739451
num_examples: 3056
download_size: 3655676916
dataset_size: 5241280916
- config_name: rw
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 21645788973
num_examples: 515197
- name: test
num_bytes: 707959382
num_examples: 15724
- name: validation
num_bytes: 698662384
num_examples: 15032
- name: other
num_bytes: 923146896
num_examples: 22923
- name: validated
num_bytes: 35011249432
num_examples: 832929
- name: invalidated
num_bytes: 7969286423
num_examples: 206790
download_size: 42545189583
dataset_size: 66956093490
- config_name: sah
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 68286985
num_examples: 1442
- name: test
num_bytes: 38534020
num_examples: 757
- name: validation
num_bytes: 17900397
num_examples: 405
- name: other
num_bytes: 62594222
num_examples: 1275
- name: validated
num_bytes: 124800352
num_examples: 2606
- name: invalidated
num_bytes: 3594160
num_examples: 66
download_size: 181245626
dataset_size: 315710136
- config_name: sl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 66122967
num_examples: 2038
- name: test
num_bytes: 26872195
num_examples: 881
- name: validation
num_bytes: 16353097
num_examples: 556
- name: other
num_bytes: 79268518
num_examples: 2502
- name: validated
num_bytes: 148371273
num_examples: 4669
- name: invalidated
num_bytes: 3048301
num_examples: 92
download_size: 222751292
dataset_size: 340036351
- config_name: sv-SE
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 62727263
num_examples: 2331
- name: test
num_bytes: 59127381
num_examples: 2027
- name: validation
num_bytes: 53846355
num_examples: 2019
- name: other
num_bytes: 109970049
num_examples: 3043
- name: validated
num_bytes: 327049001
num_examples: 12552
- name: invalidated
num_bytes: 13462567
num_examples: 462
download_size: 421434184
dataset_size: 626182616
- config_name: ta
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 69052658
num_examples: 2009
- name: test
num_bytes: 67616865
num_examples: 1781
- name: validation
num_bytes: 63248009
num_examples: 1779
- name: other
num_bytes: 246650792
num_examples: 7428
- name: validated
num_bytes: 438961956
num_examples: 12652
- name: invalidated
num_bytes: 23587453
num_examples: 594
download_size: 679766097
dataset_size: 909117733
- config_name: th
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 100435725
num_examples: 2917
- name: test
num_bytes: 82030679
num_examples: 2188
- name: validation
num_bytes: 63237632
num_examples: 1922
- name: other
num_bytes: 95235301
num_examples: 2671
- name: validated
num_bytes: 245734783
num_examples: 7028
- name: invalidated
num_bytes: 18247080
num_examples: 467
download_size: 341305736
dataset_size: 604921200
- config_name: tr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 57879052
num_examples: 1831
- name: test
num_bytes: 60268059
num_examples: 1647
- name: validation
num_bytes: 54914798
num_examples: 1647
- name: other
num_bytes: 10954154
num_examples: 325
- name: validated
num_bytes: 585777527
num_examples: 18685
- name: invalidated
num_bytes: 59288266
num_examples: 1726
download_size: 620848700
dataset_size: 829081856
- config_name: tt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 348132697
num_examples: 11211
- name: test
num_bytes: 135120057
num_examples: 4485
- name: validation
num_bytes: 61690964
num_examples: 2127
- name: other
num_bytes: 62158038
num_examples: 1798
- name: validated
num_bytes: 767791517
num_examples: 25781
- name: invalidated
num_bytes: 10403128
num_examples: 287
download_size: 777153207
dataset_size: 1385296401
- config_name: uk
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 161925063
num_examples: 4035
- name: test
num_bytes: 138422211
num_examples: 3235
- name: validation
num_bytes: 135483169
num_examples: 3236
- name: other
num_bytes: 327979131
num_examples: 8161
- name: validated
num_bytes: 889863965
num_examples: 22337
- name: invalidated
num_bytes: 55745301
num_examples: 1255
download_size: 1218559031
dataset_size: 1709418840
- config_name: vi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 6244454
num_examples: 221
- name: test
num_bytes: 6656365
num_examples: 198
- name: validation
num_bytes: 6531856
num_examples: 200
- name: other
num_bytes: 31315434
num_examples: 870
- name: validated
num_bytes: 19432595
num_examples: 619
- name: invalidated
num_bytes: 2981661
num_examples: 78
download_size: 51929480
dataset_size: 73162365
- config_name: vot
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 146467
num_examples: 3
- name: test
- name: validation
- name: other
num_bytes: 7963322
num_examples: 411
- name: validated
num_bytes: 146467
num_examples: 3
- name: invalidated
num_bytes: 107949
num_examples: 6
download_size: 7792602
dataset_size: 8364205
- config_name: zh-CN
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 793667379
num_examples: 18541
- name: test
num_bytes: 420202544
num_examples: 8760
- name: validation
num_bytes: 396096323
num_examples: 8743
- name: other
num_bytes: 381264783
num_examples: 8948
- name: validated
num_bytes: 1618113625
num_examples: 36405
- name: invalidated
num_bytes: 266234479
num_examples: 5305
download_size: 2184602350
dataset_size: 3875579133
- config_name: zh-HK
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 221459521
num_examples: 7506
- name: test
num_bytes: 217627041
num_examples: 5172
- name: validation
num_bytes: 196071110
num_examples: 5172
- name: other
num_bytes: 1319233252
num_examples: 38830
- name: validated
num_bytes: 1482087591
num_examples: 41835
- name: invalidated
num_bytes: 124170969
num_examples: 2999
download_size: 2774145806
dataset_size: 3560649484
- config_name: zh-TW
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 97323787
num_examples: 3507
- name: test
num_bytes: 85512325
num_examples: 2895
- name: validation
num_bytes: 80402637
num_examples: 2895
- name: other
num_bytes: 623801957
num_examples: 22477
- name: validated
num_bytes: 1568842090
num_examples: 61232
- name: invalidated
num_bytes: 100241443
num_examples: 3584
download_size: 2182836295
dataset_size: 2556124239
config_names:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
---
# Dataset Card for common_voice
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "common_voice" is deprecated and will soon be deleted. Use datasets under <a href="https://huggingface.co/mozilla-foundation">mozilla-foundation</a> organisation instead. For example, you can load <a href="https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0">Common Voice 13</a> dataset via <code>load_dataset("mozilla-foundation/common_voice_13_0", "en")</code></p>
</div>
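For quick reference, a minimal migration sketch (this assumes the `datasets` library is installed and that you have access to the gated `mozilla-foundation` repositories on the Hub):

```python
from datasets import load_dataset

# Deprecated (this repository):
# cv = load_dataset("common_voice", "en", split="train")

# Recommended replacement, as noted above:
cv = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="train")
print(cv)
```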
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://commonvoice.mozilla.org/en/datasets
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7,335 validated hours in 60 languages, but we're always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The dataset covers 60 languages; see the configuration names listed in the metadata above (for example `en`, `de`, `fr`, `nl`, `zh-CN`).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called `path`, and its `sentence`. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{'accent': 'netherlands',
 'age': 'fourties',
 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54',
 'down_votes': 0,
 'gender': 'male',
 'locale': 'nl',
 'path': 'nl/clips/common_voice_nl_23522441.mp3',
 'segment': "''",
 'sentence': 'Ik vind dat een dubieuze procedure.',
 'up_votes': 2,
 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3',
           'array': array([-0.00048828, -0.00018311, -0.00137329, ...,  0.00079346,  0.00091553,  0.00085449], dtype=float32),
           'sampling_rate': 48000}}
```
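As an illustrative sketch, the example above can be reproduced by loading the Dutch (`nl`) configuration and indexing a single row (shown with the deprecated `common_voice` name; substitute the `mozilla-foundation` dataset as recommended above):

```python
from datasets import load_dataset

# Any configuration listed in the metadata above can be used in place of "nl".
cv_nl = load_dataset("common_voice", "nl", split="train")

sample = cv_nl[0]
print(sample["sentence"])                # the prompted sentence
print(sample["path"])                    # relative path to the MP3 clip
print(sample["audio"]["sampling_rate"])  # 48000
```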
### Data Fields
- `client_id`: An id for the client (voice) that made the recording
- `path`: The path to the audio file
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the sketch after this list).
- `sentence`: The sentence the user was prompted to speak
- `up_votes`: How many upvotes the audio file has received from reviewers
- `down_votes`: How many downvotes the audio file has received from reviewers
- `age`: The age of the speaker
- `gender`: The gender of the speaker
- `accent`: The accent of the speaker
- `locale`: The locale of the speaker
- `segment`: Usually an empty field
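To illustrate the access pattern recommended for the `audio` field, a short sketch (assuming `cv_nl` has been loaded as in the example above):

```python
# Efficient: query the row first, so only this clip is decoded and resampled.
audio = cv_nl[0]["audio"]
waveform = audio["array"]               # decoded float32 samples
sampling_rate = audio["sampling_rate"]  # 48000, i.e. cv_nl.features["audio"].sampling_rate

# Avoid: cv_nl["audio"][0] would decode every clip in the split before indexing.
```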
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train portions are all data that has been reviewed, deemed of high quality and split into dev, test and train.
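As a sketch, each portion can be requested explicitly through the `split` argument (split names as listed in the metadata above; the same pattern applies to the replacement `mozilla-foundation` datasets):

```python
from datasets import load_dataset

validated   = load_dataset("common_voice", "en", split="validated")
invalidated = load_dataset("common_voice", "en", split="invalidated")
other       = load_dataset("common_voice", "en", split="other")
train       = load_dataset("common_voice", "en", split="train")
```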
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
### Contributions
Thanks to [@BirgerMoell](https://github.com/BirgerMoell) for adding this dataset. |
asahi417/seamless-align-enA-frA.speaker-embedding.w2vbert-600m | asahi417 | "2024-06-24T07:09:01Z" | 13,831 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-16T15:17:28Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9566368832
num_examples: 2343
download_size: 9594006755
dataset_size: 9566368832
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9055495714
num_examples: 2334
download_size: 9082072956
dataset_size: 9055495714
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8344151850
num_examples: 2309
download_size: 8368301727
dataset_size: 8344151850
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8292800434
num_examples: 2322
download_size: 8317633830
dataset_size: 8292800434
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8224967519
num_examples: 2291
download_size: 8249842240
dataset_size: 8224967519
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8339581413
num_examples: 2321
download_size: 8364550280
dataset_size: 8339581413
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8239856479
num_examples: 2314
download_size: 8264731163
dataset_size: 8239856479
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8325030279
num_examples: 2318
download_size: 8349920886
dataset_size: 8325030279
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8294529988
num_examples: 2314
download_size: 8319483916
dataset_size: 8294529988
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8451145911
num_examples: 2314
download_size: 8476011221
dataset_size: 8451145911
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8317793444
num_examples: 2315
download_size: 8342027520
dataset_size: 8317793444
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8295710465
num_examples: 2310
download_size: 8320545488
dataset_size: 8295710465
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8923552101
num_examples: 2315
download_size: 8949074992
dataset_size: 8923552101
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8049675461
num_examples: 2283
download_size: 8074629289
dataset_size: 8049675461
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8131405681
num_examples: 2293
download_size: 8156355987
dataset_size: 8131405681
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8527115785
num_examples: 2326
download_size: 8552606437
dataset_size: 8527115785
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8231050128
num_examples: 2319
download_size: 8255918719
dataset_size: 8231050128
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8344222519
num_examples: 2321
download_size: 8369107301
dataset_size: 8344222519
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8035352964
num_examples: 2269
download_size: 8060305813
dataset_size: 8035352964
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8301454473
num_examples: 2309
download_size: 8326124823
dataset_size: 8301454473
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8135502766
num_examples: 2308
download_size: 8160552827
dataset_size: 8135502766
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8263147643
num_examples: 2302
download_size: 8288059886
dataset_size: 8263147643
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8069125453
num_examples: 2278
download_size: 8094153268
dataset_size: 8069125453
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9237348380
num_examples: 2349
download_size: 9263840549
dataset_size: 9237348380
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8168252869
num_examples: 2299
download_size: 8193165412
dataset_size: 8168252869
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8181444116
num_examples: 2268
download_size: 8205963945
dataset_size: 8181444116
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8072115742
num_examples: 2295
download_size: 8097124646
dataset_size: 8072115742
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8212669175
num_examples: 2311
download_size: 8237530272
dataset_size: 8212669175
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8162270667
num_examples: 2295
download_size: 8186191828
dataset_size: 8162270667
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8169686828
num_examples: 2286
download_size: 8194557789
dataset_size: 8169686828
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8075101768
num_examples: 2310
download_size: 8099708233
dataset_size: 8075101768
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8180963469
num_examples: 2306
download_size: 8205839882
dataset_size: 8180963469
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8019733599
num_examples: 2288
download_size: 8044566821
dataset_size: 8019733599
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8174740965
num_examples: 2322
download_size: 8199696891
dataset_size: 8174740965
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9160993101
num_examples: 2338
download_size: 9187478943
dataset_size: 9160993101
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8074855338
num_examples: 2305
download_size: 8099903383
dataset_size: 8074855338
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8206631567
num_examples: 2332
download_size: 8231549695
dataset_size: 8206631567
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8165670728
num_examples: 2309
download_size: 8190476539
dataset_size: 8165670728
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8184054104
num_examples: 2297
download_size: 8209063228
dataset_size: 8184054104
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8030575075
num_examples: 2301
download_size: 8054658929
dataset_size: 8030575075
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8104485914
num_examples: 2303
download_size: 8129471884
dataset_size: 8104485914
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8114533103
num_examples: 2280
download_size: 8138728487
dataset_size: 8114533103
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8099519394
num_examples: 2286
download_size: 8124253322
dataset_size: 8099519394
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7995526960
num_examples: 2286
download_size: 8019243427
dataset_size: 7995526960
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8026030636
num_examples: 2282
download_size: 8050810923
dataset_size: 8026030636
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9112887751
num_examples: 2337
download_size: 9139444886
dataset_size: 9112887751
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7952190959
num_examples: 2297
download_size: 7976043228
dataset_size: 7952190959
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8228326188
num_examples: 2300
download_size: 8253188254
dataset_size: 8228326188
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7986864298
num_examples: 2293
download_size: 8010568715
dataset_size: 7986864298
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8015380352
num_examples: 2300
download_size: 8040374582
dataset_size: 8015380352
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7931507394
num_examples: 2259
download_size: 7955143204
dataset_size: 7931507394
- config_name: subset_145
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7974400291
num_examples: 2243
download_size: 7997939303
dataset_size: 7974400291
- config_name: subset_146
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8022663180
num_examples: 2265
download_size: 8046776491
dataset_size: 8022663180
- config_name: subset_147
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8153227365
num_examples: 2311
download_size: 8178140467
dataset_size: 8153227365
- config_name: subset_148
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8064998279
num_examples: 2297
download_size: 8089201399
dataset_size: 8064998279
- config_name: subset_149
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7971935565
num_examples: 2285
download_size: 7995621948
dataset_size: 7971935565
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9024870828
num_examples: 2360
download_size: 9051498336
dataset_size: 9024870828
- config_name: subset_150
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8011231674
num_examples: 2306
download_size: 8035747763
dataset_size: 8011231674
- config_name: subset_151
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8005667589
num_examples: 2302
download_size: 8030699266
dataset_size: 8005667589
- config_name: subset_152
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7981244068
num_examples: 2304
download_size: 8004928575
dataset_size: 7981244068
- config_name: subset_153
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8085035996
num_examples: 2309
download_size: 8110022342
dataset_size: 8085035996
- config_name: subset_154
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7921123591
num_examples: 2277
download_size: 7944925000
dataset_size: 7921123591
- config_name: subset_155
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8046964534
num_examples: 2273
download_size: 8071597579
dataset_size: 8046964534
- config_name: subset_156
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7833911999
num_examples: 2271
download_size: 7857628948
dataset_size: 7833911999
- config_name: subset_157
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8007571764
num_examples: 2274
download_size: 8032448785
dataset_size: 8007571764
- config_name: subset_158
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7931573773
num_examples: 2269
download_size: 7955329482
dataset_size: 7931573773
- config_name: subset_159
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7977333052
num_examples: 2259
download_size: 8000885022
dataset_size: 7977333052
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9060109113
num_examples: 2345
download_size: 9086664290
dataset_size: 9060109113
- config_name: subset_160
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7776143386
num_examples: 2276
download_size: 7799390788
dataset_size: 7776143386
- config_name: subset_161
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7854263513
num_examples: 2260
download_size: 7877994222
dataset_size: 7854263513
- config_name: subset_162
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7933792904
num_examples: 2281
download_size: 7957484924
dataset_size: 7933792904
- config_name: subset_163
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7886418547
num_examples: 2295
download_size: 7910319402
dataset_size: 7886418547
- config_name: subset_164
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7693955065
num_examples: 2229
download_size: 7717661424
dataset_size: 7693955065
- config_name: subset_165
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7860425835
num_examples: 2261
download_size: 7884085492
dataset_size: 7860425835
- config_name: subset_166
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7892858435
num_examples: 2281
download_size: 7916552949
dataset_size: 7892858435
- config_name: subset_167
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7932964434
num_examples: 2278
download_size: 7955812455
dataset_size: 7932964434
- config_name: subset_168
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7889922381
num_examples: 2274
download_size: 7913639656
dataset_size: 7889922381
- config_name: subset_169
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7920978496
num_examples: 2258
download_size: 7944061095
dataset_size: 7920978496
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9036569527
num_examples: 2327
download_size: 9063047106
dataset_size: 9036569527
- config_name: subset_170
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7743968235
num_examples: 2245
download_size: 7766859780
dataset_size: 7743968235
- config_name: subset_171
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7886519086
num_examples: 2271
download_size: 7910304528
dataset_size: 7886519086
- config_name: subset_172
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7841868335
num_examples: 2250
download_size: 7865696763
dataset_size: 7841868335
- config_name: subset_173
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7999828366
num_examples: 2292
download_size: 8023554856
dataset_size: 7999828366
- config_name: subset_174
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7804134165
num_examples: 2277
download_size: 7827908092
dataset_size: 7804134165
- config_name: subset_175
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7846351791
num_examples: 2276
download_size: 7870165792
dataset_size: 7846351791
- config_name: subset_176
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7676078008
num_examples: 2266
download_size: 7699987238
dataset_size: 7676078008
- config_name: subset_177
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7984971312
num_examples: 2258
download_size: 8008575547
dataset_size: 7984971312
- config_name: subset_178
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7673452087
num_examples: 2243
download_size: 7697292379
dataset_size: 7673452087
- config_name: subset_179
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7595341010
num_examples: 2219
download_size: 7619112057
dataset_size: 7595341010
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9133950375
num_examples: 2348
download_size: 9160421871
dataset_size: 9133950375
- config_name: subset_180
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7748805911
num_examples: 2242
download_size: 7772520954
dataset_size: 7748805911
- config_name: subset_181
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7663841153
num_examples: 2236
download_size: 7687616248
dataset_size: 7663841153
- config_name: subset_182
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7817760293
num_examples: 2249
download_size: 7841543183
dataset_size: 7817760293
- config_name: subset_183
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7707257231
num_examples: 2251
download_size: 7731096605
dataset_size: 7707257231
- config_name: subset_184
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7641873882
num_examples: 2243
download_size: 7665710381
dataset_size: 7641873882
- config_name: subset_185
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7660253746
num_examples: 2238
download_size: 7684018273
dataset_size: 7660253746
- config_name: subset_186
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7680743835
num_examples: 2267
download_size: 7704644577
dataset_size: 7680743835
- config_name: subset_187
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7589971643
num_examples: 2239
download_size: 7613762688
dataset_size: 7589971643
- config_name: subset_188
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7572755532
num_examples: 2236
download_size: 7596602486
dataset_size: 7572755532
- config_name: subset_189
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7666799246
num_examples: 2261
download_size: 7690814108
dataset_size: 7666799246
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9057776470
num_examples: 2348
download_size: 9084303694
dataset_size: 9057776470
- config_name: subset_190
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7522820162
num_examples: 2218
download_size: 7546583799
dataset_size: 7522820162
- config_name: subset_191
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7666942711
num_examples: 2260
download_size: 7690706889
dataset_size: 7666942711
- config_name: subset_192
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7679879152
num_examples: 2221
download_size: 7703369584
dataset_size: 7679879152
- config_name: subset_193
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7698809905
num_examples: 2235
download_size: 7722394085
dataset_size: 7698809905
- config_name: subset_194
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7712874495
num_examples: 2254
download_size: 7736726509
dataset_size: 7712874495
- config_name: subset_195
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7555535314
num_examples: 2233
download_size: 7579318324
dataset_size: 7555535314
- config_name: subset_196
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7360775527
num_examples: 2164
download_size: 7383197315
dataset_size: 7360775527
- config_name: subset_197
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7249674195
num_examples: 2120
download_size: 7272015748
dataset_size: 7249674195
- config_name: subset_198
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7512676564
num_examples: 2165
download_size: 7536263945
dataset_size: 7512676564
- config_name: subset_199
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7254405688
num_examples: 2121
download_size: 7276717339
dataset_size: 7254405688
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9444026416
num_examples: 2363
download_size: 9470446946
dataset_size: 9444026416
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8841697775
num_examples: 2342
download_size: 8867352749
dataset_size: 8841697775
- config_name: subset_200
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7123192417
num_examples: 2109
download_size: 7145623137
dataset_size: 7123192417
- config_name: subset_201
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7735740929
num_examples: 2213
download_size: 7759364569
dataset_size: 7735740929
- config_name: subset_202
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7652346219
num_examples: 2204
download_size: 7675555564
dataset_size: 7652346219
- config_name: subset_203
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7674136786
num_examples: 2243
download_size: 7697931101
dataset_size: 7674136786
- config_name: subset_204
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7566382244
num_examples: 2213
download_size: 7590090536
dataset_size: 7566382244
- config_name: subset_205
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7632138122
num_examples: 2214
download_size: 7655343096
dataset_size: 7632138122
- config_name: subset_206
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7584164394
num_examples: 2218
download_size: 7607854472
dataset_size: 7584164394
- config_name: subset_207
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7429457322
num_examples: 2196
download_size: 7451953931
dataset_size: 7429457322
- config_name: subset_208
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7660223935
num_examples: 2224
download_size: 7683973103
dataset_size: 7660223935
- config_name: subset_209
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7664387931
num_examples: 2204
download_size: 7688045612
dataset_size: 7664387931
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8972323047
num_examples: 2333
download_size: 8997904168
dataset_size: 8972323047
- config_name: subset_210
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7482052557
num_examples: 2202
download_size: 7504612876
dataset_size: 7482052557
- config_name: subset_211
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7588451811
num_examples: 2204
download_size: 7612040944
dataset_size: 7588451811
- config_name: subset_212
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7672146103
num_examples: 2216
download_size: 7695846178
dataset_size: 7672146103
- config_name: subset_213
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7520841003
num_examples: 2232
download_size: 7544754086
dataset_size: 7520841003
- config_name: subset_214
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7466783296
num_examples: 2192
download_size: 7489150324
dataset_size: 7466783296
- config_name: subset_215
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7280682545
num_examples: 2170
download_size: 7303238538
dataset_size: 7280682545
- config_name: subset_216
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7438472789
num_examples: 2192
download_size: 7461022927
dataset_size: 7438472789
- config_name: subset_217
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7492129869
num_examples: 2196
download_size: 7514545250
dataset_size: 7492129869
- config_name: subset_218
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7290939153
num_examples: 2201
download_size: 7313537406
dataset_size: 7290939153
- config_name: subset_219
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7400911713
num_examples: 2180
download_size: 7423202668
dataset_size: 7400911713
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8953331556
num_examples: 2330
download_size: 8978852309
dataset_size: 8953331556
- config_name: subset_220
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7568429214
num_examples: 2226
download_size: 7592192645
dataset_size: 7568429214
- config_name: subset_221
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7510359918
num_examples: 2214
download_size: 7534046947
dataset_size: 7510359918
- config_name: subset_222
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7677144344
num_examples: 2220
download_size: 7700863980
dataset_size: 7677144344
- config_name: subset_223
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7461311158
num_examples: 2208
download_size: 7483798865
dataset_size: 7461311158
- config_name: subset_224
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7203424018
num_examples: 2193
download_size: 7226189129
dataset_size: 7203424018
- config_name: subset_225
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7541811645
num_examples: 2221
download_size: 7565575392
dataset_size: 7541811645
- config_name: subset_226
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7362673036
num_examples: 2204
download_size: 7385173913
dataset_size: 7362673036
- config_name: subset_227
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7295065396
num_examples: 2171
download_size: 7317597460
dataset_size: 7295065396
- config_name: subset_228
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7383821512
num_examples: 2213
download_size: 7405990029
dataset_size: 7383821512
- config_name: subset_229
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7361876281
num_examples: 2188
download_size: 7384222362
dataset_size: 7361876281
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8858248446
num_examples: 2330
download_size: 8883674752
dataset_size: 8858248446
- config_name: subset_230
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7325296424
num_examples: 2192
download_size: 7347828960
dataset_size: 7325296424
- config_name: subset_231
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7307551119
num_examples: 2186
download_size: 7330009541
dataset_size: 7307551119
- config_name: subset_232
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7237935599
num_examples: 2166
download_size: 7260436028
dataset_size: 7237935599
- config_name: subset_233
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7282142892
num_examples: 2179
download_size: 7304574078
dataset_size: 7282142892
- config_name: subset_234
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7485996441
num_examples: 2198
download_size: 7508371121
dataset_size: 7485996441
- config_name: subset_235
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7304290498
num_examples: 2172
download_size: 7326682460
dataset_size: 7304290498
- config_name: subset_236
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7291414246
num_examples: 2195
download_size: 7313997880
dataset_size: 7291414246
- config_name: subset_237
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7113317612
num_examples: 2184
download_size: 7135189023
dataset_size: 7113317612
- config_name: subset_238
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7245693699
num_examples: 2177
download_size: 7268236571
dataset_size: 7245693699
- config_name: subset_239
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7119418798
num_examples: 2139
download_size: 7141920929
dataset_size: 7119418798
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8990629268
num_examples: 2330
download_size: 9016126467
dataset_size: 8990629268
- config_name: subset_240
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7306415096
num_examples: 2182
download_size: 7328994692
dataset_size: 7306415096
- config_name: subset_241
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7320580371
num_examples: 2179
download_size: 7343025792
dataset_size: 7320580371
- config_name: subset_242
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7243026562
num_examples: 2151
download_size: 7265515696
dataset_size: 7243026562
- config_name: subset_243
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7355029760
num_examples: 2194
download_size: 7377496942
dataset_size: 7355029760
- config_name: subset_244
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7283342262
num_examples: 2156
download_size: 7305767447
dataset_size: 7283342262
- config_name: subset_245
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7136606526
num_examples: 2146
download_size: 7159172263
dataset_size: 7136606526
- config_name: subset_246
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7112273128
num_examples: 2147
download_size: 7134694842
dataset_size: 7112273128
- config_name: subset_247
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7290903659
num_examples: 2168
download_size: 7313454904
dataset_size: 7290903659
- config_name: subset_248
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7269479842
num_examples: 2152
download_size: 7291981118
dataset_size: 7269479842
- config_name: subset_249
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7111465855
num_examples: 2152
download_size: 7134109246
dataset_size: 7111465855
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8805930925
num_examples: 2311
download_size: 8831278462
dataset_size: 8805930925
- config_name: subset_250
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7236490764
num_examples: 2146
download_size: 7258935648
dataset_size: 7236490764
- config_name: subset_251
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7248958447
num_examples: 2191
download_size: 7271560789
dataset_size: 7248958447
- config_name: subset_252
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7052635368
num_examples: 2157
download_size: 7075198089
dataset_size: 7052635368
- config_name: subset_253
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7090768865
num_examples: 2150
download_size: 7113347720
dataset_size: 7090768865
- config_name: subset_254
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7002798362
num_examples: 2139
download_size: 7025383566
dataset_size: 7002798362
- config_name: subset_255
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7100794176
num_examples: 2160
download_size: 7123316335
dataset_size: 7100794176
- config_name: subset_256
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7056524050
num_examples: 2130
download_size: 7078947156
dataset_size: 7056524050
- config_name: subset_257
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6938764194
num_examples: 2134
download_size: 6960119167
dataset_size: 6938764194
- config_name: subset_258
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7020026780
num_examples: 2141
download_size: 7042693120
dataset_size: 7020026780
- config_name: subset_259
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7063187494
num_examples: 2141
download_size: 7085681649
dataset_size: 7063187494
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9127495910
num_examples: 2335
download_size: 9153901606
dataset_size: 9127495910
- config_name: subset_260
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7022145652
num_examples: 2131
download_size: 7044637069
dataset_size: 7022145652
- config_name: subset_261
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7042283781
num_examples: 2118
download_size: 7064765529
dataset_size: 7042283781
- config_name: subset_262
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6891511741
num_examples: 2135
download_size: 6912835277
dataset_size: 6891511741
- config_name: subset_263
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7035316731
num_examples: 2154
download_size: 7057899921
dataset_size: 7035316731
- config_name: subset_264
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7043720025
num_examples: 2133
download_size: 7066308396
dataset_size: 7043720025
- config_name: subset_265
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7006438023
num_examples: 2124
download_size: 7028939127
dataset_size: 7006438023
- config_name: subset_266
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7163112843
num_examples: 2143
download_size: 7185594807
dataset_size: 7163112843
- config_name: subset_267
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7059693660
num_examples: 2134
download_size: 7082079900
dataset_size: 7059693660
- config_name: subset_268
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6897578594
num_examples: 2121
download_size: 6919048917
dataset_size: 6897578594
- config_name: subset_269
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6992471441
num_examples: 2101
download_size: 7012925117
dataset_size: 6992471441
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8903658942
num_examples: 2316
download_size: 8928990634
dataset_size: 8903658942
- config_name: subset_270
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7106491318
num_examples: 2137
download_size: 7129005615
dataset_size: 7106491318
- config_name: subset_271
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7074404665
num_examples: 2136
download_size: 7096954548
dataset_size: 7074404665
- config_name: subset_272
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6962492033
num_examples: 2098
download_size: 6983746115
dataset_size: 6962492033
- config_name: subset_273
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6737833016
num_examples: 2104
download_size: 6759116103
dataset_size: 6737833016
- config_name: subset_274
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6981784789
num_examples: 2125
download_size: 7003028129
dataset_size: 6981784789
- config_name: subset_275
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6604852039
num_examples: 2034
download_size: 6625221522
dataset_size: 6604852039
- config_name: subset_276
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6683509527
num_examples: 2072
download_size: 6704818912
dataset_size: 6683509527
- config_name: subset_277
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6638868797
num_examples: 2023
download_size: 6659841207
dataset_size: 6638868797
- config_name: subset_278
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6726843976
num_examples: 2041
download_size: 6748034369
dataset_size: 6726843976
- config_name: subset_279
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6458972207
num_examples: 2012
download_size: 6478848969
dataset_size: 6458972207
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8953958859
num_examples: 2330
download_size: 8979400382
dataset_size: 8953958859
- config_name: subset_280
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6299794831
num_examples: 1974
download_size: 6319771019
dataset_size: 6299794831
- config_name: subset_281
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6623096556
num_examples: 2079
download_size: 6644409578
dataset_size: 6623096556
- config_name: subset_282
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6544280484
num_examples: 2057
download_size: 6565649565
dataset_size: 6544280484
- config_name: subset_283
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6848783528
num_examples: 2108
download_size: 6870163647
dataset_size: 6848783528
- config_name: subset_284
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6759412850
num_examples: 2072
download_size: 6780688043
dataset_size: 6759412850
- config_name: subset_285
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6866918251
num_examples: 2114
download_size: 6888222412
dataset_size: 6866918251
- config_name: subset_286
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6951260025
num_examples: 2122
download_size: 6972099644
dataset_size: 6951260025
- config_name: subset_287
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6882960441
num_examples: 2100
download_size: 6904246514
dataset_size: 6882960441
- config_name: subset_288
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6734579833
num_examples: 2081
download_size: 6755890079
dataset_size: 6734579833
- config_name: subset_289
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6764063840
num_examples: 2092
download_size: 6785014612
dataset_size: 6764063840
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8394485213
num_examples: 2180
download_size: 8418266807
dataset_size: 8394485213
- config_name: subset_290
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6729468298
num_examples: 2095
download_size: 6750818599
dataset_size: 6729468298
- config_name: subset_291
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6715707352
num_examples: 2078
download_size: 6737040292
dataset_size: 6715707352
- config_name: subset_292
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6718355023
num_examples: 2067
download_size: 6739580073
dataset_size: 6718355023
- config_name: subset_293
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6898099142
num_examples: 2119
download_size: 6919433339
dataset_size: 6898099142
- config_name: subset_294
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6708487108
num_examples: 2077
download_size: 6729900611
dataset_size: 6708487108
- config_name: subset_295
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6662048595
num_examples: 2053
download_size: 6683224716
dataset_size: 6662048595
- config_name: subset_296
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6632825418
num_examples: 2073
download_size: 6654209017
dataset_size: 6632825418
- config_name: subset_297
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6548846261
num_examples: 2037
download_size: 6570177074
dataset_size: 6548846261
- config_name: subset_298
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6748576012
num_examples: 2066
download_size: 6768914330
dataset_size: 6748576012
- config_name: subset_299
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6713393166
num_examples: 2057
download_size: 6734706256
dataset_size: 6713393166
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9356482401
num_examples: 2353
download_size: 9382926969
dataset_size: 9356482401
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7797920329
num_examples: 2048
download_size: 7820369022
dataset_size: 7797920329
- config_name: subset_300
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6531629597
num_examples: 2078
download_size: 6553125445
dataset_size: 6531629597
- config_name: subset_301
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6549099955
num_examples: 2031
download_size: 6570366069
dataset_size: 6549099955
- config_name: subset_302
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6703520838
num_examples: 2063
download_size: 6724755548
dataset_size: 6703520838
- config_name: subset_303
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6436790846
num_examples: 2058
download_size: 6456989280
dataset_size: 6436790846
- config_name: subset_304
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6588380848
num_examples: 2066
download_size: 6609715368
dataset_size: 6588380848
- config_name: subset_305
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6574448278
num_examples: 2057
download_size: 6595840841
dataset_size: 6574448278
- config_name: subset_306
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6461397151
num_examples: 2034
download_size: 6480540034
dataset_size: 6461397151
- config_name: subset_307
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6463794072
num_examples: 2015
download_size: 6483749566
dataset_size: 6463794072
- config_name: subset_308
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6463221975
num_examples: 2038
download_size: 6483279476
dataset_size: 6463221975
- config_name: subset_309
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6506242415
num_examples: 2028
download_size: 6527482846
dataset_size: 6506242415
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8924480662
num_examples: 2340
download_size: 8949916765
dataset_size: 8924480662
- config_name: subset_310
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6553213645
num_examples: 2048
download_size: 6574510919
dataset_size: 6553213645
- config_name: subset_311
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6533852529
num_examples: 2036
download_size: 6555128690
dataset_size: 6533852529
- config_name: subset_312
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6554722676
num_examples: 2051
download_size: 6575603235
dataset_size: 6554722676
- config_name: subset_313
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6552279682
num_examples: 2058
download_size: 6573616559
dataset_size: 6552279682
- config_name: subset_314
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6337892658
num_examples: 2032
download_size: 6358012965
dataset_size: 6337892658
- config_name: subset_315
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6581733741
num_examples: 2054
download_size: 6603132499
dataset_size: 6581733741
- config_name: subset_316
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6435664911
num_examples: 2027
download_size: 6455772395
dataset_size: 6435664911
- config_name: subset_317
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6399136973
num_examples: 2014
download_size: 6419119371
dataset_size: 6399136973
- config_name: subset_318
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6252209970
num_examples: 2021
download_size: 6272450211
dataset_size: 6252209970
- config_name: subset_319
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6380364448
num_examples: 2031
download_size: 6400507882
dataset_size: 6380364448
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8881670166
num_examples: 2310
download_size: 8907038673
dataset_size: 8881670166
- config_name: subset_320
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6317997120
num_examples: 2004
download_size: 6337947635
dataset_size: 6317997120
- config_name: subset_321
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6331490807
num_examples: 2011
download_size: 6351621606
dataset_size: 6331490807
- config_name: subset_322
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6351405822
num_examples: 2027
download_size: 6371656380
dataset_size: 6351405822
- config_name: subset_323
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6325060772
num_examples: 2046
download_size: 6345265024
dataset_size: 6325060772
- config_name: subset_324
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6184725540
num_examples: 1990
download_size: 6204934648
dataset_size: 6184725540
- config_name: subset_325
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6319432479
num_examples: 2008
download_size: 6339505832
dataset_size: 6319432479
- config_name: subset_326
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6120208061
num_examples: 1992
download_size: 6140405716
dataset_size: 6120208061
- config_name: subset_327
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6354123343
num_examples: 2017
download_size: 6374232212
dataset_size: 6354123343
- config_name: subset_328
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6421284354
num_examples: 2054
download_size: 6441458953
dataset_size: 6421284354
- config_name: subset_329
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5246902051
num_examples: 1667
download_size: 5263845837
dataset_size: 5246902051
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8759622346
num_examples: 2322
download_size: 8785014160
dataset_size: 8759622346
- config_name: subset_330
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6231295792
num_examples: 1970
download_size: 6251385558
dataset_size: 6231295792
- config_name: subset_331
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6181322311
num_examples: 1987
download_size: 6201458060
dataset_size: 6181322311
- config_name: subset_332
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6358642319
num_examples: 1977
download_size: 6378626072
dataset_size: 6358642319
- config_name: subset_333
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6297775069
num_examples: 1993
download_size: 6317843539
dataset_size: 6297775069
- config_name: subset_334
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6308794262
num_examples: 1973
download_size: 6328713694
dataset_size: 6308794262
- config_name: subset_335
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5986136662
num_examples: 1975
download_size: 6005142941
dataset_size: 5986136662
- config_name: subset_336
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6214945586
num_examples: 1977
download_size: 6235030588
dataset_size: 6214945586
- config_name: subset_337
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6106307930
num_examples: 1978
download_size: 6126421723
dataset_size: 6106307930
- config_name: subset_338
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6151542174
num_examples: 1965
download_size: 6171512230
dataset_size: 6151542174
- config_name: subset_339
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6161322339
num_examples: 1985
download_size: 6181525157
dataset_size: 6161322339
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8887898838
num_examples: 2320
download_size: 8913230521
dataset_size: 8887898838
- config_name: subset_340
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6358575291
num_examples: 2009
download_size: 6378715099
dataset_size: 6358575291
- config_name: subset_341
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6367940706
num_examples: 2022
download_size: 6387990388
dataset_size: 6367940706
- config_name: subset_342
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6126652681
num_examples: 1985
download_size: 6146866251
dataset_size: 6126652681
- config_name: subset_343
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6151449604
num_examples: 1986
download_size: 6170956669
dataset_size: 6151449604
- config_name: subset_344
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5977931189
num_examples: 1958
download_size: 5996223420
dataset_size: 5977931189
- config_name: subset_345
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6120336894
num_examples: 1962
download_size: 6140390854
dataset_size: 6120336894
- config_name: subset_346
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6065171013
num_examples: 1958
download_size: 6085222896
dataset_size: 6065171013
- config_name: subset_347
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6122378839
num_examples: 1964
download_size: 6142518981
dataset_size: 6122378839
- config_name: subset_348
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5991266077
num_examples: 1928
download_size: 6010056276
dataset_size: 5991266077
- config_name: subset_349
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6139742590
num_examples: 1968
download_size: 6159869844
dataset_size: 6139742590
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8802997723
num_examples: 2340
download_size: 8828641931
dataset_size: 8802997723
- config_name: subset_350
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5891647151
num_examples: 1913
download_size: 5910473078
dataset_size: 5891647151
- config_name: subset_351
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5765979984
num_examples: 1887
download_size: 5784800201
dataset_size: 5765979984
- config_name: subset_352
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6077818820
num_examples: 1950
download_size: 6097846139
dataset_size: 6077818820
- config_name: subset_353
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5832582838
num_examples: 1917
download_size: 5851415389
dataset_size: 5832582838
- config_name: subset_354
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5980685506
num_examples: 1947
download_size: 5998953301
dataset_size: 5980685506
- config_name: subset_355
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5955178205
num_examples: 1932
download_size: 5973908583
dataset_size: 5955178205
- config_name: subset_356
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5995340028
num_examples: 1960
download_size: 6014308195
dataset_size: 5995340028
- config_name: subset_357
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6135958683
num_examples: 1972
download_size: 6156035741
dataset_size: 6135958683
- config_name: subset_358
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5978883398
num_examples: 1920
download_size: 5997121564
dataset_size: 5978883398
- config_name: subset_359
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5895716296
num_examples: 1937
download_size: 5914476963
dataset_size: 5895716296
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8755944989
num_examples: 2327
download_size: 8781534003
dataset_size: 8755944989
- config_name: subset_360
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5830451187
num_examples: 1921
download_size: 5849315641
dataset_size: 5830451187
- config_name: subset_361
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6017628275
num_examples: 1968
download_size: 6037888450
dataset_size: 6017628275
- config_name: subset_362
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5903356320
num_examples: 1958
download_size: 5922282687
dataset_size: 5903356320
- config_name: subset_363
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5970640937
num_examples: 1952
download_size: 5989544653
dataset_size: 5970640937
- config_name: subset_364
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5808712178
num_examples: 1911
download_size: 5827549991
dataset_size: 5808712178
- config_name: subset_365
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5906597080
num_examples: 1980
download_size: 5925657093
dataset_size: 5906597080
- config_name: subset_366
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5998479435
num_examples: 1946
download_size: 6017320362
dataset_size: 5998479435
- config_name: subset_367
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5756579316
num_examples: 1945
download_size: 5775550570
dataset_size: 5756579316
- config_name: subset_368
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5798128158
num_examples: 1888
download_size: 5816868464
dataset_size: 5798128158
- config_name: subset_369
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2452867597
num_examples: 873
download_size: 2461139884
dataset_size: 2452867597
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8663476510
num_examples: 2312
download_size: 8688796165
dataset_size: 8663476510
- config_name: subset_370
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5709979559
num_examples: 1905
download_size: 5729028191
dataset_size: 5709979559
- config_name: subset_371
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5884377644
num_examples: 1911
download_size: 5903250383
dataset_size: 5884377644
- config_name: subset_372
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5835145060
num_examples: 1913
download_size: 5853988277
dataset_size: 5835145060
- config_name: subset_373
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5992779852
num_examples: 1939
download_size: 6011612647
dataset_size: 5992779852
- config_name: subset_374
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5840043249
num_examples: 1948
download_size: 5858371824
dataset_size: 5840043249
- config_name: subset_375
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5889834852
num_examples: 1912
download_size: 5908585289
dataset_size: 5889834852
- config_name: subset_376
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5670055347
num_examples: 1874
download_size: 5688812377
dataset_size: 5670055347
- config_name: subset_377
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6031417104
num_examples: 1976
download_size: 6051654907
dataset_size: 6031417104
- config_name: subset_378
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5879008286
num_examples: 1931
download_size: 5897890362
dataset_size: 5879008286
- config_name: subset_379
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5664924944
num_examples: 1909
download_size: 5683077833
dataset_size: 5664924944
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8841557472
num_examples: 2331
download_size: 8867078339
dataset_size: 8841557472
- config_name: subset_380
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5733481490
num_examples: 1914
download_size: 5752427689
dataset_size: 5733481490
- config_name: subset_381
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5783073429
num_examples: 1894
download_size: 5801898722
dataset_size: 5783073429
- config_name: subset_382
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5816608627
num_examples: 1909
download_size: 5835484018
dataset_size: 5816608627
- config_name: subset_383
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5758182955
num_examples: 1901
download_size: 5777040234
dataset_size: 5758182955
- config_name: subset_384
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5632821440
num_examples: 1902
download_size: 5651843511
dataset_size: 5632821440
- config_name: subset_385
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5648127204
num_examples: 1908
download_size: 5667070855
dataset_size: 5648127204
- config_name: subset_386
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5661361429
num_examples: 1901
download_size: 5680200298
dataset_size: 5661361429
- config_name: subset_387
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5623621855
num_examples: 1912
download_size: 5642608788
dataset_size: 5623621855
- config_name: subset_388
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5622099227
num_examples: 1893
download_size: 5641026689
dataset_size: 5622099227
- config_name: subset_389
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5791999588
num_examples: 1899
download_size: 5810900103
dataset_size: 5791999588
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8748306071
num_examples: 2319
download_size: 8773105177
dataset_size: 8748306071
- config_name: subset_390
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5740544823
num_examples: 1902
download_size: 5759371304
dataset_size: 5740544823
- config_name: subset_391
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5652458312
num_examples: 1876
download_size: 5671279631
dataset_size: 5652458312
- config_name: subset_392
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5599404556
num_examples: 1879
download_size: 5618290219
dataset_size: 5599404556
- config_name: subset_393
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5686329613
num_examples: 1891
download_size: 5705177968
dataset_size: 5686329613
- config_name: subset_394
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5679522067
num_examples: 1922
download_size: 5698544840
dataset_size: 5679522067
- config_name: subset_395
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5501346168
num_examples: 1895
download_size: 5520384395
dataset_size: 5501346168
- config_name: subset_396
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5651232402
num_examples: 1876
download_size: 5670065250
dataset_size: 5651232402
- config_name: subset_397
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5633568538
num_examples: 1913
download_size: 5652600075
dataset_size: 5633568538
- config_name: subset_398
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5714196421
num_examples: 1880
download_size: 5732965907
dataset_size: 5714196421
- config_name: subset_399
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5572336185
num_examples: 1877
download_size: 5591244870
dataset_size: 5572336185
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9222279718
num_examples: 2335
download_size: 9248723966
dataset_size: 9222279718
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8804710649
num_examples: 2330
download_size: 8830298390
dataset_size: 8804710649
- config_name: subset_400
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5549849671
num_examples: 1851
download_size: 5568713975
dataset_size: 5549849671
- config_name: subset_401
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5486210817
num_examples: 1867
download_size: 5503891276
dataset_size: 5486210817
- config_name: subset_402
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5512734340
num_examples: 1875
download_size: 5531168971
dataset_size: 5512734340
- config_name: subset_403
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5459967313
num_examples: 1875
download_size: 5477702565
dataset_size: 5459967313
- config_name: subset_404
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5421089700
num_examples: 1858
download_size: 5438768930
dataset_size: 5421089700
- config_name: subset_405
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5381438301
num_examples: 1854
download_size: 5399138146
dataset_size: 5381438301
- config_name: subset_406
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5668814613
num_examples: 1893
download_size: 5687655791
dataset_size: 5668814613
- config_name: subset_407
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5585531228
num_examples: 1892
download_size: 5604454885
dataset_size: 5585531228
- config_name: subset_408
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5615108470
num_examples: 1890
download_size: 5634076031
dataset_size: 5615108470
- config_name: subset_409
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5465452138
num_examples: 1863
download_size: 5483125223
dataset_size: 5465452138
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8710287860
num_examples: 2333
download_size: 8735694166
dataset_size: 8710287860
- config_name: subset_410
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5359572368
num_examples: 1846
download_size: 5377282763
dataset_size: 5359572368
- config_name: subset_411
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5516317168
num_examples: 1868
download_size: 5535311622
dataset_size: 5516317168
- config_name: subset_412
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5400216533
num_examples: 1857
download_size: 5417969613
dataset_size: 5400216533
- config_name: subset_413
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5494005531
num_examples: 1875
download_size: 5511716165
dataset_size: 5494005531
- config_name: subset_414
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5675888785
num_examples: 1914
download_size: 5694277198
dataset_size: 5675888785
- config_name: subset_415
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5474545409
num_examples: 1857
download_size: 5492267703
dataset_size: 5474545409
- config_name: subset_416
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5234769508
num_examples: 1810
download_size: 5252442928
dataset_size: 5234769508
- config_name: subset_417
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5486319755
num_examples: 1897
download_size: 5504200614
dataset_size: 5486319755
- config_name: subset_418
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5401473662
num_examples: 1837
download_size: 5419134555
dataset_size: 5401473662
- config_name: subset_419
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5606806053
num_examples: 1891
download_size: 5625758744
dataset_size: 5606806053
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8698716545
num_examples: 2319
download_size: 8723977082
dataset_size: 8698716545
- config_name: subset_420
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5471254988
num_examples: 1830
download_size: 5488846642
dataset_size: 5471254988
- config_name: subset_421
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5545693199
num_examples: 1863
download_size: 5564570217
dataset_size: 5545693199
- config_name: subset_422
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5462490753
num_examples: 1848
download_size: 5480157171
dataset_size: 5462490753
- config_name: subset_423
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5304669190
num_examples: 1795
download_size: 5322220839
dataset_size: 5304669190
- config_name: subset_424
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5341365493
num_examples: 1861
download_size: 5359110752
dataset_size: 5341365493
- config_name: subset_425
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5350785057
num_examples: 1834
download_size: 5368018737
dataset_size: 5350785057
- config_name: subset_426
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5449254258
num_examples: 1828
download_size: 5466866251
dataset_size: 5449254258
- config_name: subset_427
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5428177053
num_examples: 1839
download_size: 5445753421
dataset_size: 5428177053
- config_name: subset_428
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5341277629
num_examples: 1840
download_size: 5358123345
dataset_size: 5341277629
- config_name: subset_429
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5515616506
num_examples: 1874
download_size: 5534643620
dataset_size: 5515616506
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8756164503
num_examples: 2304
download_size: 8781405107
dataset_size: 8756164503
- config_name: subset_430
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5406352593
num_examples: 1836
download_size: 5424006146
dataset_size: 5406352593
- config_name: subset_431
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5423823295
num_examples: 1844
download_size: 5441482198
dataset_size: 5423823295
- config_name: subset_432
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5198823582
num_examples: 1795
download_size: 5216439406
dataset_size: 5198823582
- config_name: subset_433
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5450659403
num_examples: 1864
download_size: 5468386310
dataset_size: 5450659403
- config_name: subset_434
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5472901013
num_examples: 1853
download_size: 5490612375
dataset_size: 5472901013
- config_name: subset_435
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5241153157
num_examples: 1860
download_size: 5259159985
dataset_size: 5241153157
- config_name: subset_436
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5402502532
num_examples: 1835
download_size: 5420139167
dataset_size: 5402502532
- config_name: subset_437
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5359433459
num_examples: 1852
download_size: 5377151912
dataset_size: 5359433459
- config_name: subset_438
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5421456536
num_examples: 1831
download_size: 5438898240
dataset_size: 5421456536
- config_name: subset_439
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5195835510
num_examples: 1805
download_size: 5213515018
dataset_size: 5195835510
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8672719846
num_examples: 2336
download_size: 8698287917
dataset_size: 8672719846
- config_name: subset_440
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5211028929
num_examples: 1791
download_size: 5228711324
dataset_size: 5211028929
- config_name: subset_441
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5287799022
num_examples: 1811
download_size: 5304657780
dataset_size: 5287799022
- config_name: subset_442
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5460203931
num_examples: 1861
download_size: 5477916788
dataset_size: 5460203931
- config_name: subset_443
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5341559976
num_examples: 1833
download_size: 5359231128
dataset_size: 5341559976
- config_name: subset_444
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5282972327
num_examples: 1850
download_size: 5300772291
dataset_size: 5282972327
- config_name: subset_445
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5370295753
num_examples: 1819
download_size: 5387858600
dataset_size: 5370295753
- config_name: subset_446
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5527721828
num_examples: 1900
download_size: 5546847385
dataset_size: 5527721828
- config_name: subset_447
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5170537916
num_examples: 1798
download_size: 5188149369
dataset_size: 5170537916
- config_name: subset_448
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5045110329
num_examples: 1793
download_size: 5062894618
dataset_size: 5045110329
- config_name: subset_449
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5296134974
num_examples: 1819
download_size: 5313691031
dataset_size: 5296134974
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8678093474
num_examples: 2317
download_size: 8703505346
dataset_size: 8678093474
- config_name: subset_450
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5227888259
num_examples: 1792
download_size: 5245524026
dataset_size: 5227888259
- config_name: subset_451
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5395914600
num_examples: 1844
download_size: 5413495899
dataset_size: 5395914600
- config_name: subset_452
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5152087937
num_examples: 1798
download_size: 5169792209
dataset_size: 5152087937
- config_name: subset_453
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5354382541
num_examples: 1856
download_size: 5372179786
dataset_size: 5354382541
- config_name: subset_454
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5235241164
num_examples: 1812
download_size: 5252935001
dataset_size: 5235241164
- config_name: subset_455
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5238618363
num_examples: 1797
download_size: 5256110820
dataset_size: 5238618363
- config_name: subset_456
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5138542420
num_examples: 1809
download_size: 5155786935
dataset_size: 5138542420
- config_name: subset_457
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5190319997
num_examples: 1793
download_size: 5207940405
dataset_size: 5190319997
- config_name: subset_458
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5074698400
num_examples: 1773
download_size: 5092001002
dataset_size: 5074698400
- config_name: subset_459
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5224119837
num_examples: 1836
download_size: 5241960117
dataset_size: 5224119837
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8865996991
num_examples: 2324
download_size: 8891271214
dataset_size: 8865996991
- config_name: subset_460
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5167811132
num_examples: 1794
download_size: 5185456732
dataset_size: 5167811132
- config_name: subset_461
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5212817150
num_examples: 1798
download_size: 5230451847
dataset_size: 5212817150
- config_name: subset_462
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5179933781
num_examples: 1781
download_size: 5197606502
dataset_size: 5179933781
- config_name: subset_463
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5167789916
num_examples: 1786
download_size: 5185423269
dataset_size: 5167789916
- config_name: subset_464
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5217441411
num_examples: 1792
download_size: 5235072768
dataset_size: 5217441411
- config_name: subset_465
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5105153684
num_examples: 1779
download_size: 5122742615
dataset_size: 5105153684
- config_name: subset_466
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5210588024
num_examples: 1814
download_size: 5228295537
dataset_size: 5210588024
- config_name: subset_467
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5081125540
num_examples: 1803
download_size: 5098824949
dataset_size: 5081125540
- config_name: subset_468
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5117948515
num_examples: 1814
download_size: 5135748560
dataset_size: 5117948515
- config_name: subset_469
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5162087650
num_examples: 1796
download_size: 5179699320
dataset_size: 5162087650
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8574366128
num_examples: 2300
download_size: 8599715311
dataset_size: 8574366128
- config_name: subset_470
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4994072235
num_examples: 1777
download_size: 5010475002
dataset_size: 4994072235
- config_name: subset_471
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5276243788
num_examples: 1794
download_size: 5293756032
dataset_size: 5276243788
- config_name: subset_472
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5062841734
num_examples: 1780
download_size: 5080529463
dataset_size: 5062841734
- config_name: subset_473
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5107836870
num_examples: 1801
download_size: 5125586955
dataset_size: 5107836870
- config_name: subset_474
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5246748720
num_examples: 1848
download_size: 5264586935
dataset_size: 5246748720
- config_name: subset_475
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5042490978
num_examples: 1800
download_size: 5060329717
dataset_size: 5042490978
- config_name: subset_476
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5001039986
num_examples: 1750
download_size: 5018672922
dataset_size: 5001039986
- config_name: subset_477
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5092615719
num_examples: 1777
download_size: 5110210276
dataset_size: 5092615719
- config_name: subset_478
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4850526048
num_examples: 1764
download_size: 4867021896
dataset_size: 4850526048
- config_name: subset_479
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5084336568
num_examples: 1762
download_size: 5101875876
dataset_size: 5084336568
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8492481742
num_examples: 2260
download_size: 8516829975
dataset_size: 8492481742
- config_name: subset_480
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5040367452
num_examples: 1753
download_size: 5058002277
dataset_size: 5040367452
- config_name: subset_481
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5211223950
num_examples: 1769
download_size: 5228721656
dataset_size: 5211223950
- config_name: subset_482
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5064160961
num_examples: 1768
download_size: 5081793340
dataset_size: 5064160961
- config_name: subset_483
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5002447439
num_examples: 1764
download_size: 5020141842
dataset_size: 5002447439
- config_name: subset_484
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5139564443
num_examples: 1772
download_size: 5157067881
dataset_size: 5139564443
- config_name: subset_485
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4972193135
num_examples: 1759
download_size: 4988670327
dataset_size: 4972193135
- config_name: subset_486
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5080451174
num_examples: 1779
download_size: 5098075891
dataset_size: 5080451174
- config_name: subset_487
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4992071883
num_examples: 1783
download_size: 5008589727
dataset_size: 4992071883
- config_name: subset_488
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5131980173
num_examples: 1782
download_size: 5149629797
dataset_size: 5131980173
- config_name: subset_489
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5160652126
num_examples: 1789
download_size: 5178222922
dataset_size: 5160652126
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8462521240
num_examples: 2288
download_size: 8487219724
dataset_size: 8462521240
- config_name: subset_490
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5078433909
num_examples: 1778
download_size: 5096096974
dataset_size: 5078433909
- config_name: subset_491
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4954058939
num_examples: 1760
download_size: 4969657618
dataset_size: 4954058939
- config_name: subset_492
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5179632754
num_examples: 1811
download_size: 5197330616
dataset_size: 5179632754
- config_name: subset_493
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4836379979
num_examples: 1751
download_size: 4852873749
dataset_size: 4836379979
- config_name: subset_494
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5196505650
num_examples: 1778
download_size: 5214082178
dataset_size: 5196505650
- config_name: subset_495
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5056214426
num_examples: 1759
download_size: 5073880256
dataset_size: 5056214426
- config_name: subset_496
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5107787013
num_examples: 1793
download_size: 5125460032
dataset_size: 5107787013
- config_name: subset_497
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4988903117
num_examples: 1739
download_size: 5005180518
dataset_size: 4988903117
- config_name: subset_498
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5098723977
num_examples: 1768
download_size: 5116262210
dataset_size: 5098723977
- config_name: subset_499
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4967100912
num_examples: 1759
download_size: 4983585885
dataset_size: 4967100912
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9393375462
num_examples: 2369
download_size: 9419862425
dataset_size: 9393375462
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8583867866
num_examples: 2325
download_size: 8609386789
dataset_size: 8583867866
- config_name: subset_500
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5006880381
num_examples: 1774
download_size: 5024610599
dataset_size: 5006880381
- config_name: subset_501
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5124536633
num_examples: 1756
download_size: 5142049601
dataset_size: 5124536633
- config_name: subset_502
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5094027911
num_examples: 1799
download_size: 5111825820
dataset_size: 5094027911
- config_name: subset_503
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4992996616
num_examples: 1760
download_size: 5009428612
dataset_size: 4992996616
- config_name: subset_504
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5069483101
num_examples: 1772
download_size: 5087169476
dataset_size: 5069483101
- config_name: subset_505
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4950612917
num_examples: 1786
download_size: 4967166669
dataset_size: 4950612917
- config_name: subset_506
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4986456478
num_examples: 1752
download_size: 5002822686
dataset_size: 4986456478
- config_name: subset_507
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5192519648
num_examples: 1818
download_size: 5210282825
dataset_size: 5192519648
- config_name: subset_508
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4957357313
num_examples: 1784
download_size: 4973917560
dataset_size: 4957357313
- config_name: subset_509
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4848142665
num_examples: 1748
download_size: 4864595255
dataset_size: 4848142665
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8641516945
num_examples: 2312
download_size: 8666933327
dataset_size: 8641516945
- config_name: subset_510
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5017822475
num_examples: 1764
download_size: 5035531086
dataset_size: 5017822475
- config_name: subset_511
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4841543467
num_examples: 1720
download_size: 4857830756
dataset_size: 4841543467
- config_name: subset_512
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4827210214
num_examples: 1724
download_size: 4843605887
dataset_size: 4827210214
- config_name: subset_513
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4959797391
num_examples: 1741
download_size: 4976126672
dataset_size: 4959797391
- config_name: subset_514
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4917507806
num_examples: 1765
download_size: 4934041292
dataset_size: 4917507806
- config_name: subset_515
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5015255942
num_examples: 1765
download_size: 5032999391
dataset_size: 5015255942
- config_name: subset_516
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4956023006
num_examples: 1762
download_size: 4972504492
dataset_size: 4956023006
- config_name: subset_517
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4841543042
num_examples: 1715
download_size: 4857906126
dataset_size: 4841543042
- config_name: subset_518
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5000516431
num_examples: 1765
download_size: 5018163539
dataset_size: 5000516431
- config_name: subset_519
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4883604577
num_examples: 1761
download_size: 4900132817
dataset_size: 4883604577
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8576893504
num_examples: 2322
download_size: 8602406520
dataset_size: 8576893504
- config_name: subset_520
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4827222367
num_examples: 1723
download_size: 4843625416
dataset_size: 4827222367
- config_name: subset_521
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5119527004
num_examples: 1763
download_size: 5137013660
dataset_size: 5119527004
- config_name: subset_522
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4751182678
num_examples: 1709
download_size: 4767561341
dataset_size: 4751182678
- config_name: subset_523
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4966546392
num_examples: 1746
download_size: 4982933866
dataset_size: 4966546392
- config_name: subset_524
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4839982423
num_examples: 1733
download_size: 4856397564
dataset_size: 4839982423
- config_name: subset_525
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4880100887
num_examples: 1733
download_size: 4896388400
dataset_size: 4880100887
- config_name: subset_526
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4944956194
num_examples: 1751
download_size: 4961404165
dataset_size: 4944956194
- config_name: subset_527
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4724774049
num_examples: 1753
download_size: 4741422004
dataset_size: 4724774049
- config_name: subset_528
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4720728084
num_examples: 1712
download_size: 4737228510
dataset_size: 4720728084
- config_name: subset_529
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4728970510
num_examples: 1679
download_size: 4745230535
dataset_size: 4728970510
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8683782150
num_examples: 2342
download_size: 8709352655
dataset_size: 8683782150
- config_name: subset_530
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4848843006
num_examples: 1739
download_size: 4865321547
dataset_size: 4848843006
- config_name: subset_531
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4925676164
num_examples: 1729
download_size: 4941999578
dataset_size: 4925676164
- config_name: subset_532
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4824313412
num_examples: 1747
download_size: 4840865788
dataset_size: 4824313412
- config_name: subset_533
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4962296049
num_examples: 1763
download_size: 4978769125
dataset_size: 4962296049
- config_name: subset_534
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4854030176
num_examples: 1747
download_size: 4870430020
dataset_size: 4854030176
- config_name: subset_535
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4993758460
num_examples: 1751
download_size: 5010062374
dataset_size: 4993758460
- config_name: subset_536
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4886473611
num_examples: 1749
download_size: 4902879445
dataset_size: 4886473611
- config_name: subset_537
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4676667636
num_examples: 1689
download_size: 4693037818
dataset_size: 4676667636
- config_name: subset_538
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4833313763
num_examples: 1708
download_size: 4849661983
dataset_size: 4833313763
- config_name: subset_539
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4820731256
num_examples: 1713
download_size: 4837106172
dataset_size: 4820731256
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8710326861
num_examples: 2307
download_size: 8735582058
dataset_size: 8710326861
- config_name: subset_540
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4719107719
num_examples: 1651
download_size: 4735329499
dataset_size: 4719107719
- config_name: subset_541
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4934806935
num_examples: 1730
download_size: 4951205229
dataset_size: 4934806935
- config_name: subset_542
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4701271733
num_examples: 1689
download_size: 4717578781
dataset_size: 4701271733
- config_name: subset_543
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4899658757
num_examples: 1744
download_size: 4916097136
dataset_size: 4899658757
- config_name: subset_544
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4763250993
num_examples: 1684
download_size: 4779521855
dataset_size: 4763250993
- config_name: subset_545
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4852631027
num_examples: 1734
download_size: 4868988741
dataset_size: 4852631027
- config_name: subset_546
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4963449637
num_examples: 1781
download_size: 4979978435
dataset_size: 4963449637
- config_name: subset_547
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4793351141
num_examples: 1709
download_size: 4809748563
dataset_size: 4793351141
- config_name: subset_548
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4900748379
num_examples: 1731
download_size: 4917165264
dataset_size: 4900748379
- config_name: subset_549
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4796440863
num_examples: 1738
download_size: 4812935036
dataset_size: 4796440863
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8589374844
num_examples: 2333
download_size: 8614804390
dataset_size: 8589374844
- config_name: subset_550
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4851146480
num_examples: 1730
download_size: 4867486443
dataset_size: 4851146480
- config_name: subset_551
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4806920169
num_examples: 1735
download_size: 4823404934
dataset_size: 4806920169
- config_name: subset_552
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4939535183
num_examples: 1741
download_size: 4955957949
dataset_size: 4939535183
- config_name: subset_553
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4887296227
num_examples: 1731
download_size: 4903711098
dataset_size: 4887296227
- config_name: subset_554
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4876913467
num_examples: 1712
download_size: 4893235389
dataset_size: 4876913467
- config_name: subset_555
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4763891878
num_examples: 1699
download_size: 4780271424
dataset_size: 4763891878
- config_name: subset_556
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4813542869
num_examples: 1700
download_size: 4829870452
dataset_size: 4813542869
- config_name: subset_557
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4949813511
num_examples: 1737
download_size: 4966212812
dataset_size: 4949813511
- config_name: subset_558
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4863256507
num_examples: 1713
download_size: 4879563526
dataset_size: 4863256507
- config_name: subset_559
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4813857641
num_examples: 1688
download_size: 4830149888
dataset_size: 4813857641
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8404663446
num_examples: 2305
download_size: 8429593232
dataset_size: 8404663446
- config_name: subset_560
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5027125575
num_examples: 1767
download_size: 5044772404
dataset_size: 5027125575
- config_name: subset_561
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4753351049
num_examples: 1702
download_size: 4769709294
dataset_size: 4753351049
- config_name: subset_562
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4959336086
num_examples: 1766
download_size: 4975839356
dataset_size: 4959336086
- config_name: subset_563
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4677555681
num_examples: 1719
download_size: 4693995389
dataset_size: 4677555681
- config_name: subset_564
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4886582568
num_examples: 1728
download_size: 4902929704
dataset_size: 4886582568
- config_name: subset_565
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4668792783
num_examples: 1704
download_size: 4685195058
dataset_size: 4668792783
- config_name: subset_566
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4767108269
num_examples: 1724
download_size: 4783586217
dataset_size: 4767108269
- config_name: subset_567
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4833945864
num_examples: 1717
download_size: 4850348458
dataset_size: 4833945864
- config_name: subset_568
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4915556985
num_examples: 1774
download_size: 4932130855
dataset_size: 4915556985
- config_name: subset_569
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4709907674
num_examples: 1706
download_size: 4726315419
dataset_size: 4709907674
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8707960859
num_examples: 2305
download_size: 8733233210
dataset_size: 8707960859
- config_name: subset_570
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4729178004
num_examples: 1710
download_size: 4745570672
dataset_size: 4729178004
- config_name: subset_571
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4764986718
num_examples: 1701
download_size: 4781352426
dataset_size: 4764986718
- config_name: subset_572
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5080588950
num_examples: 1759
download_size: 5098172894
dataset_size: 5080588950
- config_name: subset_573
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4592087345
num_examples: 1672
download_size: 4608522215
dataset_size: 4592087345
- config_name: subset_574
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4749233430
num_examples: 1738
download_size: 4765716987
dataset_size: 4749233430
- config_name: subset_575
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4681780379
num_examples: 1690
download_size: 4698184776
dataset_size: 4681780379
- config_name: subset_576
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4797287612
num_examples: 1715
download_size: 4813708231
dataset_size: 4797287612
- config_name: subset_577
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4706443914
num_examples: 1664
download_size: 4722710311
dataset_size: 4706443914
- config_name: subset_578
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4792974308
num_examples: 1713
download_size: 4809316855
dataset_size: 4792974308
- config_name: subset_579
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4782616285
num_examples: 1696
download_size: 4798825625
dataset_size: 4782616285
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8547880971
num_examples: 2310
download_size: 8573354613
dataset_size: 8547880971
- config_name: subset_580
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4798359664
num_examples: 1738
download_size: 4814906810
dataset_size: 4798359664
- config_name: subset_581
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4764282563
num_examples: 1714
download_size: 4779919842
dataset_size: 4764282563
- config_name: subset_582
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4747468598
num_examples: 1716
download_size: 4764007272
dataset_size: 4747468598
- config_name: subset_583
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4764823347
num_examples: 1709
download_size: 4781204415
dataset_size: 4764823347
- config_name: subset_584
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4604051537
num_examples: 1677
download_size: 4620416021
dataset_size: 4604051537
- config_name: subset_585
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4672035755
num_examples: 1703
download_size: 4688380573
dataset_size: 4672035755
- config_name: subset_586
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4681218202
num_examples: 1684
download_size: 4697593808
dataset_size: 4681218202
- config_name: subset_587
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4825361292
num_examples: 1726
download_size: 4841817262
dataset_size: 4825361292
- config_name: subset_588
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4645749382
num_examples: 1687
download_size: 4662093781
dataset_size: 4645749382
- config_name: subset_589
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4912237521
num_examples: 1743
download_size: 4928630235
dataset_size: 4912237521
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8461827260
num_examples: 2325
download_size: 8486775143
dataset_size: 8461827260
- config_name: subset_590
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4727258290
num_examples: 1699
download_size: 4743690927
dataset_size: 4727258290
- config_name: subset_591
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4711724836
num_examples: 1666
download_size: 4727922665
dataset_size: 4711724836
- config_name: subset_592
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4675518139
num_examples: 1671
download_size: 4691855590
dataset_size: 4675518139
- config_name: subset_593
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4545694594
num_examples: 1657
download_size: 4562097226
dataset_size: 4545694594
- config_name: subset_594
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4738827399
num_examples: 1685
download_size: 4755062127
dataset_size: 4738827399
- config_name: subset_595
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4735527365
num_examples: 1691
download_size: 4751819081
dataset_size: 4735527365
- config_name: subset_596
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4765212701
num_examples: 1707
download_size: 4781602554
dataset_size: 4765212701
- config_name: subset_597
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4726372225
num_examples: 1693
download_size: 4742749061
dataset_size: 4726372225
- config_name: subset_598
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4684168718
num_examples: 1666
download_size: 4700439467
dataset_size: 4684168718
- config_name: subset_599
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4767996765
num_examples: 1709
download_size: 4784410520
dataset_size: 4767996765
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9285287309
num_examples: 2370
download_size: 9311895445
dataset_size: 9285287309
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8408317150
num_examples: 2312
download_size: 8433001852
dataset_size: 8408317150
- config_name: subset_600
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4545942524
num_examples: 1681
download_size: 4562436361
dataset_size: 4545942524
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8499865155
num_examples: 2301
download_size: 8524551243
dataset_size: 8499865155
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8559091459
num_examples: 2324
download_size: 8584522214
dataset_size: 8559091459
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6142862341
num_examples: 1662
download_size: 6161227141
dataset_size: 6142862341
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 5259741271
num_examples: 1442
download_size: 5275748694
dataset_size: 5259741271
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8625578309
num_examples: 2339
download_size: 8651120510
dataset_size: 8625578309
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8551743800
num_examples: 2318
download_size: 8577326362
dataset_size: 8551743800
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8615093849
num_examples: 2330
download_size: 8640632032
dataset_size: 8615093849
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8543966902
num_examples: 2328
download_size: 8569476616
dataset_size: 8543966902
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8183497934
num_examples: 2264
download_size: 8208230720
dataset_size: 8183497934
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9197384611
num_examples: 2336
download_size: 9223916148
dataset_size: 9197384611
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8599789401
num_examples: 2341
download_size: 8625410074
dataset_size: 8599789401
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8669374501
num_examples: 2339
download_size: 8694859707
dataset_size: 8669374501
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8580990554
num_examples: 2335
download_size: 8606529844
dataset_size: 8580990554
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8325387592
num_examples: 2326
download_size: 8350346767
dataset_size: 8325387592
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8541577615
num_examples: 2340
download_size: 8567444538
dataset_size: 8541577615
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8310831871
num_examples: 2317
download_size: 8335693581
dataset_size: 8310831871
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8496448393
num_examples: 2334
download_size: 8521302853
dataset_size: 8496448393
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8366158586
num_examples: 2334
download_size: 8391162879
dataset_size: 8366158586
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8602831441
num_examples: 2338
download_size: 8628474767
dataset_size: 8602831441
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8551438760
num_examples: 2348
download_size: 8577261634
dataset_size: 8551438760
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9179600010
num_examples: 2351
download_size: 9205367072
dataset_size: 9179600010
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8443566109
num_examples: 2328
download_size: 8467915489
dataset_size: 8443566109
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8330670656
num_examples: 2304
download_size: 8355508310
dataset_size: 8330670656
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8638407293
num_examples: 2340
download_size: 8663896901
dataset_size: 8638407293
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8395628361
num_examples: 2321
download_size: 8420104468
dataset_size: 8395628361
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8423777933
num_examples: 2308
download_size: 8448661958
dataset_size: 8423777933
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8467387205
num_examples: 2329
download_size: 8491979188
dataset_size: 8467387205
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8513583339
num_examples: 2347
download_size: 8539393130
dataset_size: 8513583339
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8499498235
num_examples: 2327
download_size: 8524419370
dataset_size: 8499498235
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8414775980
num_examples: 2328
download_size: 8439586837
dataset_size: 8414775980
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8435909957
num_examples: 2310
download_size: 8460553342
dataset_size: 8435909957
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9105639775
num_examples: 2347
download_size: 9132191568
dataset_size: 9105639775
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8168274604
num_examples: 2314
download_size: 8193277402
dataset_size: 8168274604
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8558554978
num_examples: 2331
download_size: 8584007759
dataset_size: 8558554978
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8420749628
num_examples: 2328
download_size: 8445549861
dataset_size: 8420749628
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8448064167
num_examples: 2332
download_size: 8472928020
dataset_size: 8448064167
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8255768422
num_examples: 2312
download_size: 8280666681
dataset_size: 8255768422
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8235367705
num_examples: 2323
download_size: 8260286198
dataset_size: 8235367705
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8489898026
num_examples: 2301
download_size: 8514738968
dataset_size: 8489898026
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8455644248
num_examples: 2322
download_size: 8480516230
dataset_size: 8455644248
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8338854283
num_examples: 2304
download_size: 8363670342
dataset_size: 8338854283
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: frA.id
dtype: string
- name: frA.laser_score
dtype: float64
- name: frA.audio.speaker_embedding
sequence: float32
- name: frA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8270322432
num_examples: 2300
download_size: 8295131223
dataset_size: 8270322432
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
- config_name: subset_224
data_files:
- split: train
path: subset_224/train-*
- config_name: subset_225
data_files:
- split: train
path: subset_225/train-*
- config_name: subset_226
data_files:
- split: train
path: subset_226/train-*
- config_name: subset_227
data_files:
- split: train
path: subset_227/train-*
- config_name: subset_228
data_files:
- split: train
path: subset_228/train-*
- config_name: subset_229
data_files:
- split: train
path: subset_229/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_230
data_files:
- split: train
path: subset_230/train-*
- config_name: subset_231
data_files:
- split: train
path: subset_231/train-*
- config_name: subset_232
data_files:
- split: train
path: subset_232/train-*
- config_name: subset_233
data_files:
- split: train
path: subset_233/train-*
- config_name: subset_234
data_files:
- split: train
path: subset_234/train-*
- config_name: subset_235
data_files:
- split: train
path: subset_235/train-*
- config_name: subset_236
data_files:
- split: train
path: subset_236/train-*
- config_name: subset_237
data_files:
- split: train
path: subset_237/train-*
- config_name: subset_238
data_files:
- split: train
path: subset_238/train-*
- config_name: subset_239
data_files:
- split: train
path: subset_239/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_240
data_files:
- split: train
path: subset_240/train-*
- config_name: subset_241
data_files:
- split: train
path: subset_241/train-*
- config_name: subset_242
data_files:
- split: train
path: subset_242/train-*
- config_name: subset_243
data_files:
- split: train
path: subset_243/train-*
- config_name: subset_244
data_files:
- split: train
path: subset_244/train-*
- config_name: subset_245
data_files:
- split: train
path: subset_245/train-*
- config_name: subset_246
data_files:
- split: train
path: subset_246/train-*
- config_name: subset_247
data_files:
- split: train
path: subset_247/train-*
- config_name: subset_248
data_files:
- split: train
path: subset_248/train-*
- config_name: subset_249
data_files:
- split: train
path: subset_249/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_250
data_files:
- split: train
path: subset_250/train-*
- config_name: subset_251
data_files:
- split: train
path: subset_251/train-*
- config_name: subset_252
data_files:
- split: train
path: subset_252/train-*
- config_name: subset_253
data_files:
- split: train
path: subset_253/train-*
- config_name: subset_254
data_files:
- split: train
path: subset_254/train-*
- config_name: subset_255
data_files:
- split: train
path: subset_255/train-*
- config_name: subset_256
data_files:
- split: train
path: subset_256/train-*
- config_name: subset_257
data_files:
- split: train
path: subset_257/train-*
- config_name: subset_258
data_files:
- split: train
path: subset_258/train-*
- config_name: subset_259
data_files:
- split: train
path: subset_259/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_260
data_files:
- split: train
path: subset_260/train-*
- config_name: subset_261
data_files:
- split: train
path: subset_261/train-*
- config_name: subset_262
data_files:
- split: train
path: subset_262/train-*
- config_name: subset_263
data_files:
- split: train
path: subset_263/train-*
- config_name: subset_264
data_files:
- split: train
path: subset_264/train-*
- config_name: subset_265
data_files:
- split: train
path: subset_265/train-*
- config_name: subset_266
data_files:
- split: train
path: subset_266/train-*
- config_name: subset_267
data_files:
- split: train
path: subset_267/train-*
- config_name: subset_268
data_files:
- split: train
path: subset_268/train-*
- config_name: subset_269
data_files:
- split: train
path: subset_269/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_270
data_files:
- split: train
path: subset_270/train-*
- config_name: subset_271
data_files:
- split: train
path: subset_271/train-*
- config_name: subset_272
data_files:
- split: train
path: subset_272/train-*
- config_name: subset_273
data_files:
- split: train
path: subset_273/train-*
- config_name: subset_274
data_files:
- split: train
path: subset_274/train-*
- config_name: subset_275
data_files:
- split: train
path: subset_275/train-*
- config_name: subset_276
data_files:
- split: train
path: subset_276/train-*
- config_name: subset_277
data_files:
- split: train
path: subset_277/train-*
- config_name: subset_278
data_files:
- split: train
path: subset_278/train-*
- config_name: subset_279
data_files:
- split: train
path: subset_279/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_280
data_files:
- split: train
path: subset_280/train-*
- config_name: subset_281
data_files:
- split: train
path: subset_281/train-*
- config_name: subset_282
data_files:
- split: train
path: subset_282/train-*
- config_name: subset_283
data_files:
- split: train
path: subset_283/train-*
- config_name: subset_284
data_files:
- split: train
path: subset_284/train-*
- config_name: subset_285
data_files:
- split: train
path: subset_285/train-*
- config_name: subset_286
data_files:
- split: train
path: subset_286/train-*
- config_name: subset_287
data_files:
- split: train
path: subset_287/train-*
- config_name: subset_288
data_files:
- split: train
path: subset_288/train-*
- config_name: subset_289
data_files:
- split: train
path: subset_289/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_290
data_files:
- split: train
path: subset_290/train-*
- config_name: subset_291
data_files:
- split: train
path: subset_291/train-*
- config_name: subset_292
data_files:
- split: train
path: subset_292/train-*
- config_name: subset_293
data_files:
- split: train
path: subset_293/train-*
- config_name: subset_294
data_files:
- split: train
path: subset_294/train-*
- config_name: subset_295
data_files:
- split: train
path: subset_295/train-*
- config_name: subset_296
data_files:
- split: train
path: subset_296/train-*
- config_name: subset_297
data_files:
- split: train
path: subset_297/train-*
- config_name: subset_298
data_files:
- split: train
path: subset_298/train-*
- config_name: subset_299
data_files:
- split: train
path: subset_299/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_300
data_files:
- split: train
path: subset_300/train-*
- config_name: subset_301
data_files:
- split: train
path: subset_301/train-*
- config_name: subset_302
data_files:
- split: train
path: subset_302/train-*
- config_name: subset_303
data_files:
- split: train
path: subset_303/train-*
- config_name: subset_304
data_files:
- split: train
path: subset_304/train-*
- config_name: subset_305
data_files:
- split: train
path: subset_305/train-*
- config_name: subset_306
data_files:
- split: train
path: subset_306/train-*
- config_name: subset_307
data_files:
- split: train
path: subset_307/train-*
- config_name: subset_308
data_files:
- split: train
path: subset_308/train-*
- config_name: subset_309
data_files:
- split: train
path: subset_309/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_310
data_files:
- split: train
path: subset_310/train-*
- config_name: subset_311
data_files:
- split: train
path: subset_311/train-*
- config_name: subset_312
data_files:
- split: train
path: subset_312/train-*
- config_name: subset_313
data_files:
- split: train
path: subset_313/train-*
- config_name: subset_314
data_files:
- split: train
path: subset_314/train-*
- config_name: subset_315
data_files:
- split: train
path: subset_315/train-*
- config_name: subset_316
data_files:
- split: train
path: subset_316/train-*
- config_name: subset_317
data_files:
- split: train
path: subset_317/train-*
- config_name: subset_318
data_files:
- split: train
path: subset_318/train-*
- config_name: subset_319
data_files:
- split: train
path: subset_319/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_320
data_files:
- split: train
path: subset_320/train-*
- config_name: subset_321
data_files:
- split: train
path: subset_321/train-*
- config_name: subset_322
data_files:
- split: train
path: subset_322/train-*
- config_name: subset_323
data_files:
- split: train
path: subset_323/train-*
- config_name: subset_324
data_files:
- split: train
path: subset_324/train-*
- config_name: subset_325
data_files:
- split: train
path: subset_325/train-*
- config_name: subset_326
data_files:
- split: train
path: subset_326/train-*
- config_name: subset_327
data_files:
- split: train
path: subset_327/train-*
- config_name: subset_328
data_files:
- split: train
path: subset_328/train-*
- config_name: subset_329
data_files:
- split: train
path: subset_329/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_330
data_files:
- split: train
path: subset_330/train-*
- config_name: subset_331
data_files:
- split: train
path: subset_331/train-*
- config_name: subset_332
data_files:
- split: train
path: subset_332/train-*
- config_name: subset_333
data_files:
- split: train
path: subset_333/train-*
- config_name: subset_334
data_files:
- split: train
path: subset_334/train-*
- config_name: subset_335
data_files:
- split: train
path: subset_335/train-*
- config_name: subset_336
data_files:
- split: train
path: subset_336/train-*
- config_name: subset_337
data_files:
- split: train
path: subset_337/train-*
- config_name: subset_338
data_files:
- split: train
path: subset_338/train-*
- config_name: subset_339
data_files:
- split: train
path: subset_339/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_340
data_files:
- split: train
path: subset_340/train-*
- config_name: subset_341
data_files:
- split: train
path: subset_341/train-*
- config_name: subset_342
data_files:
- split: train
path: subset_342/train-*
- config_name: subset_343
data_files:
- split: train
path: subset_343/train-*
- config_name: subset_344
data_files:
- split: train
path: subset_344/train-*
- config_name: subset_345
data_files:
- split: train
path: subset_345/train-*
- config_name: subset_346
data_files:
- split: train
path: subset_346/train-*
- config_name: subset_347
data_files:
- split: train
path: subset_347/train-*
- config_name: subset_348
data_files:
- split: train
path: subset_348/train-*
- config_name: subset_349
data_files:
- split: train
path: subset_349/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_350
data_files:
- split: train
path: subset_350/train-*
- config_name: subset_351
data_files:
- split: train
path: subset_351/train-*
- config_name: subset_352
data_files:
- split: train
path: subset_352/train-*
- config_name: subset_353
data_files:
- split: train
path: subset_353/train-*
- config_name: subset_354
data_files:
- split: train
path: subset_354/train-*
- config_name: subset_355
data_files:
- split: train
path: subset_355/train-*
- config_name: subset_356
data_files:
- split: train
path: subset_356/train-*
- config_name: subset_357
data_files:
- split: train
path: subset_357/train-*
- config_name: subset_358
data_files:
- split: train
path: subset_358/train-*
- config_name: subset_359
data_files:
- split: train
path: subset_359/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_360
data_files:
- split: train
path: subset_360/train-*
- config_name: subset_361
data_files:
- split: train
path: subset_361/train-*
- config_name: subset_362
data_files:
- split: train
path: subset_362/train-*
- config_name: subset_363
data_files:
- split: train
path: subset_363/train-*
- config_name: subset_364
data_files:
- split: train
path: subset_364/train-*
- config_name: subset_365
data_files:
- split: train
path: subset_365/train-*
- config_name: subset_366
data_files:
- split: train
path: subset_366/train-*
- config_name: subset_367
data_files:
- split: train
path: subset_367/train-*
- config_name: subset_368
data_files:
- split: train
path: subset_368/train-*
- config_name: subset_369
data_files:
- split: train
path: subset_369/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_370
data_files:
- split: train
path: subset_370/train-*
- config_name: subset_371
data_files:
- split: train
path: subset_371/train-*
- config_name: subset_372
data_files:
- split: train
path: subset_372/train-*
- config_name: subset_373
data_files:
- split: train
path: subset_373/train-*
- config_name: subset_374
data_files:
- split: train
path: subset_374/train-*
- config_name: subset_375
data_files:
- split: train
path: subset_375/train-*
- config_name: subset_376
data_files:
- split: train
path: subset_376/train-*
- config_name: subset_377
data_files:
- split: train
path: subset_377/train-*
- config_name: subset_378
data_files:
- split: train
path: subset_378/train-*
- config_name: subset_379
data_files:
- split: train
path: subset_379/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_380
data_files:
- split: train
path: subset_380/train-*
- config_name: subset_381
data_files:
- split: train
path: subset_381/train-*
- config_name: subset_382
data_files:
- split: train
path: subset_382/train-*
- config_name: subset_383
data_files:
- split: train
path: subset_383/train-*
- config_name: subset_384
data_files:
- split: train
path: subset_384/train-*
- config_name: subset_385
data_files:
- split: train
path: subset_385/train-*
- config_name: subset_386
data_files:
- split: train
path: subset_386/train-*
- config_name: subset_387
data_files:
- split: train
path: subset_387/train-*
- config_name: subset_388
data_files:
- split: train
path: subset_388/train-*
- config_name: subset_389
data_files:
- split: train
path: subset_389/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_390
data_files:
- split: train
path: subset_390/train-*
- config_name: subset_391
data_files:
- split: train
path: subset_391/train-*
- config_name: subset_392
data_files:
- split: train
path: subset_392/train-*
- config_name: subset_393
data_files:
- split: train
path: subset_393/train-*
- config_name: subset_394
data_files:
- split: train
path: subset_394/train-*
- config_name: subset_395
data_files:
- split: train
path: subset_395/train-*
- config_name: subset_396
data_files:
- split: train
path: subset_396/train-*
- config_name: subset_397
data_files:
- split: train
path: subset_397/train-*
- config_name: subset_398
data_files:
- split: train
path: subset_398/train-*
- config_name: subset_399
data_files:
- split: train
path: subset_399/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_400
data_files:
- split: train
path: subset_400/train-*
- config_name: subset_401
data_files:
- split: train
path: subset_401/train-*
- config_name: subset_402
data_files:
- split: train
path: subset_402/train-*
- config_name: subset_403
data_files:
- split: train
path: subset_403/train-*
- config_name: subset_404
data_files:
- split: train
path: subset_404/train-*
- config_name: subset_405
data_files:
- split: train
path: subset_405/train-*
- config_name: subset_406
data_files:
- split: train
path: subset_406/train-*
- config_name: subset_407
data_files:
- split: train
path: subset_407/train-*
- config_name: subset_408
data_files:
- split: train
path: subset_408/train-*
- config_name: subset_409
data_files:
- split: train
path: subset_409/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_410
data_files:
- split: train
path: subset_410/train-*
- config_name: subset_411
data_files:
- split: train
path: subset_411/train-*
- config_name: subset_412
data_files:
- split: train
path: subset_412/train-*
- config_name: subset_413
data_files:
- split: train
path: subset_413/train-*
- config_name: subset_414
data_files:
- split: train
path: subset_414/train-*
- config_name: subset_415
data_files:
- split: train
path: subset_415/train-*
- config_name: subset_416
data_files:
- split: train
path: subset_416/train-*
- config_name: subset_417
data_files:
- split: train
path: subset_417/train-*
- config_name: subset_418
data_files:
- split: train
path: subset_418/train-*
- config_name: subset_419
data_files:
- split: train
path: subset_419/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_420
data_files:
- split: train
path: subset_420/train-*
- config_name: subset_421
data_files:
- split: train
path: subset_421/train-*
- config_name: subset_422
data_files:
- split: train
path: subset_422/train-*
- config_name: subset_423
data_files:
- split: train
path: subset_423/train-*
- config_name: subset_424
data_files:
- split: train
path: subset_424/train-*
- config_name: subset_425
data_files:
- split: train
path: subset_425/train-*
- config_name: subset_426
data_files:
- split: train
path: subset_426/train-*
- config_name: subset_427
data_files:
- split: train
path: subset_427/train-*
- config_name: subset_428
data_files:
- split: train
path: subset_428/train-*
- config_name: subset_429
data_files:
- split: train
path: subset_429/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_430
data_files:
- split: train
path: subset_430/train-*
- config_name: subset_431
data_files:
- split: train
path: subset_431/train-*
- config_name: subset_432
data_files:
- split: train
path: subset_432/train-*
- config_name: subset_433
data_files:
- split: train
path: subset_433/train-*
- config_name: subset_434
data_files:
- split: train
path: subset_434/train-*
- config_name: subset_435
data_files:
- split: train
path: subset_435/train-*
- config_name: subset_436
data_files:
- split: train
path: subset_436/train-*
- config_name: subset_437
data_files:
- split: train
path: subset_437/train-*
- config_name: subset_438
data_files:
- split: train
path: subset_438/train-*
- config_name: subset_439
data_files:
- split: train
path: subset_439/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_440
data_files:
- split: train
path: subset_440/train-*
- config_name: subset_441
data_files:
- split: train
path: subset_441/train-*
- config_name: subset_442
data_files:
- split: train
path: subset_442/train-*
- config_name: subset_443
data_files:
- split: train
path: subset_443/train-*
- config_name: subset_444
data_files:
- split: train
path: subset_444/train-*
- config_name: subset_445
data_files:
- split: train
path: subset_445/train-*
- config_name: subset_446
data_files:
- split: train
path: subset_446/train-*
- config_name: subset_447
data_files:
- split: train
path: subset_447/train-*
- config_name: subset_448
data_files:
- split: train
path: subset_448/train-*
- config_name: subset_449
data_files:
- split: train
path: subset_449/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_450
data_files:
- split: train
path: subset_450/train-*
- config_name: subset_451
data_files:
- split: train
path: subset_451/train-*
- config_name: subset_452
data_files:
- split: train
path: subset_452/train-*
- config_name: subset_453
data_files:
- split: train
path: subset_453/train-*
- config_name: subset_454
data_files:
- split: train
path: subset_454/train-*
- config_name: subset_455
data_files:
- split: train
path: subset_455/train-*
- config_name: subset_456
data_files:
- split: train
path: subset_456/train-*
- config_name: subset_457
data_files:
- split: train
path: subset_457/train-*
- config_name: subset_458
data_files:
- split: train
path: subset_458/train-*
- config_name: subset_459
data_files:
- split: train
path: subset_459/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_460
data_files:
- split: train
path: subset_460/train-*
- config_name: subset_461
data_files:
- split: train
path: subset_461/train-*
- config_name: subset_462
data_files:
- split: train
path: subset_462/train-*
- config_name: subset_463
data_files:
- split: train
path: subset_463/train-*
- config_name: subset_464
data_files:
- split: train
path: subset_464/train-*
- config_name: subset_465
data_files:
- split: train
path: subset_465/train-*
- config_name: subset_466
data_files:
- split: train
path: subset_466/train-*
- config_name: subset_467
data_files:
- split: train
path: subset_467/train-*
- config_name: subset_468
data_files:
- split: train
path: subset_468/train-*
- config_name: subset_469
data_files:
- split: train
path: subset_469/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_470
data_files:
- split: train
path: subset_470/train-*
- config_name: subset_471
data_files:
- split: train
path: subset_471/train-*
- config_name: subset_472
data_files:
- split: train
path: subset_472/train-*
- config_name: subset_473
data_files:
- split: train
path: subset_473/train-*
- config_name: subset_474
data_files:
- split: train
path: subset_474/train-*
- config_name: subset_475
data_files:
- split: train
path: subset_475/train-*
- config_name: subset_476
data_files:
- split: train
path: subset_476/train-*
- config_name: subset_477
data_files:
- split: train
path: subset_477/train-*
- config_name: subset_478
data_files:
- split: train
path: subset_478/train-*
- config_name: subset_479
data_files:
- split: train
path: subset_479/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_480
data_files:
- split: train
path: subset_480/train-*
- config_name: subset_481
data_files:
- split: train
path: subset_481/train-*
- config_name: subset_482
data_files:
- split: train
path: subset_482/train-*
- config_name: subset_483
data_files:
- split: train
path: subset_483/train-*
- config_name: subset_484
data_files:
- split: train
path: subset_484/train-*
- config_name: subset_485
data_files:
- split: train
path: subset_485/train-*
- config_name: subset_486
data_files:
- split: train
path: subset_486/train-*
- config_name: subset_487
data_files:
- split: train
path: subset_487/train-*
- config_name: subset_488
data_files:
- split: train
path: subset_488/train-*
- config_name: subset_489
data_files:
- split: train
path: subset_489/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_490
data_files:
- split: train
path: subset_490/train-*
- config_name: subset_491
data_files:
- split: train
path: subset_491/train-*
- config_name: subset_492
data_files:
- split: train
path: subset_492/train-*
- config_name: subset_493
data_files:
- split: train
path: subset_493/train-*
- config_name: subset_494
data_files:
- split: train
path: subset_494/train-*
- config_name: subset_495
data_files:
- split: train
path: subset_495/train-*
- config_name: subset_496
data_files:
- split: train
path: subset_496/train-*
- config_name: subset_497
data_files:
- split: train
path: subset_497/train-*
- config_name: subset_498
data_files:
- split: train
path: subset_498/train-*
- config_name: subset_499
data_files:
- split: train
path: subset_499/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_500
data_files:
- split: train
path: subset_500/train-*
- config_name: subset_501
data_files:
- split: train
path: subset_501/train-*
- config_name: subset_502
data_files:
- split: train
path: subset_502/train-*
- config_name: subset_503
data_files:
- split: train
path: subset_503/train-*
- config_name: subset_504
data_files:
- split: train
path: subset_504/train-*
- config_name: subset_505
data_files:
- split: train
path: subset_505/train-*
- config_name: subset_506
data_files:
- split: train
path: subset_506/train-*
- config_name: subset_507
data_files:
- split: train
path: subset_507/train-*
- config_name: subset_508
data_files:
- split: train
path: subset_508/train-*
- config_name: subset_509
data_files:
- split: train
path: subset_509/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_510
data_files:
- split: train
path: subset_510/train-*
- config_name: subset_511
data_files:
- split: train
path: subset_511/train-*
- config_name: subset_512
data_files:
- split: train
path: subset_512/train-*
- config_name: subset_513
data_files:
- split: train
path: subset_513/train-*
- config_name: subset_514
data_files:
- split: train
path: subset_514/train-*
- config_name: subset_515
data_files:
- split: train
path: subset_515/train-*
- config_name: subset_516
data_files:
- split: train
path: subset_516/train-*
- config_name: subset_517
data_files:
- split: train
path: subset_517/train-*
- config_name: subset_518
data_files:
- split: train
path: subset_518/train-*
- config_name: subset_519
data_files:
- split: train
path: subset_519/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_520
data_files:
- split: train
path: subset_520/train-*
- config_name: subset_521
data_files:
- split: train
path: subset_521/train-*
- config_name: subset_522
data_files:
- split: train
path: subset_522/train-*
- config_name: subset_523
data_files:
- split: train
path: subset_523/train-*
- config_name: subset_524
data_files:
- split: train
path: subset_524/train-*
- config_name: subset_525
data_files:
- split: train
path: subset_525/train-*
- config_name: subset_526
data_files:
- split: train
path: subset_526/train-*
- config_name: subset_527
data_files:
- split: train
path: subset_527/train-*
- config_name: subset_528
data_files:
- split: train
path: subset_528/train-*
- config_name: subset_529
data_files:
- split: train
path: subset_529/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_530
data_files:
- split: train
path: subset_530/train-*
- config_name: subset_531
data_files:
- split: train
path: subset_531/train-*
- config_name: subset_532
data_files:
- split: train
path: subset_532/train-*
- config_name: subset_533
data_files:
- split: train
path: subset_533/train-*
- config_name: subset_534
data_files:
- split: train
path: subset_534/train-*
- config_name: subset_535
data_files:
- split: train
path: subset_535/train-*
- config_name: subset_536
data_files:
- split: train
path: subset_536/train-*
- config_name: subset_537
data_files:
- split: train
path: subset_537/train-*
- config_name: subset_538
data_files:
- split: train
path: subset_538/train-*
- config_name: subset_539
data_files:
- split: train
path: subset_539/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_540
data_files:
- split: train
path: subset_540/train-*
- config_name: subset_541
data_files:
- split: train
path: subset_541/train-*
- config_name: subset_542
data_files:
- split: train
path: subset_542/train-*
- config_name: subset_543
data_files:
- split: train
path: subset_543/train-*
- config_name: subset_544
data_files:
- split: train
path: subset_544/train-*
- config_name: subset_545
data_files:
- split: train
path: subset_545/train-*
- config_name: subset_546
data_files:
- split: train
path: subset_546/train-*
- config_name: subset_547
data_files:
- split: train
path: subset_547/train-*
- config_name: subset_548
data_files:
- split: train
path: subset_548/train-*
- config_name: subset_549
data_files:
- split: train
path: subset_549/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_550
data_files:
- split: train
path: subset_550/train-*
- config_name: subset_551
data_files:
- split: train
path: subset_551/train-*
- config_name: subset_552
data_files:
- split: train
path: subset_552/train-*
- config_name: subset_553
data_files:
- split: train
path: subset_553/train-*
- config_name: subset_554
data_files:
- split: train
path: subset_554/train-*
- config_name: subset_555
data_files:
- split: train
path: subset_555/train-*
- config_name: subset_556
data_files:
- split: train
path: subset_556/train-*
- config_name: subset_557
data_files:
- split: train
path: subset_557/train-*
- config_name: subset_558
data_files:
- split: train
path: subset_558/train-*
- config_name: subset_559
data_files:
- split: train
path: subset_559/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_560
data_files:
- split: train
path: subset_560/train-*
- config_name: subset_561
data_files:
- split: train
path: subset_561/train-*
- config_name: subset_562
data_files:
- split: train
path: subset_562/train-*
- config_name: subset_563
data_files:
- split: train
path: subset_563/train-*
- config_name: subset_564
data_files:
- split: train
path: subset_564/train-*
- config_name: subset_565
data_files:
- split: train
path: subset_565/train-*
- config_name: subset_566
data_files:
- split: train
path: subset_566/train-*
- config_name: subset_567
data_files:
- split: train
path: subset_567/train-*
- config_name: subset_568
data_files:
- split: train
path: subset_568/train-*
- config_name: subset_569
data_files:
- split: train
path: subset_569/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_570
data_files:
- split: train
path: subset_570/train-*
- config_name: subset_571
data_files:
- split: train
path: subset_571/train-*
- config_name: subset_572
data_files:
- split: train
path: subset_572/train-*
- config_name: subset_573
data_files:
- split: train
path: subset_573/train-*
- config_name: subset_574
data_files:
- split: train
path: subset_574/train-*
- config_name: subset_575
data_files:
- split: train
path: subset_575/train-*
- config_name: subset_576
data_files:
- split: train
path: subset_576/train-*
- config_name: subset_577
data_files:
- split: train
path: subset_577/train-*
- config_name: subset_578
data_files:
- split: train
path: subset_578/train-*
- config_name: subset_579
data_files:
- split: train
path: subset_579/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_580
data_files:
- split: train
path: subset_580/train-*
- config_name: subset_581
data_files:
- split: train
path: subset_581/train-*
- config_name: subset_582
data_files:
- split: train
path: subset_582/train-*
- config_name: subset_583
data_files:
- split: train
path: subset_583/train-*
- config_name: subset_584
data_files:
- split: train
path: subset_584/train-*
- config_name: subset_585
data_files:
- split: train
path: subset_585/train-*
- config_name: subset_586
data_files:
- split: train
path: subset_586/train-*
- config_name: subset_587
data_files:
- split: train
path: subset_587/train-*
- config_name: subset_588
data_files:
- split: train
path: subset_588/train-*
- config_name: subset_589
data_files:
- split: train
path: subset_589/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_590
data_files:
- split: train
path: subset_590/train-*
- config_name: subset_591
data_files:
- split: train
path: subset_591/train-*
- config_name: subset_592
data_files:
- split: train
path: subset_592/train-*
- config_name: subset_593
data_files:
- split: train
path: subset_593/train-*
- config_name: subset_594
data_files:
- split: train
path: subset_594/train-*
- config_name: subset_595
data_files:
- split: train
path: subset_595/train-*
- config_name: subset_596
data_files:
- split: train
path: subset_596/train-*
- config_name: subset_597
data_files:
- split: train
path: subset_597/train-*
- config_name: subset_598
data_files:
- split: train
path: subset_598/train-*
- config_name: subset_599
data_files:
- split: train
path: subset_599/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_600
data_files:
- split: train
path: subset_600/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
alvations/c4p0-x1-es-en | alvations | "2024-03-24T03:52:27Z" | 13,809 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-23T10:16:40Z" | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: timestamp[us]
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 3583
num_examples: 4
download_size: 14029
dataset_size: 3583
configs:
- config_name: default
data_files:
- split: train
path: d595973734f0a325/train-*
---
|
daiua/video | daiua | "2023-11-20T02:05:48Z" | 13,771 | 1 | [
"license:other",
"modality:video",
"region:us"
] | null | "2023-11-16T06:27:40Z" | ---
license: other
license_name: '11111'
license_link: LICENSE
---
|
HAERAE-HUB/KMMLU-HARD | HAERAE-HUB | "2024-03-09T23:46:06Z" | 13,716 | 8 | [
"task_categories:question-answering",
"language:ko",
"license:cc-by-nd-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2402.11548",
"region:us",
"haerae",
"mmlu"
] | [
"question-answering"
] | "2024-01-12T05:49:07Z" | ---
configs:
- config_name: maritime_engineering
data_files:
- split: dev
path: data/maritime_engineering-dev.csv
- split: test
path: data/maritime_engineering-hard-test.csv
- config_name: materials_engineering
data_files:
- split: dev
path: data/materials_engineering-dev.csv
- split: test
path: data/materials_engineering-hard-test.csv
- config_name: railway_and_automotive_engineering
data_files:
- split: dev
path: data/railway_and_automotive_engineering-dev.csv
- split: test
path: data/railway_and_automotive_engineering-hard-test.csv
- config_name: biology
data_files:
- split: dev
path: data/biology-dev.csv
- split: test
path: data/biology-hard-test.csv
- config_name: public_safety
data_files:
- split: dev
path: data/public_safety-dev.csv
- split: test
path: data/public_safety-hard-test.csv
- config_name: criminal_law
data_files:
- split: dev
path: data/criminal_law-dev.csv
- split: test
path: data/criminal_law-hard-test.csv
- config_name: information_technology
data_files:
- split: dev
path: data/information_technology-dev.csv
- split: test
path: data/information_technology-hard-test.csv
- config_name: geomatics
data_files:
- split: dev
path: data/geomatics-dev.csv
- split: test
path: data/geomatics-hard-test.csv
- config_name: management
data_files:
- split: dev
path: data/management-dev.csv
- split: test
path: data/management-hard-test.csv
- config_name: math
data_files:
- split: dev
path: data/math-dev.csv
- split: test
path: data/math-hard-test.csv
- config_name: accounting
data_files:
- split: dev
path: data/accounting-dev.csv
- split: test
path: data/accounting-hard-test.csv
- config_name: chemistry
data_files:
- split: dev
path: data/chemistry-dev.csv
- split: test
path: data/chemistry-hard-test.csv
- config_name: nondestructive_testing
data_files:
- split: dev
path: data/nondestructive_testing-dev.csv
- split: test
path: data/nondestructive_testing-hard-test.csv
- config_name: computer_science
data_files:
- split: dev
path: data/computer_science-dev.csv
- split: test
path: data/computer_science-hard-test.csv
- config_name: ecology
data_files:
- split: dev
path: data/ecology-dev.csv
- split: test
path: data/ecology-hard-test.csv
- config_name: health
data_files:
- split: dev
path: data/health-dev.csv
- split: test
path: data/health-hard-test.csv
- config_name: political_science_and_sociology
data_files:
- split: dev
path: data/political_science_and_sociology-dev.csv
- split: test
path: data/political_science_and_sociology-hard-test.csv
- config_name: patent
data_files:
- split: dev
path: data/patent-dev.csv
- split: test
path: data/patent-hard-test.csv
- config_name: electrical_engineering
data_files:
- split: dev
path: data/electrical_engineering-dev.csv
- split: test
path: data/electrical_engineering-hard-test.csv
- config_name: electronics_engineering
data_files:
- split: dev
path: data/electronics_engineering-dev.csv
- split: test
path: data/electronics_engineering-hard-test.csv
- config_name: korean_history
data_files:
- split: dev
path: data/korean_history-dev.csv
- split: test
path: data/korean_history-hard-test.csv
- config_name: gas_technology_and_engineering
data_files:
- split: dev
path: data/gas_technology_and_engineering-dev.csv
- split: test
path: data/gas_technology_and_engineering-hard-test.csv
- config_name: machine_design_and_manufacturing
data_files:
- split: dev
path: data/machine_design_and_manufacturing-dev.csv
- split: test
path: data/machine_design_and_manufacturing-hard-test.csv
- config_name: chemical_engineering
data_files:
- split: dev
path: data/chemical_engineering-dev.csv
- split: test
path: data/chemical_engineering-hard-test.csv
- config_name: telecommunications_and_wireless_technology
data_files:
- split: dev
path: data/telecommunications_and_wireless_technology-dev.csv
- split: test
path: data/telecommunications_and_wireless_technology-hard-test.csv
- config_name: food_processing
data_files:
- split: dev
path: data/food_processing-dev.csv
- split: test
path: data/food_processing-hard-test.csv
- config_name: social_welfare
data_files:
- split: dev
path: data/social_welfare-dev.csv
- split: test
path: data/social_welfare-hard-test.csv
- config_name: real_estate
data_files:
- split: dev
path: data/real_estate-dev.csv
- split: test
path: data/real_estate-hard-test.csv
- config_name: marketing
data_files:
- split: dev
path: data/marketing-dev.csv
- split: test
path: data/marketing-hard-test.csv
- config_name: mechanical_engineering
data_files:
- split: dev
path: data/mechanical_engineering-dev.csv
- split: test
path: data/mechanical_engineering-hard-test.csv
- config_name: fashion
data_files:
- split: dev
path: data/fashion-dev.csv
- split: test
path: data/fashion-hard-test.csv
- config_name: psychology
data_files:
- split: dev
path: data/psychology-dev.csv
- split: test
path: data/psychology-hard-test.csv
- config_name: taxation
data_files:
- split: dev
path: data/taxation-dev.csv
- split: test
path: data/taxation-hard-test.csv
- config_name: environmental_science
data_files:
- split: dev
path: data/environmental_science-dev.csv
- split: test
path: data/environmental_science-hard-test.csv
- config_name: refrigerating_machinery
data_files:
- split: dev
path: data/refrigerating_machinery-dev.csv
- split: test
path: data/refrigerating_machinery-hard-test.csv
- config_name: education
data_files:
- split: dev
path: data/education-dev.csv
- split: test
path: data/education-hard-test.csv
- config_name: industrial_engineer
data_files:
- split: dev
path: data/industrial_engineer-dev.csv
- split: test
path: data/industrial_engineer-hard-test.csv
- config_name: civil_engineering
data_files:
- split: dev
path: data/civil_engineering-dev.csv
- split: test
path: data/civil_engineering-hard-test.csv
- config_name: energy_management
data_files:
- split: dev
path: data/energy_management-dev.csv
- split: test
path: data/energy_management-hard-test.csv
- config_name: law
data_files:
- split: dev
path: data/law-dev.csv
- split: test
path: data/law-hard-test.csv
- config_name: agricultural_sciences
data_files:
- split: dev
path: data/agricultural_sciences-dev.csv
- split: test
path: data/agricultural_sciences-hard-test.csv
- config_name: interior_architecture_and_design
data_files:
- split: dev
path: data/interior_architecture_and_design-dev.csv
- split: test
path: data/interior_architecture_and_design-hard-test.csv
- config_name: aviation_engineering_and_maintenance
data_files:
- split: dev
path: data/aviation_engineering_and_maintenance-dev.csv
- split: test
path: data/aviation_engineering_and_maintenance-hard-test.csv
- config_name: construction
data_files:
- split: dev
path: data/construction-dev.csv
- split: test
path: data/construction-hard-test.csv
- config_name: economics
data_files:
- split: dev
path: data/economics-dev.csv
- split: test
path: data/economics-hard-test.csv
license: cc-by-nd-4.0
task_categories:
- question-answering
language:
- ko
tags:
- haerae
- mmlu
size_categories:
- 100K<n<1M
---
### KMMLU (Korean-MMLU)
We propose KMMLU, a new Korean benchmark with 35,030 expert-level multiple-choice questions across 45 subjects ranging from humanities to STEM.
Unlike previous Korean benchmarks that are translated from existing English benchmarks, KMMLU is collected from original Korean exams, capturing linguistic and cultural aspects of the Korean language.
We test 26 publicly available and proprietary LLMs, identifying significant room for improvement.
The best publicly available model achieves 50.54% on KMMLU, far below the average human performance of 62.6%.
This model was primarily trained for English and Chinese, not Korean.
Current LLMs tailored to Korean, such as Polyglot-Ko, perform far worse. Surprisingly, even the most capable proprietary LLMs, e.g., GPT-4 and HyperCLOVA X, achieve 59.95% and 53.40%, respectively.
This suggests that further work is needed to improve Korean LLMs, and KMMLU offers the right tool to track this progress.
We make our dataset publicly available on the Hugging Face Hub and integrate the benchmark into EleutherAI's Language Model Evaluation Harness.
Link to Paper: [KMMLU: Measuring Massive Multitask Language Understanding in Korean](https://arxiv.org/abs/2402.11548)
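A minimal loading sketch with 🤗 Datasets; the repository ID `HAERAE-HUB/KMMLU` is an assumption, and each config declared above exposes `dev` and `test` splits:
```python
from datasets import load_dataset

# Repository ID assumed to be HAERAE-HUB/KMMLU; adjust if this card lives elsewhere.
kmmlu_biology = load_dataset("HAERAE-HUB/KMMLU", "biology")

print(kmmlu_biology)             # DatasetDict with 'dev' and 'test' splits
print(kmmlu_biology["test"][0])  # one multiple-choice question
```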
### KMMLU Statistics
| Category | # Questions |
|------------------------------|-------------|
| **Prerequisites** | |
| None | 59,909 |
| 1 Prerequisite Test | 12,316 |
| 2 Prerequisite Tests | 776 |
| 2+ Years of Experience | 65,135 |
| 4+ Years of Experience | 98,678 |
| 9+ Years of Experience | 6,963 |
| **Question Type** | |
| Positive | 207,030 |
| Negation | 36,777 |
| **Split** | |
| Train | 208,522 |
| Validation | 225 |
| Test | 35,030 |
| **Total** | 243,777 |
### Categories
To reimplement the categories in the paper, refer to the following:
```
supercategories = {
"accounting": "HUMSS",
"agricultural_sciences": "Other",
"aviation_engineering_and_maintenance": "Applied Science",
"biology": "STEM",
"chemical_engineering": "STEM",
"chemistry": "STEM",
"civil_engineering": "STEM",
"computer_science": "STEM",
"construction": "Other",
"criminal_law": "HUMSS",
"ecology": "STEM",
"economics": "HUMSS",
"education": "HUMSS",
"electrical_engineering": "STEM",
"electronics_engineering": "Applied Science",
"energy_management": "Applied Science",
"environmental_science": "Applied Science",
"fashion": "Other",
"food_processing": "Other",
"gas_technology_and_engineering": "Applied Science",
"geomatics": "Applied Science",
"health": "Other",
"industrial_engineer": "Applied Science",
"information_technology": "STEM",
"interior_architecture_and_design": "Other",
"law": "HUMSS",
"machine_design_and_manufacturing": "Applied Science",
"management": "HUMSS",
"maritime_engineering": "Applied Science",
"marketing": "Other",
"materials_engineering": "STEM",
"mechanical_engineering": "STEM",
"nondestructive_testing": "Applied Science",
"patent": "Other",
"political_science_and_sociology": "HUMSS",
"psychology": "HUMSS",
"public_safety": "Other",
"railway_and_automotive_engineering": "Applied Science",
"real_estate": "Other",
"refrigerating_machinery": "Other",
"social_welfare": "HUMSS",
"taxation": "HUMSS",
"telecommunications_and_wireless_technology": "Applied Science",
"korean_history": "HUMSS",
"math": "STEM"
}
```
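For illustration, this mapping can be used to roll per-subject scores up into supercategory averages. The sketch below reuses the `supercategories` dict defined above; the `subject_accuracy` dict is a hypothetical input, e.g. produced by an evaluation harness:
```python
from collections import defaultdict

# Hypothetical per-subject accuracies (e.g. from lm-evaluation-harness results).
subject_accuracy = {"biology": 0.55, "law": 0.48, "math": 0.42}

grouped = defaultdict(list)
for subject, accuracy in subject_accuracy.items():
    grouped[supercategories[subject]].append(accuracy)

supercategory_accuracy = {name: sum(accs) / len(accs) for name, accs in grouped.items()}
print(supercategory_accuracy)  # e.g. {'STEM': 0.485, 'HUMSS': 0.48}
```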
### Point of Contact
For any questions, contact us via the following email:
```
[email protected]
``` |
Gen-Verse/WideRange4D | Gen-Verse | "2025-03-25T00:37:31Z" | 13,709 | 10 | [
"task_categories:image-to-video",
"license:unknown",
"size_categories:10K<n<100K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"arxiv:2503.13435",
"region:us",
"4d-reconstruction",
"gaussian-splatting"
] | [
"image-to-video"
] | "2025-03-11T02:02:06Z" | ---
task_categories:
- image-to-video
tags:
- 4d-reconstruction
- gaussian-splatting
license: unknown
---
<div align="center">
# WideRange4D: Enabling High-Quality 4D Reconstruction with Wide-Range Movements and Scenes

[**Github Page**](https://github.com/Gen-Verse/WideRange4D) | [**arXiv Paper**](https://arxiv.org/abs/2503.13435)
[Ling Yang](https://yangling0818.github.io)<sup>1*</sup>, [Kaixin Zhu](https://chriszkxxx.github.io)<sup>1*</sup>, [Juanxi Tian](https://tianshijing.github.io)<sup>1*</sup>, [Bohan Zeng](https://scholar.google.com/citations?user=MHo_d3YAAAAJ&hl=en)<sup>1*</sup>, [Mingbao Lin](http://lmb.bjbxit.cn/)<sup>3</sup>, [Hongjuan Pei](https://openreview.net/profile?id=~Hongjuan_Pei1)<sup>2</sup>, [Wentao Zhang](https://zwt233.github.io)<sup>1‡</sup>, [Shuicheng Yan](http://yanshuicheng.info)<sup>3‡</sup>
<sup>1</sup> Peking University   <sup>2</sup> University of the Chinese Academy of Sciences   <sup>3</sup> National University of Singapore
<sup>*</sup> Equal Contributions. <sup>‡</sup> Corresponding Author.
</div>
## Example
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/670880950e79a8b46f7ff9dd/b_29Bkf1mWcVWPqhOsut4.mp4"></video>
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6708920aeae29d1cd41a703b/cyyvOw8hjvQ6JU-uvngtT.mp4"></video>
```
@article{yang2025widerange4d,
title={WideRange4D: Enabling High-Quality 4D Reconstruction with Wide-Range Movements and Scenes},
author={Yang, Ling and Zhu, Kaixin and Tian, Juanxi and Zeng, Bohan and Lin, Mingbao and Pei, Hongjuan and Zhang, Wentao and Yan, Shuicheng},
journal={arXiv preprint arXiv:2503.13435},
year={2025}
}
``` |
taesiri/arxiv_audio | taesiri | "2023-12-22T04:56:39Z" | 13,690 | 2 | [
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:text",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-07-03T03:46:57Z" | ---
license: cc-by-4.0
---
|
EleutherAI/race | EleutherAI | "2023-07-03T21:27:18Z" | 13,677 | 6 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1704.04683",
"region:us"
] | [
"multiple-choice"
] | "2023-07-03T13:20:38Z" | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: RACE
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: race
dataset_info:
---
# "race" Grouped by Article
This is a modified version of https://huggingface.co/datasets/race that returns documents grouped by article context instead of by question. **Note:** This dataset currently only contains the test set of the ```high``` subset of the data.
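A minimal loading sketch; the config name `high` and the `test` split are assumptions based on the note above, and the feature names may differ from the per-question original:
```python
from datasets import load_dataset

# Config/split names are assumptions: only the 'high' test set is said to be available.
race_grouped = load_dataset("EleutherAI/race", "high", split="test")

print(race_grouped)
print(race_grouped[0].keys())  # inspect how questions are grouped per article
```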
The original readme is contained below.
# Dataset Card for "race"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.cs.cmu.edu/~glai1/data/race/](http://www.cs.cmu.edu/~glai1/data/race/)
- **Repository:** https://github.com/qizhex/RACE_AR_baselines
- **Paper:** [RACE: Large-scale ReAding Comprehension Dataset From Examinations](https://arxiv.org/abs/1704.04683)
- **Point of Contact:** [Guokun Lai](mailto:[email protected]), [Qizhe Xie](mailto:[email protected])
- **Size of downloaded dataset files:** 76.33 MB
- **Size of the generated dataset:** 349.46 MB
- **Total amount of disk used:** 425.80 MB
### Dataset Summary
RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
dataset is collected from English examinations in China, which are designed for middle school and high school students.
The dataset can be served as the training and test sets for machine comprehension.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 174.73 MB
- **Total amount of disk used:** 200.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### high
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 140.12 MB
- **Total amount of disk used:** 165.56 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### middle
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 34.61 MB
- **Total amount of disk used:** 60.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "B",
"article": "\"There is not enough oil in the world now. As time goes by, it becomes less and less, so what are we going to do when it runs ou...",
"example_id": "middle3.txt",
"options": ["There is more petroleum than we can use now.", "Trees are needed for some other things besides making gas.", "We got electricity from ocean tides in the old days.", "Gas wasn't used to run cars in the Second World War."],
"question": "According to the passage, which of the following statements is TRUE?"
}
```
### Data Fields
The data fields are the same among all splits.
#### all
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### high
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### middle
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
### Data Splits
| name |train|validation|test|
|------|----:|---------:|---:|
|all |87866| 4887|4934|
|high |62445| 3451|3498|
|middle|25421| 1436|1436|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
http://www.cs.cmu.edu/~glai1/data/race/
1. RACE dataset is available for non-commercial research purpose only.
2. All passages are obtained from the Internet which is not property of Carnegie Mellon University. We are not responsible for the content nor the meaning of these passages.
3. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.
4. We reserve the right to terminate your access to the RACE dataset at any time.
### Citation Information
```
@inproceedings{lai-etal-2017-race,
title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
author = "Lai, Guokun and
Xie, Qizhe and
Liu, Hanxiao and
Yang, Yiming and
Hovy, Eduard",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D17-1082",
doi = "10.18653/v1/D17-1082",
pages = "785--794",
}
```
### Contributions
Thanks to [@abarbosa94](https://github.com/abarbosa94), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
alvations/c4p0-x1-fr-en | alvations | "2024-03-24T03:54:05Z" | 13,644 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-23T10:15:37Z" | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: timestamp[us]
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 9747
num_examples: 7
download_size: 21197
dataset_size: 9747
configs:
- config_name: default
data_files:
- split: train
path: 02b9d52b317f351f/train-*
---
|
facebook/natural_reasoning | facebook | "2025-02-21T06:02:40Z" | 13,626 | 455 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13124",
"region:us"
] | [
"text-generation"
] | "2025-01-30T23:29:32Z" | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
pretty_name: Natural Reasoning
size_categories:
- 1M<n<10M
---
[NaturalReasoning](https://arxiv.org/abs/2502.13124) is a large-scale dataset for general reasoning tasks. It consists of high-quality challenging reasoning questions backtranslated from pretraining corpora [DCLM](https://github.com/mlfoundations/dclm) and [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath). The questions have been deduplicated and decontaminated from popular reasoning benchmarks including MATH, GPQA, MMLU-Pro, MMLU-STEM. For each question, we extract the reference final answer from the original document from the pretraining corpora if possible. We also provide a model-generated response from Llama3.3-70B-Instruct.
We release a 1.1 million-question subset of NaturalReasoning to the research community to foster research on training strong LLM reasoners.
You can load the dataset as follows
```python
from datasets import load_dataset
ds = load_dataset("facebook/natural_reasoning")
```
For more information regarding data collection, please refer to our [paper](https://arxiv.org/abs/2502.13124).
## Reference Answer Statistics
In this 1.1 million-question subset, 18.29% of the questions have no reference answer, 9.71% have a single-word answer, 21.58% have a short answer, and 50.42% have a long reference answer.
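A rough sketch of how such statistics could be recomputed; the `train` split name, the `reference_answer` column name, and the 10-word short/long cutoff are assumptions and should be checked against the actual features:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("facebook/natural_reasoning", split="train")  # assumed split name

def bucket(answer):
    words = (answer or "").strip().split()
    if not words:
        return "no_reference"
    if len(words) == 1:
        return "single_word"
    return "short" if len(words) <= 10 else "long"  # 10-word cutoff is arbitrary

counts = Counter(bucket(ans) for ans in ds["reference_answer"])  # assumed column name
total = sum(counts.values())
print({name: f"{100 * n / total:.2f}%" for name, n in counts.items()})
```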
## Scaling Curve
Training on NaturalReasoning shows better scaling effects than training on other datasets when training Llama3.1-8B-Instruct model. In particular, we measure the average performance on three benchmarks: MATH, GPQA, MMLU-Pro.
<img src="https://cdn-uploads.huggingface.co/production/uploads/659a395421a7431643caedda/S6aO-agjRRhc0JLkohZ5z.jpeg" style="width:50%; max-width:400px;">
## Citation
If you use data from NaturalReasoning, please cite with the following BibTex entry:
```
@misc{yuan2025naturalreasoningreasoningwild28m,
title={NaturalReasoning: Reasoning in the Wild with 2.8M Challenging Questions},
author={Weizhe Yuan and Jane Yu and Song Jiang and Karthik Padthe and Yang Li and Dong Wang and Ilia Kulikov and Kyunghyun Cho and Yuandong Tian and Jason E Weston and Xian Li},
year={2025},
eprint={2502.13124},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.13124},
}
```
|
databricks/databricks-dolly-15k | databricks | "2023-06-30T18:34:13Z" | 13,570 | 800 | [
"task_categories:question-answering",
"task_categories:summarization",
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.02155",
"region:us"
] | [
"question-answering",
"summarization"
] | "2023-04-11T16:43:13Z" | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
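A minimal sketch for stripping those bracketed citation markers from the `context` field; the regex is a simple heuristic:
```python
import re
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

def strip_citations(example):
    # Remove bracketed Wikipedia citation numbers such as "[42]" from the reference text.
    example["context"] = re.sub(r"\[\d+\]", "", example["context"] or "")
    return example

dolly = dolly.map(strip_citations)
```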
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. |
qmeeus/voxpopuli | qmeeus | "2024-02-06T23:13:46Z" | 13,547 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-06T15:17:31Z" | ---
dataset_info:
- config_name: de
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 61603981153.568
num_examples: 108473
- name: validation
num_bytes: 1149586917.507
num_examples: 2109
download_size: 52060225655
dataset_size: 62753568071.075
- config_name: es
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 36533665201.936
num_examples: 50922
- name: validation
num_bytes: 1173444834.383
num_examples: 1631
download_size: 1005381345
dataset_size: 37707110036.319
- config_name: fr
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 48346650213.26
num_examples: 73561
- name: validation
num_bytes: 1149779276.605
num_examples: 1727
download_size: 17314564262
dataset_size: 49496429489.865005
- config_name: nl
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 10436544940.608
num_examples: 20968
- name: validation
num_bytes: 636925883.64
num_examples: 1230
download_size: 9404833804
dataset_size: 11073470824.248
configs:
- config_name: de
data_files:
- split: train
path: de/train-*
- split: validation
path: de/validation-*
- config_name: es
data_files:
- split: train
path: es/train-*
- split: validation
path: es/validation-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- split: validation
path: fr/validation-*
- config_name: nl
data_files:
- split: train
path: nl/train-*
- split: validation
path: nl/validation-*
---
# Dataset Card for "voxpopuli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
parrotzone/sdxl-1.0 | parrotzone | "2023-09-20T12:27:51Z" | 13,462 | 11 | [
"license:openrail++",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-07-31T07:18:18Z" | ---
license: openrail++
---
# check [sdxl.parrotzone.art](https://sdxl.parrotzone.art) for easy viewing ⋆。°✩
---
## all images were made with SDXL 1.0 + the 0.9 VAE
- steps: 20
- cfg scale: 7
- no refiner
- random seeds
|
mlfoundations/datacomp_1b | mlfoundations | "2023-08-21T21:43:05Z" | 13,456 | 33 | [
"license:cc-by-4.0",
"size_categories:1B<n<10B",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-06-11T20:12:44Z" | ---
license: cc-by-4.0
---
## DataComp-1B
This repository contains metadata files for DataComp-1B. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage. |
Anthropic/hh-rlhf | Anthropic | "2023-05-26T18:47:34Z" | 13,433 | 1,300 | [
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2204.05862",
"region:us",
"human-feedback"
] | null | "2022-12-08T20:11:33Z" | ---
license: mit
tags:
- human-feedback
---
# Dataset Card for HH-RLHF
## Dataset Summary
This repository provides access to two different kinds of data:
1. Human preference data about helpfulness and harmlessness from [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). These data are meant to train preference (or reward) models for subsequent RLHF training. These data are *not* meant for supervised training of dialogue agents. Training dialogue agents on these data is likely to lead to harmful models and this should be avoided.
2. Human-generated and annotated red teaming dialogues from [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf). These data are meant to understand how crowdworkers red team models and what types of red team attacks are successful or not. The data are *not* meant for fine-tuning or preference modeling (use the data above for preference modeling). These data are entire transcripts of conversations that are derived from the harmlessness preference modeling data described above, where only the chosen response is incorporated into the overall transcript. Furthermore, the transcripts are annotated with human and automated measurements of how harmful the overall dialogues are.
**Disclaimer**: The data (especially the harmlessness preference data and the red team data) contain content that may be offensive or upsetting. Topics include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially upsetting subject matter. Please only engage with the data in accordance with your own personal risk tolerance. The data are intended for research purposes, especially research that can make models *less* harmful. The views expressed in the data do not reflect the views of Anthropic or any of its employees. As mentioned above, these data are *not* intended for training dialogue agents as this will likely lead to harmful model behavior.
Each of these datasets are described further below.
## Human preference data about helpfulness and harmlessness (PM Data)
The data are described in the paper: [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). If you find the data useful, please cite the paper. The data format is very simple -- each line of the jsonl files contains a pair of texts, one "chosen" and one "rejected".
For **helpfulness**, the data are grouped into train/test splits in three tranches: from our base models (context-distilled 52B language models), via rejection sampling (mostly with best-of-16 sampling) against an early preference model, and a dataset sampled during our iterated "online" process.
For **harmlessness**, the data are only collected for our base models, but otherwise formatted in the same way.
Details about the data collection process and crowdworker population can be found in the paper, specifically in section 2 and appendix D.
## Red teaming data (not PM Data)
The data are described in the paper: [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf). If you find the data useful, please cite the paper. At a high level, these data can be examined to understand what successful red team attempts look like.
Details about the data and data collection procedures can be found in the Datasheet in the appendix of the paper.
Each line of the jsonl file contains a dictionary with the following fields:
- `transcript` a text transcript of a conversation between a human adversary (red team member) and an AI assistant
- `min_harmlessness_score_transcript` a real value score of the harmlessness of the AI assistant (lower is more harmful) as obtained from a preference model
- `num_params` number of parameters in the language model powering the AI assistant
- `model_type` type of model powering the AI assistant
- `rating` the red team member's rating of how successful they were at breaking the AI assistant (Likert scale, higher is more successful)
- `task_description` a short text description written by the red team member about how they tried to red team the AI assistant
- `task_description_harmlessness_score` a real value score of the harmlessness of the task description (lower is more harmful) as obtained from a preference model
- `red_team_member_id` an arbitrary identifier of the red team member. one red team member can generate multiple red team attacks
- `is_upworker` a binary indicator that is true if the red team member was from the crowd platform Upwork or false if they were from MTurk
- `tags` a list of up to 6 tags per transcript. tags are short descriptions of the red team attempts generated by crowdworkers who reviewed red team data post-hoc. tags were only provided for a random sample of 1000 red team attempts for two of four model types.
## Usage
Each of the above datasets is located in a separate sub-directory. To load an individual subset, use the `data_dir` argument of the `load_dataset()` function as follows:
```python
from datasets import load_dataset
# Load all helpfulness/harmless subsets (share the same schema)
dataset = load_dataset("Anthropic/hh-rlhf")
# Load one of the harmless subsets
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")
# Load the red teaming subset
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="red-team-attempts")
```
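Each record of the preference subsets holds one `chosen` and one `rejected` text (as described above), so a quick inspection sketch looks like this:
```python
from datasets import load_dataset

harmless_base = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base", split="train")

# Print the first few preference pairs (truncated for readability).
for example in harmless_base.select(range(3)):
    print("CHOSEN:  ", example["chosen"][:80].replace("\n", " "))
    print("REJECTED:", example["rejected"][:80].replace("\n", " "))
```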
## Contact
The original authors host this dataset on GitHub here: https://github.com/anthropics/hh-rlhf
You can submit inquiries to: [email protected] |
Avelina/smollm-corpus | Avelina | "2025-01-11T16:41:28Z" | 13,421 | 5 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"region:us"
] | [
"text-generation"
] | "2025-01-11T01:39:39Z" | ---
license: odc-by
dataset_info:
- config_name: default
features:
- name: text
dtype: string
configs:
- config_name: default
data_files:
- split: train
path: data*/train-*
task_categories:
- text-generation
language:
- en
size_categories:
- 100M<n<1B
---
# SmolLM-Corpus: Now shuffled and sharded!
This is a version of the SmolLM-Corpus where the 3 subsets have been interleaved, shuffled and sharded as 23698 `jsonl.zst` files for easy streaming!
The dataset is comprised of the `cosmopedia-v2` and `fineweb-edu-dedup` subsets from the original [SmolLM-Corpus repo](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus), with the `python-edu` subset being pulled from my [python-edu repo](https://huggingface.co/datasets/Avelina/python-edu).
## Dataset Structure
The dataset is split into 24 subdirectories, with the first 23 containing 1000 shards and the 24th containing the final 698. The repository is structured as follows:
```
data00/
├── train-00000-of-23698.jsonl.zst
├── ...
└── train-00999-of-23698.jsonl.zst
data01/
├── train-01000-of-23698.jsonl.zst
├── ...
└── train-01999-of-23698.jsonl.zst
...
data22/
├── train-22000-of-23698.jsonl.zst
├── ...
└── train-22999-of-23698.jsonl.zst
data23/
├── train-23000-of-23698.jsonl.zst
├── ...
└── train-23697-of-23698.jsonl.zst
```
In general, you can obtain the exact download URL for all shards using the following python function:
```py
def get_url_from_shard( index: int ) -> str:
if index >= 23_698:
raise ValueError( f'Shard index must be less than 23,698 but received {index}' )
group = index // 1000
return f'https://huggingface.co/datasets/Avelina/smollm-corpus/resolve/main/data{group:02d}/train-{index:05d}-of-23698.jsonl.zst'
```
## Generation Code
Here is the code which was used to generate the shuffled shards. Note the use of non-contiguous interleaving in an attempt to uniformly pull documents from across entire subsets to loosely decouple shard index from original document position.
Please make sure you `pip install zstandard`!!!
```py
import tqdm
import datasets
from datasets import load_dataset
# Output directory and file format. Note that the file extension enforces zst compression is used.
OUTPUT_FMT = '/YOUR/FILE/PATH/HERE/data/train-{index:05d}-of-{num_shards:05d}.jsonl.zst'
# Total number of shards giving approximately 10,000 documents per shard
OUTPUT_NUM_SHARDS = 23698
# Grab the three datasets
ds_python = load_dataset( 'Avelina/python-edu' )
ds_cosmo = load_dataset( 'HuggingFaceTB/smollm-corpus', 'cosmopedia-v2' )
ds_edu = load_dataset( 'HuggingFaceTB/smollm-corpus', 'fineweb-edu-dedup' )
# Retain only the text columns and the train splits
ds_python = ds_python.select_columns( 'text' )[ 'train' ]
ds_cosmo = ds_cosmo.select_columns( 'text' )[ 'train' ]
ds_edu = ds_edu.select_columns( 'text' )[ 'train' ]
# Iterate over all shards with a nice progbar
for index in tqdm.tqdm( range( OUTPUT_NUM_SHARDS ) ):
# Get non-contiguous in-memory sub-shards for the three datasets
curr_python = ds_python.shard( num_shards=OUTPUT_NUM_SHARDS, index=index, contiguous=False, keep_in_memory=True )
curr_cosmo = ds_cosmo.shard( num_shards=OUTPUT_NUM_SHARDS, index=index, contiguous=False, keep_in_memory=True )
curr_edu = ds_edu.shard( num_shards=OUTPUT_NUM_SHARDS, index=index, contiguous=False, keep_in_memory=True )
# Concatenate the sub-shards
curr_shard = datasets.concatenate_datasets( [ curr_python, curr_cosmo, curr_edu ] )
# Deterministically shuffle using the current shard index for reproducibility
curr_shard = curr_shard.shuffle( seed=index, keep_in_memory=True )
# Dump the shards to .jsonl.zst
curr_shard.to_json( OUTPUT_FMT.format( index=index, num_shards=OUTPUT_NUM_SHARDS ) )
```
## In-Memory Decompression
Zstandard was chosen as it enables trivial in-memory decompression to minimise the storage impact of the dataset. Here is some example code which creates a python generator that yields each json line from a compressed shard stored at `file_name`, and a second function which creates a python generator that parses and yields the compressed shard.
```py
import json
from json import JSONDecodeError
import zstandard
def read_lines_zst( file_name ):
# Open the file for reading in binary mode
with open( file_name, 'rb' ) as file_handle:
# Initialise an empty buffer
buffer = ''
# Create a reader for the opened file
reader = zstandard.ZstdDecompressor( max_window_size=2**31 ).stream_reader( file_handle )
while True:
# Read a chunk of up to 128MB
chunk = reader.read( 2**27 ).decode()
# If chunk is empty we've reached the end of the file and can break out
if not chunk:
break
# Combine any prior buffer with the current chunk and split by newline
lines = ( buffer + chunk ).split( '\n' )
# Yield the full lines so far
for line in lines[ : -1 ]:
yield line
# The last 'line' is incomplete, so place in buffer for next chunk
buffer = lines[ -1 ]
# Always remember to close your reader!
reader.close()
def parse_jsonl_zst( file_name ):
# Iterate over the yielded lines of the compressed shard
for i, line in enumerate( read_lines_zst( file_name ) ):
try:
# Convert the line into a python dict and yield the text field
yield json.loads( line )[ 'text' ]
except ( KeyError, JSONDecodeError ):
# Catch KeyError for 'text' not present in dict
# Catch JSONDecodeError for malformed line
print( f'JSON error @ shard={file_name}, line={i}' )
```
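For example, once a shard has been downloaded locally, the `parse_jsonl_zst` generator above can be consumed directly; the file name below is just the first shard and is assumed to sit in the working directory:
```python
# Iterate over the first few documents of one locally downloaded shard.
for i, text in enumerate(parse_jsonl_zst('train-00000-of-23698.jsonl.zst')):
    print(text[:100])
    if i >= 2:
        break
```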
Of course you *could* use HuggingFace's in-built streaming mechanics to handle things for you, but in my experience that approach is less reliable, doesn't handle `JSONDecodeError`s if there are malformed lines, can cause memory leaks, and has forced sharding behaviour when used inside a multi-worker PyTorch `DataLoader` which I've not yet found a way to disable! |
pretraining-playground/pythia-training-metrics | pretraining-playground | "2024-05-07T14:17:41Z" | 13,318 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-02T10:28:12Z" | ---
license: apache-2.0
---
|
taesiri/arxiv_db | taesiri | "2024-05-07T22:36:44Z" | 13,285 | 4 | [
"license:cc-by-nc-4.0",
"region:us"
] | null | "2023-10-26T23:53:25Z" | ---
license: cc-by-nc-4.0
---
|
Cnam-LMSSC/vibravox | Cnam-LMSSC | "2024-11-06T16:02:47Z" | 13,240 | 17 | [
"task_categories:audio-to-audio",
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:text-to-speech",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"language:fr",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2407.11828",
"arxiv:2006.11477",
"arxiv:2303.10008",
"arxiv:2401.08342",
"doi:10.57967/hf/2727",
"region:us"
] | [
"audio-to-audio",
"automatic-speech-recognition",
"audio-classification",
"text-to-speech"
] | "2023-10-18T19:15:20Z" | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- fr
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- audio-to-audio
- automatic-speech-recognition
- audio-classification
- text-to-speech
task_ids:
- speaker-identification
pretty_name: Vibravox
viewer: true
dataset_info:
- config_name: speech_clean
features:
- name: audio.headset_microphone
dtype: audio
- name: audio.forehead_accelerometer
dtype: audio
- name: audio.soft_in_ear_microphone
dtype: audio
- name: audio.rigid_in_ear_microphone
dtype: audio
- name: audio.temple_vibration_pickup
dtype: audio
- name: audio.throat_microphone
dtype: audio
- name: gender
dtype: string
- name: speaker_id
dtype: string
- name: sentence_id
dtype: int64
- name: duration
dtype: float64
- name: raw_text
dtype: string
- name: normalized_text
dtype: string
- name: phonemized_text
dtype: string
splits:
- name: train
num_bytes: 100144385419.375
num_examples: 20981
- name: validation
num_bytes: 11821970622.625
num_examples: 2523
- name: test
num_bytes: 14647423280.0
num_examples: 3064
download_size: 124418585390
dataset_size: 126613779322.0
- config_name: speech_noisy
features:
- name: audio.headset_microphone
dtype: audio
- name: audio.forehead_accelerometer
dtype: audio
- name: audio.soft_in_ear_microphone
dtype: audio
- name: audio.rigid_in_ear_microphone
dtype: audio
- name: audio.temple_vibration_pickup
dtype: audio
- name: audio.throat_microphone
dtype: audio
- name: gender
dtype: string
- name: speaker_id
dtype: string
- name: sentence_id
dtype: int64
- name: duration
dtype: float64
- name: raw_text
dtype: string
- name: normalized_text
dtype: string
- name: phonemized_text
dtype: string
splits:
- name: train
num_bytes: 5978781164.5
num_examples: 1220
- name: validation
num_bytes: 647300251.0
num_examples: 132
- name: test
num_bytes: 859092188.0
num_examples: 175
download_size: 7471066223
dataset_size: 7485173603.5
- config_name: speechless_clean
features:
- name: audio.headset_microphone
dtype: audio
- name: audio.forehead_accelerometer
dtype: audio
- name: audio.soft_in_ear_microphone
dtype: audio
- name: audio.rigid_in_ear_microphone
dtype: audio
- name: audio.temple_vibration_pickup
dtype: audio
- name: audio.throat_microphone
dtype: audio
- name: gender
dtype: string
- name: speaker_id
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 8512005740.0
num_examples: 149
- name: validation
num_bytes: 1028286672.0
num_examples: 18
- name: test
num_bytes: 1199717890.0
num_examples: 21
download_size: 9548480336
dataset_size: 10740010302.0
- config_name: speechless_noisy
features:
- name: audio.headset_microphone
dtype: audio
- name: audio.forehead_accelerometer
dtype: audio
- name: audio.soft_in_ear_microphone
dtype: audio
- name: audio.rigid_in_ear_microphone
dtype: audio
- name: audio.temple_vibration_pickup
dtype: audio
- name: audio.throat_microphone
dtype: audio
- name: gender
dtype: string
- name: speaker_id
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 24723250192.0
num_examples: 149
- name: validation
num_bytes: 2986606278.0
num_examples: 18
- name: test
num_bytes: 3484522468.0
num_examples: 21
download_size: 30881658818
dataset_size: 31194378938.0
configs:
- config_name: speech_clean
data_files:
- split: train
path: speech_clean/train-*
- split: validation
path: speech_clean/validation-*
- split: test
path: speech_clean/test-*
- config_name: speech_noisy
data_files:
- split: train
path: speech_noisy/train-*
- split: validation
path: speech_noisy/validation-*
- split: test
path: speech_noisy/test-*
- config_name: speechless_clean
data_files:
- split: train
path: speechless_clean/train-*
- split: validation
path: speechless_clean/validation-*
- split: test
path: speechless_clean/test-*
- config_name: speechless_noisy
data_files:
- split: train
path: speechless_noisy/train-*
- split: validation
path: speechless_noisy/validation-*
- split: test
path: speechless_noisy/test-*
---
# Dataset Card for VibraVox
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65302a613ecbe51d6a6ddcec/zhB1fh-c0pjlj-Tr4Vpmr.png" style="object-fit:contain; width:280px; height:280px;" >
</p>
---
👀 While waiting for the [TooBigContentError issue](https://github.com/huggingface/dataset-viewer/issues/2215) to be resolved by the HuggingFace team, you can explore the dataset viewer of [vibravox-test](https://huggingface.co/datasets/Cnam-LMSSC/vibravox-test)
which has exactly the same architecture.
## DATASET SUMMARY
The [VibraVox dataset](https://vibravox.cnam.fr) is a general purpose audio dataset of french speech captured with body-conduction transducers.
This dataset can be used for various audio machine learning tasks :
- **Automatic Speech Recognition (ASR)** (Speech-to-Text , Speech-to-Phoneme)
- **Audio Bandwidth Extension (BWE)**
- **Speaker Verification (SPKV)** / identification
- **Voice cloning**
- etc ...
### Dataset usage
VibraVox contains 4 subsets, corresponding to different situations tailored for specific tasks. To load a specific subset simply use the following command (```subset``` can be any of the following : ``` "speech_clean" ``` , ``` "speech_noisy" ``` , ``` "speechless_clean" ``` , ``` "speechless_noisy" ```):
```python
from datasets import load_dataset
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset)
```
The dataset is also compatible with the `streaming` mode:
```python
from datasets import load_dataset
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset, streaming=True)
```
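Each example carries the six time-aligned sensor channels plus the transcriptions listed in the dataset schema; a minimal inspection sketch (field names taken from the features above):
```python
from datasets import load_dataset

speech_clean = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train")
sample = speech_clean[0]

print(sample["speaker_id"], sample["gender"], sample["duration"])
print(sample["normalized_text"])
print(sample["phonemized_text"])

# Every sensor channel is a standard audio feature with 'array' and 'sampling_rate'.
headset = sample["audio.headset_microphone"]
throat = sample["audio.throat_microphone"]
print(headset["sampling_rate"], headset["array"].shape, throat["array"].shape)
```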
### Citations, links and details
- **Homepage:** For more information about the project, visit our project page on [https://vibravox.cnam.fr](https://vibravox.cnam.fr)
- **Github repository:** [jhauret/vibravox](https://github.com/jhauret/vibravox) : Source code for ASR, BWE and SPKV tasks using the Vibravox dataset
- **Point of Contact:** [Julien Hauret](https://www.linkedin.com/in/julienhauret/) and [Éric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)
- **Curated by:** [AVA Team](https://lmssc.cnam.fr/fr/recherche/identification-localisation-synthese-de-sources-acoustiques-et-vibratoires) of the [LMSSC Research Laboratory](https://lmssc.cnam.fr)
- **Funded by:** [Agence Nationale Pour la Recherche / AHEAD Project](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-5aac4914c7/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=fa352121b44b60bf6a5917180d5205e6)
- **Language:** French
- **Download size** : 186.64 GB
- **Total audio duration** : 45.62 hours (x6 audio channels)
- **Number of speech utterances** : 28,095
- **License:** Creative Commons Attributions 4.0
If you use the Vibravox dataset for research, **cite this paper**:
```bibtex
@article{jhauret-et-al-2024-vibravox,
title={{Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors}},
author={Hauret, Julien and Olivier, Malo and Joubaud, Thomas and Langrenne, Christophe and
Poir{\'e}e, Sarah and Zimpfer, Véronique and Bavu, {\'E}ric},
year={2024},
eprint={2407.11828},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2407.11828},
}
```
**and this repository**, which is linked to a DOI :
```bibtex
@misc{cnamlmssc2024vibravoxdataset,
author={Hauret, Julien and Olivier, Malo and Langrenne, Christophe and
Poir{\'e}e, Sarah and Bavu, {\'E}ric},
title = { {Vibravox} (Revision 7990b7d) },
year = 2024,
url = { https://huggingface.co/datasets/Cnam-LMSSC/vibravox },
doi = { 10.57967/hf/2727 },
publisher = { Hugging Face }
}
```
---
## SUPPORTED TASKS
<!-- and Leaderboards -->
### Automatic-speech-recognition
- The model is presented with an audio file and asked to transcribe the audio file to written text (either normalized text or phonemized text). The most common evaluation metrics are the word error rate (WER), the character error rate (CER), and the phoneme error rate (PER); a minimal metric sketch is given after this list.
- **Training code:** An example of implementation for the speech-to-phoneme task using [wav2vec2.0](https://arxiv.org/abs/2006.11477) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
- **Trained models:** We also provide trained models for the speech-to-phoneme task for each of the 6 speech sensors of the Vibravox dataset on Huggingface at [Cnam-LMSSC/vibravox_phonemizers](https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers)
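A minimal metric sketch using the third-party `jiwer` package on hypothetical transcriptions; PER is computed the same way, on phoneme sequences instead of words or characters:
```python
import jiwer  # pip install jiwer

reference = "le chat dort sur le canapé"   # hypothetical ground-truth transcription
hypothesis = "le chat dors sur le canapé"  # hypothetical ASR output

print("WER:", jiwer.wer(reference, hypothesis))
print("CER:", jiwer.cer(reference, hypothesis))
```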
### Bandwidth-extension
- Also known as audio super-resolution, which is required to enhance the audio quality of body-conducted captured speech. The model is presented with a pair of audio clips (from a body-conducted captured speech, and from the corresponding clean, full bandwidth airborne-captured speech), and asked to enhance the audio by denoising and regenerating mid and high frequencies from low frequency content only.
- **Training code:** An example of implementation of this task using [Configurable EBEN](https://ieeexplore.ieee.org/document/10244161) ([arXiv link](https://arxiv.org/abs/2303.10008)) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
- **Trained models:** We also provide trained models for the BWE task for each of the 6 speech sensors of the Vibravox dataset on Huggingface at [Cnam-LMSSC/vibravox_EBEN_bwe_models](https://huggingface.co/Cnam-LMSSC/vibravox_EBEN_bwe_models).
- **BWE-Enhanced dataset:** An EBEN-enhanced version of the `test` splits of the Vibravox dataset, generated using these 6 BWE models, is also available on Huggingface at [Cnam-LMSSC/vibravox_enhanced_by_EBEN](https://huggingface.co/datasets/Cnam-LMSSC/vibravox_enhanced_by_EBEN).
### Speaker-verification
- Given an input audio clip and a reference audio clip of a known speaker, the model's objective is to compare the two clips and verify if they are from the same individual. This often involves extracting embeddings from a deep neural network trained on a large dataset of voices. The model then measures the similarity between these feature sets using techniques like cosine similarity or a learned distance metric. This task is crucial in applications requiring secure access control, such as biometric authentication systems, where a person's voice acts as a unique identifier.
- **Testing code:** An example of implementation of this task using a pretrained [ECAPA2 model](https://arxiv.org/abs/2401.08342) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
### Adding your models for supported tasks or contributing for new tasks
Feel free to contribute at the [Vibravox Github repository](https://github.com/jhauret/vibravox), by following the [contributor guidelines](https://github.com/jhauret/vibravox/blob/main/CONTRIBUTING.md).
---
## DATASET DETAILS
### Dataset Description
VibraVox ([vibʁavɔks]) is a GDPR-compliant dataset released in June 2024. It includes speech recorded simultaneously using multiple audio and vibration sensors (from top to bottom in the following figure):
- a forehead miniature vibration sensor (green)
- an in-ear comply foam-embedded microphone (red)
- an in-ear rigid earpiece-embedded microphone (blue)
- a temple vibration pickup (cyan)
- a headset microphone located near the mouth (purple)
- a laryngophone (orange)
The technology and references of each sensor are described and documented in [the dataset creation](#dataset-creation) section and at [https://vibravox.cnam.fr/documentation/hardware/](https://vibravox.cnam.fr/documentation/hardware).
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/P-_IWM3IMED5RBS3Lhydc.png" />
</p>
### Goals
The VibraVox speech corpus has been recorded with 200 participants under various acoustic conditions imposed by a [5th order ambisonics spatialization sphere](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html).
VibraVox aims at serving as a valuable resource for advancing the field of **body-conducted speech analysis** and facilitating the development of **robust communication systems for real-world applications**.
Unlike traditional microphones, which rely on airborne sound waves, body-conduction sensors capture speech signals directly from the body, offering advantages in noisy environments by eliminating the capture of ambient noise. Although body-conduction sensors have been available for decades, their limited bandwidth has restricted their widespread usage. However, this dataset may help open this technology to a wider audience for speech capture and communication in noisy environments.
### Data / sensor mapping
Although the column names in the Vibravox dataset are self-explanatory, here is the mapping, with information on the positioning of each sensor and its technology:
| Vibravox dataset column name | Sensor | Location | Technology |
|:------------------------------------ |:------------------------------------------ |:---------------- |:-------------------------------------------------- |
| ```audio.headset_microphone``` | Headset microphone | Near the mouth | Cardioid electrodynamic microphone |
| ```audio.throat_microphone``` | Laryngophone | Throat / Larynx | Piezoelectric sensor |
| ```audio.soft_in_ear_microphone``` | In-ear soft foam-embedded microphone | Right ear canal | Omnidirectional electret condenser microphone |
| ```audio.rigid_in_ear_microphone``` | In-ear rigid earpiece-embedded microphone | Left ear-canal | Omnidirectional MEMS microphone |
| ```audio.forehead_accelerometer``` | Forehead vibration sensor | Frontal bone | One-axis accelerometer |
| ```audio.temple_vibration_pickup``` | Temple vibration pickup | Zygomatic bone | Figure-of-eight pre-polarized condenser transducer |
---
## DATASET STRUCTURE
### Subsets
Each of the 4 subsets contains **6 columns of audio data**, corresponding to the 5 different body-conduction sensors, plus the standard headset microphone.
Recording was carried out simultaneously on all 6 sensors, **audio files being sampled at 48 kHz and encoded as .wav PCM32 files**.
The 4 subsets correspond to :
- **```speech_clean```** : the speaker reads sentences sourced from the French Wikipedia. This split contains the most data for training for various tasks.
- **```speech_noisy```** : the speaker reads sentences sourced from the French Wikipedia in a noisy environment, created by replaying ambisonic recordings in a spatialization sphere equipped with 56 loudspeakers surrounding the speaker. This subset is primarily intended for testing the various systems (speech enhancement, automatic speech recognition, speaker verification) developed on the basis of the recordings from ```speech_clean```.
- **```speechless_clean```** : the wearer of the devices remains speechless in complete silence, but is free to move their body and face, and can swallow and breathe naturally. These samples are valuable for tasks such as heart-rate tracking or analyzing the noise properties of the various microphones, and can also be used to generate synthetic datasets with realistic physiological (and sensor-inherent) noise captured by body-conduction sensors.
- **```speechless_noisy```** : the wearer of the devices remains speechless in a noisy environment created using [AudioSet](https://research.google.com/audioset/) noise samples. These samples have been selected from relevant classes, normalized in loudness, pseudo-spatialized and played from random directions around the participant using a [5th order ambisonic 3D sound spatializer](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html) equipped with 56 loudspeakers. The objective of this subset is to gather background noises that can be combined with the `speech_clean` recordings while keeping a clean reference, enabling **realistic data augmentation** with noise captured by the body-conduction sensors themselves, including the inherent attenuation of each sensor on different device wearers.
### Splits
All the subsets are available in 3 splits (train, validation and test), with a standard 80% / 10% / 10% repartition and no speaker overlap between splits.
The speakers / participants in a given split are the same across all subsets, which allows you to:
- use the `speechless_noisy` subset for data augmentation, for example (see the sketch after this list)
- test your models trained on the `speech_clean` train set on the `speech_noisy` test set without having to worry that a speaker was seen during training.
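A minimal sketch of such data augmentation (not the authors' pipeline), additively mixing a `speechless_noisy` excerpt into a `speech_clean` utterance at a chosen signal-to-noise ratio; in practice, streaming mode may be preferred given the download size:
```python
import numpy as np
from datasets import load_dataset

speech = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train")
noise = load_dataset("Cnam-LMSSC/vibravox", "speechless_noisy", split="train")

def mix(speech_sample, noise_sample, sensor="audio.headset_microphone", snr_db=5.0):
    """Additively mix a random noise excerpt into a speech clip at a target SNR (in dB)."""
    s = np.asarray(speech_sample[sensor]["array"], dtype=np.float32)
    n = np.asarray(noise_sample[sensor]["array"], dtype=np.float32)
    # Noise clips (~144 s) are much longer than speech clips (~4.5 s), so crop a random excerpt
    start = np.random.randint(0, max(1, len(n) - len(s)))
    n = n[start:start + len(s)]
    # Scale the noise so that 10 * log10(P_speech / P_noise) equals snr_db
    gain = np.sqrt((s ** 2).mean() / ((n ** 2).mean() * 10 ** (snr_db / 10) + 1e-12))
    return s + gain * n

augmented = mix(speech[0], noise[0])
```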
### Data Fields
In non-streaming mode (default), the `path` value of each `datasets.Audio` dictionary points to the locally extracted audio file. In streaming mode, the `path` is the relative path of the audio file inside its archive (as files are not downloaded and extracted locally).
**Common Data Fields for all subsets :**
* `audio.headset_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.forehead_accelerometer` (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
* `audio.soft_in_ear_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.rigid_in_ear_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.temple_vibration_pickup` (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
* `audio.throat_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
* `gender` (string) - gender of speaker (```male``` or ```female```)
* `speaker_id` (string) - encrypted id of speaker
* `duration` (float32) - the audio length in seconds.
**Extra Data Fields for `speech_clean` and `speech_noisy` subsets:**
For **speech** subsets, the dataset has columns corresponding to the pronounced sentences, which are absent from the **speechless** subsets:
* `sentence_id` (int) - id of the pronounced sentence
* `raw_text` (string) - audio segment text (cased and with punctuation preserved)
* `normalized_text` (string) - audio segment normalized text (lower cased, no punctuation, diacritics replaced by the 26 standard French alphabet letters, plus 3 accented characters -- é, è, ê -- and ç, which hold phonetic significance, and the space character, corresponding to 31 possible characters : ``` [' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'ç', 'è', 'é', 'ê'] ```).
* `phonemes` (string) - audio segment phonemized text using exclusively the strict French IPA characters listed below
### Phonemes list and tokenizer
- The strict French IPA characters used in Vibravox are : ``` [' ', 'a', 'b', 'd', 'e', 'f', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 's', 't', 'u', 'v', 'w', 'y', 'z', 'ø', 'ŋ', 'œ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɡ', 'ɲ', 'ʁ', 'ʃ', 'ʒ', '̃'] ```.
- For convenience and research reproducibility, we provide a tokenizer for speech-to-phoneme tasks that covers exactly these phonemes at [https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer](https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer), as sketched below.
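A minimal sketch of loading this tokenizer (assuming it can be loaded through `AutoTokenizer`):
```python
from transformers import AutoTokenizer

# CTC-style phoneme tokenizer shared by the speech-to-phoneme models
tokenizer = AutoTokenizer.from_pretrained("Cnam-LMSSC/vibravox-phonemes-tokenizer")

ids = tokenizer("sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz").input_ids
print(ids)
print(tokenizer.decode(ids))
```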
### Examples of data Instances
#### `speech_clean` or `speech_noisy` splits:
```python
{
'audio.headset_mic': {
'path': '02472_headset_mic.wav',
'array': array([ 0.00045776, 0.00039673, 0.0005188 , ..., -0.00149536,
-0.00094604, 0.00036621]),
'sampling_rate': 48000},
'audio.forehead_accelerometer': {
'path': '02472_forehead_accelerometer.wav',
'array': array([ 0.0010376 , -0.00045776, -0.00085449, ..., -0.00491333,
-0.00524902, -0.00302124]),
'sampling_rate': 48000},
'audio.soft_in_ear_mic': {
'path': '02472_soft_in_ear_mic.wav',
'array': array([-0.06472778, -0.06384277, -0.06292725, ..., -0.02133179,
-0.0213623 , -0.02145386]),
'sampling_rate': 48000},
'audio.rigid_in_ear_mic': {
'path': '02472_rigid_in_ear_mic.wav',
'array': array([-0.01824951, -0.01821899, -0.01812744, ..., -0.00387573,
-0.00427246, -0.00439453]),
'sampling_rate': 48000},
'audio.temple_vibration_pickup':{
'path': '02472_temple_vibration_pickup.wav',
'array': array([-0.0177002 , -0.01791382, -0.01745605, ..., 0.01098633,
0.01260376, 0.01220703]),
'sampling_rate': 48000},
'audio.laryngophone': {
'path': '02472_laryngophone.wav',
'array': array([-2.44140625e-04, -3.05175781e-05, 2.13623047e-04, ...,
4.88281250e-04, 4.27246094e-04, 3.66210938e-04]),
'sampling_rate': 48000},
'gender': 'female',
'speaker_id': 'qt4TPMEPwF',
'sentence_id': 2472,
'duration': 4.5,
'raw_text': "Cette mémoire utilise le changement de phase du verre pour enregistrer l'information.",
'normalized_text': 'cette mémoire utilise le changement de phase du verre pour enregistrer l information',
'phonemized_text': 'sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz dy vɛʁ puʁ ɑ̃ʁʒistʁe lɛ̃fɔʁmasjɔ̃'
}
```
#### `speechless_clean` or `speechless_noisy` splits
(thus missing the text-related fields)
```python
{
'audio.headset_mic': {
'path': 'jMngOy7BdQ_headset_mic.wav',
'array': array([-1.92260742e-03, -2.44140625e-03, -2.99072266e-03, ...,
0.00000000e+00, 3.05175781e-05, -3.05175781e-05]),
'sampling_rate': 48000},
'audio.forehead_accelerometer': {
'path': 'jMngOy7BdQ_forehead_accelerometer.wav',
'array': array([-0.0032959 , -0.00259399, 0.00177002, ..., -0.00073242,
-0.00076294, -0.0005188 ]),
'sampling_rate': 48000},
'audio.soft_in_ear_mic': {
'path': 'jMngOy7BdQ_soft_in_ear_mic.wav',
'array': array([0.00653076, 0.00671387, 0.00683594, ..., 0.00045776, 0.00042725,
0.00042725]),
'sampling_rate': 48000},
'audio.rigid_in_ear_mic': {
'path': 'jMngOy7BdQ_rigid_in_ear_mic.wav',
'array': array([ 1.05895996e-02, 1.03759766e-02, 1.05590820e-02, ...,
0.00000000e+00, -3.05175781e-05, -9.15527344e-05]),
'sampling_rate': 48000},
'audio.temple_vibration_pickup': {
'path': 'jMngOy7BdQ_temple_vibration_pickup.wav',
'array': array([-0.00082397, -0.0020752 , -0.0012207 , ..., -0.00738525,
-0.00814819, -0.00579834]), 'sampling_rate': 48000},
'audio.laryngophone': {
'path': 'jMngOy7BdQ_laryngophone.wav',
'array': array([ 0.00000000e+00, 3.05175781e-05, 1.83105469e-04, ...,
-6.10351562e-05, -1.22070312e-04, -9.15527344e-05]),
'sampling_rate': 48000},
'gender': 'male',
'speaker_id': 'jMngOy7BdQ',
'duration': 54.097
}
```
---
## DATA STATISTICS
### Speakers gender balance
To increase the representativeness and inclusivity of the dataset, a deliberate effort was made to recruit a diverse and gender-balanced group of speakers. The overall gender repartition in terms of number of speakers included in the dataset is **51.6% female participants / 48.4% male participants for all subsets**.
### Speakers age balance
| Gender | Mean age (years) | Median age (years) | Min age (years) | Max age (years) |
|:------------|:-----------------|:--------------------|:-------------------|:--------------------|
| Female | 25.9 | 22 | 19 | 59 |
| Male | 31.4 | 27 | 18 | 82 |
| **All** | **28.55** | **25** | **18** | **82** |
### Audio data
| Subset | Split | Audio duration (hours) | Number of audio clips | Download size | Number of Speakers <br> (Female/Male) | F/M Gender repartition <br> (audio duration) | Mean audio duration (s) | Median audio duration (s) | Max audio duration (s) | Min audio duration (s) |
|:-------------------|:---------------------------------------|:--------------------------------|:-----------------------------------|:------------------------------------|:---------------------------------------|:---------------------------------------------------------|:----------------------------------|:---------------------------------|:------------------------------------|:-------------------------------|
| `speech_clean` | `train` <br> `validation` <br> `test` | 6x26.34 <br> 6x3.11 <br> 6x3.85 | 6x20,981 <br> 6x2,523 <br> 6x3,064 | 108.32GB <br> 12.79GB <br> 15.84GB | 77F/72M <br> 9F/9M <br> 11F/10M | 52.13%/47.87% <br> 51.66%/48.34% <br> 54.43%/45.57% | 4.52 <br> 4.44 <br> 4.53 | 4.43 <br> 4.36 <br> 4.44 | 13.03 <br> 10.64 <br> 10.27 | 1.1 <br> 1.47 <br> 1.38 |
| `speech_noisy` | `train` <br> `validation` <br> `test` | 6x1.57 <br> 6x0.17 <br> 6x0.23 | 6x1,220 <br> 6x132 <br> 6x175 | 6.52GB <br> 0.71GB <br> 0.94GB | 77F/72M <br> 9F/9M <br> 11F/10M | 54.0%/46.0% <br> 55.77%/44.23% <br> 53.92%/46.08% | 4.64 <br> 4.64 <br> 4.65 | 4.59 <br> 4.47 <br> 4.7 | 9.86 <br> 8.56 <br> 7.67 | 1.36 <br> 2.3 <br> 1.85 |
| `speechless_clean` | `train` <br> `validation` <br> `test` | 6x2.24 <br> 6x0.27 <br> 6x0.32 | 6x149 <br> 6x18 <br> 6x21 | 8.44GB <br> 1.02GB <br> 1.19GB | 77F/72M <br> 9F/9M <br> 11F/10M | 51.68%/48.32% <br> 50.00%/50.00% <br> 52.38%/47.62% | 54.10 <br> 54.10 <br> 54.10 | 54.10 <br> 54.10 <br> 54.10 | 54.10 <br> 54.10 <br> 54.10 | 53.99 <br> 54.05 <br> 54.10 |
| `speechless_noisy` | `train` <br> `validation` <br> `test` | 6x5.96 <br> 6x0.72 <br> 6x0.84 | 6x149 <br> 6x18 <br> 6x21 | 24.48GB <br> 2.96GB <br> 3.45GB | 77F/72M <br> 9F/9M <br> 11F/10M | 51.68%/48.32% <br> 50.00%/50.00% <br> 52.38%/47.62% | 144.03 <br> 144.03 <br> 144.04 | 144.03 <br> 144.03 <br> 144.03 | 144.17 <br> 144.05 <br> 144.05 | 143.84 <br> 143.94 <br> 144.03 |
| **Total** | | **6x45.62** | **6x28,471** | **186.64GB** | **97F/91M** | **52.11%/47.89%** | | | | |
---
## DATASET CREATION
### Textual source data
The text read by all participants is collected from the French Wikipedia subset of Common Voice ([link1](https://github.com/common-voice/common-voice/blob/6e43e7e61318bf4605b59379e3f35ba5333d7a29/server/data/fr/wiki-1.fr.txt), [link2](https://github.com/common-voice/common-voice/blob/6e43e7e61318bf4605b59379e3f35ba5333d7a29/server/data/fr/wiki-2.fr.txt)). We applied some additional filters to these textual datasets in order to create a simplified dataset with a minimum number of tokens and to reduce the uncertainty of the pronunciation of some proper names. We therefore removed all proper names except common first names and names of French towns. We also removed any utterances that contain numbers, Greek letters, math symbols, or that are syntactically incorrect.
All lines of the Wikipedia-extracted textual source data have then been phonemized using [bootphon/phonemizer](https://github.com/bootphon/phonemizer) and manually edited to keep only strict French IPA characters.
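A minimal sketch of this phonemization step (not the exact project pipeline), using the espeak backend of [bootphon/phonemizer](https://github.com/bootphon/phonemizer) for French; the output may still require the manual cleanup described above:
```python
from phonemizer import phonemize  # requires the espeak-ng backend installed on the system

text = "cette mémoire utilise le changement de phase du verre"
ipa = phonemize(text, language="fr-fr", backend="espeak", strip=True)
print(ipa)
```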
### Audio Data Collection
#### Sensors positioning and documentation
| **Sensor** | **Image** | **Transducer** | **Online documentation** |
|:---------------------------|:---------------------|:-------------|:----------------------------------------------------------------------------------------------------------------------|
| Reference headset microphone |  | Shure WH20 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/airborne/index.html) |
| In-ear comply foam-embedded microphone || Knowles FG-23329-P07 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/soft_inear/index.html) |
| In-ear rigid earpiece-embedded microphone || Knowles SPH1642HT5H | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/rigid_inear/index.html) |
| Forehead miniature vibration sensor || Knowles BU23173-000 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/forehead/index.html) |
| Temple vibration pickup || AKG C411 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/temple/index.html) |
| Laryngophone | | iXRadio XVTM822D-D35 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/throat/index.html) |
#### Recorded audio data post-processing
Across the sentences collected from the participants, a small number of audio clips exhibited various shortcomings. Despite researchers monitoring and validating each recording individually, the process was not entirely foolproof : mispronounced sentences, sensors shifting from their initial positions, or more significant microphone malfunctions occasionally occurred. In instances where sensors were functional but not ideally positioned—such as when the participant's ear canal was too small for the rigid in-ear microphone to achieve proper acoustic sealing—we chose to retain samples where the bandwidth was slightly narrower than desired. This decision was made to enhance the robustness of our models against the effects of misplaced sensors.
To address those occasional shortcomings and offer a high-quality dataset, we implemented a series of 3 automatic filters to retain only the best audio from the `speech_clean` subset. We preserved only those sentences where all sensors were in optimal recording condition, adhering to predefined criteria defined in [our paper](https://arxiv.org/abs/2407.11828):
- The first filter uses a pre-trained ASR model run on the headset microphone data, which allows to address discrepancies between the labeled transcription and actual pronunciation, ensuring high-quality labels for the speech-to-phoneme task.
- The second filter confirms that the sensor is functioning correctly by verifying that speech exhibits higher energy than silence, thereby identifying potentially unreliable recordings with low vocal energy levels or sensor malfunction.
- The third filter detects sensitivity drift in the sensors, which can occur due to electronic malfunctions or mechanical blockages in the transducer.
- If an audio clip passes all filters, it is not immediately added to the dataset. Instead, VAD-generated timestamps from [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) are used, extending them by 0.3 seconds on both sides. This method helps remove mouse clicks at audio boundaries and ensures the capture of vocal segments without excluding valid speech portions.
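A rough sketch of the timestamp-padding idea (not the exact code used to build the dataset), assuming VAD segments are already available as start/end times in seconds, e.g. from whisper-timestamped:
```python
import numpy as np

def trim_with_margin(audio: np.ndarray, sample_rate: int, segments, margin: float = 0.3) -> np.ndarray:
    """Keep audio from the first to the last voiced segment, padded by `margin` seconds on both sides."""
    start = max(0.0, min(seg["start"] for seg in segments) - margin)
    end = max(seg["end"] for seg in segments) + margin
    return audio[int(start * sample_rate): int(end * sample_rate)]

# Hypothetical VAD output for a 48 kHz clip
segments = [{"start": 0.42, "end": 3.87}]
# trimmed = trim_with_margin(sample["audio.headset_microphone"]["array"], 48000, segments)
```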
### Personal and Sensitive Information
The VibraVox dataset does not contain any data that might be considered as personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.).
The `speaker_id` values were generated using a powerful Fernet encryption algorithm followed by the extraction of a subset of the encrypted id, guaranteeing strict anonymisation of the voice recordings while allowing the dataset maintainers to delete the corresponding data under the right to oblivion.
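As an illustration of this anonymisation scheme (not the actual key handling), a speaker identifier can be derived from a Fernet-encrypted token using the `cryptography` package:
```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # kept secret by the dataset maintainers
fernet = Fernet(key)

token = fernet.encrypt(b"participant-042")  # reversible only with the key
speaker_id = token.decode()[-10:]           # publish only a short subset of the encrypted id
print(speaker_id)
```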
A [consent form](https://vibravox.cnam.fr/documentation/consent/index.html) has been signed by each participant in the VibraVox dataset. This consent form has been approved by the Cnam lawyer. All [Cnil](https://www.cnil.fr/en) requirements have been checked, including the right to oblivion for 50 years.
|
HAERAE-HUB/HAE_RAE_BENCH_1.1 | HAERAE-HUB | "2024-03-30T02:27:25Z" | 13,161 | 19 | [
"task_categories:multiple-choice",
"language:ko",
"license:cc-by-nc-nd-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2309.02706",
"region:us"
] | [
"multiple-choice"
] | "2023-09-25T04:16:13Z" | ---
license: cc-by-nc-nd-4.0
extra_gated_prompt: To request access to the dataset, please fill out this form, and
we'll review and let you know if your use case is approved.
extra_gated_fields:
First Name: text
Last Name: text
Institution: text
Intended Use: text
I agree to use this dataset for non-commercial research ONLY: checkbox
task_categories:
- multiple-choice
language:
- ko
configs:
- config_name: correct_definition_matching
data_files:
- split: test
path: "data/correct_definition_matching-00000-of-00001-65122a307e90d70a.parquet"
- config_name: csat_geo
data_files:
- split: test
path: "data/csat_geo-00000-of-00001-68ce37d4952173a9.parquet"
- config_name: csat_law
data_files:
- split: test
path: "data/csat_law-00000-of-00001-ef0ec0fac2f2238f.parquet"
- config_name: csat_socio
data_files:
- split: test
path: "data/csat_socio-00000-of-00001-0cfadd93827d0d48.parquet"
- config_name: date_understanding
data_files:
- split: test
path: "data/date_understanding-00000-of-00001-36887d05afd4c57f.parquet"
- config_name: general_knowledge
data_files:
- split: test
path: "data/general_knowledge-00000-of-00001-7418d1a0b8c4a615.parquet"
- config_name: history
data_files:
- split: test
path: "data/history-00000-of-00001-9ad551d475d58413.parquet"
- config_name: loan_words
data_files:
- split: test
path: "data/loan_words-00000-of-00001-96ed08632b9a2ae8.parquet"
- config_name: lyrics_denoising
data_files:
- split: test
path: "data/lyrics_denoising-00000-of-00001-ede1ce975e9bad54.parquet"
- config_name: proverbs_denoising
data_files:
- split: test
path: "data/proverbs_denoising-00000-of-00001-f274ad136676b2e3.parquet"
- config_name: rare_words
data_files:
- split: test
path: "data/rare_words-00000-of-00001-4f34314c09209524.parquet"
- config_name: standard_nomenclature
data_files:
- split: test
path: "data/standard_nomenclature-00000-of-00001-52814989a6843328.parquet"
- config_name: reading_comprehension
data_files:
- split: test
path: "data/reading_comprehension-00000-of-00001-f9c8df20c22e46d0.parquet"
---
The HAE_RAE_BENCH 1.1 is an ongoing project to develop a suite of evaluation tasks designed to test the
understanding of models regarding Korean cultural and contextual nuances.
Currently, it comprises 13 distinct tasks, with a total of 4900 instances.
Please note that although this repository contains datasets from the original [HAE-RAE BENCH paper](https://arxiv.org/abs/2309.02706),
the contents are not completely identical. Specifically, the reading comprehension subset from the original version has been removed due to copyright constraints.
In its place, an updated reading comprehension subset has been introduced, sourced from the CSAT, the Korean university entrance examination.
To replicate the studies from the paper, please see [code](https://huggingface.co/datasets/HAERAE-HUB/HAE_RAE_BENCH_1.0).
### Dataset Overview
| Task | Instances | Version | Explanation |
|-----------------------------|-----------|---------|---------------------------------------------------------------------|
| standard_nomenclature | 153 | v1.0 | Multiple-choice questions about Korean standard nomenclatures from NIKL. |
| loan_word | 169 | v1.0 | Multiple-choice questions about Korean loan words from NIKL. |
| rare_word | 405 | v1.0 | Multiple-choice questions about rare Korean words from NIKL. |
| general_knowledge | 176 | v1.0 | Multiple-choice questions on Korean cultural knowledge. |
| history | 188 | v1.0 | Multiple-choice questions on Korean history. |
| date_understanding | 475 | v1.1 | Multiple-choice questions about understanding Korean date expressions. |
| proverbs_denoising | 672 | v1.1 | Questions for generating Korean proverbs. |
| correct_definition_matching | 439 | v1.1 | Multiple-choice questions on matching the correct definition to words in context. |
| lyrics_denoising | 622 | v1.1 | Questions for reconstructing Korean pop song lyrics. |
| reading_comprehension | 936 | v1.1 | Multiple-choice questions on Korean reading comprehension from the CSAT. |
| csat_law | 217 | v1.1 | Multiple-choice questions on Korean law from the CSAT. |
| csat_geo | 150 | v1.1 | Multiple-choice questions on Korean geography from the CSAT. |
| csat_socio | 298 | v1.1 | Multiple-choice questions on Korean social studies from the CSAT. |
| **Total** | **4900** | | |
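Each task is exposed as its own configuration with a single `test` split; a minimal loading sketch (the dataset is gated, so access must first be granted and you must be logged in to the Hub):
```python
from datasets import load_dataset

# One configuration per task, e.g. "csat_law", "history", "reading_comprehension", ...
csat_law = load_dataset("HAERAE-HUB/HAE_RAE_BENCH_1.1", "csat_law", split="test")
print(csat_law[0])
```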
### Point of Contact
For any questions, contact us via the following email:
```
[email protected]
``` |
lmms-lab/MMMU | lmms-lab | "2024-03-08T05:09:42Z" | 13,131 | 4 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-15T06:32:16Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 57719107.0
num_examples: 150
- name: validation
num_bytes: 347519954.0
num_examples: 900
- name: test
num_bytes: 3271046267.0
num_examples: 10500
download_size: 3377778136
dataset_size: 3676285328.0
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
This is a merged version of [MMMU/MMMU](https://huggingface.co/datasets/MMMU/MMMU) with all subsets concatenated.
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [MMMU](https://github.com/MMMU-Benchmark/MMMU). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
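A minimal sketch of loading this merged version (all subjects are concatenated into a single default configuration with `dev`, `validation` and `test` splits):
```python
from datasets import load_dataset

mmmu_val = load_dataset("lmms-lab/MMMU", split="validation")
print(mmmu_val[0]["question"])
```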
```
@article{yue2023mmmu,
title={Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi},
author={Yue, Xiang and Ni, Yuansheng and Zhang, Kai and Zheng, Tianyu and Liu, Ruoqi and Zhang, Ge and Stevens, Samuel and Jiang, Dongfu and Ren, Weiming and Sun, Yuxuan and others},
journal={arXiv preprint arXiv:2311.16502},
year={2023}
}
``` |
dylanebert/3d-arena | dylanebert | "2025-02-27T20:39:18Z" | 13,121 | 11 | [
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:3d",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"image-to-3d"
] | null | "2024-04-05T20:53:13Z" | ---
license: mit
tags:
- image-to-3d
---
For more information, visit the [3D Arena Space](https://huggingface.co/spaces/dylanebert/3d-arena).
Inputs are sourced from [iso3D](https://huggingface.co/datasets/dylanebert/iso3d). |
AmazonScience/massive | AmazonScience | "2022-11-16T15:44:51Z" | 13,119 | 65 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:af-ZA",
"multilinguality:am-ET",
"multilinguality:ar-SA",
"multilinguality:az-AZ",
"multilinguality:bn-BD",
"multilinguality:ca-ES",
"multilinguality:cy-GB",
"multilinguality:da-DK",
"multilinguality:de-DE",
"multilinguality:el-GR",
"multilinguality:en-US",
"multilinguality:es-ES",
"multilinguality:fa-IR",
"multilinguality:fi-FI",
"multilinguality:fr-FR",
"multilinguality:he-IL",
"multilinguality:hi-IN",
"multilinguality:hu-HU",
"multilinguality:hy-AM",
"multilinguality:id-ID",
"multilinguality:is-IS",
"multilinguality:it-IT",
"multilinguality:ja-JP",
"multilinguality:jv-ID",
"multilinguality:ka-GE",
"multilinguality:km-KH",
"multilinguality:kn-IN",
"multilinguality:ko-KR",
"multilinguality:lv-LV",
"multilinguality:ml-IN",
"multilinguality:mn-MN",
"multilinguality:ms-MY",
"multilinguality:my-MM",
"multilinguality:nb-NO",
"multilinguality:nl-NL",
"multilinguality:pl-PL",
"multilinguality:pt-PT",
"multilinguality:ro-RO",
"multilinguality:ru-RU",
"multilinguality:sl-SL",
"multilinguality:sq-AL",
"multilinguality:sv-SE",
"multilinguality:sw-KE",
"multilinguality:ta-IN",
"multilinguality:te-IN",
"multilinguality:th-TH",
"multilinguality:tl-PH",
"multilinguality:tr-TR",
"multilinguality:ur-PK",
"multilinguality:vi-VN",
"multilinguality:zh-CN",
"multilinguality:zh-TW",
"source_datasets:original",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2204.08582",
"region:us",
"natural-language-understanding"
] | [
"text-classification"
] | "2022-04-27T20:48:46Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- ca-ES
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
paperswithcode_id: massive
pretty_name: MASSIVE
language_bcp47:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- ca-ES
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
tags:
- natural-language-understanding
---
# MASSIVE 1.1: A 1M-Example Multilingual Natural Language Understanding Dataset with 52 Typologically-Diverse Languages
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/alexa/massive
- **Repository:** https://github.com/alexa/massive
- **Paper:** https://arxiv.org/abs/2204.08582
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview
- **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues)
### Dataset Summary
MASSIVE 1.1 is a parallel dataset of > 1M utterances across 52 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
| Name | Lang | Utt/Lang | Domains | Intents | Slots |
|:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:|
| MASSIVE 1.1 | 52 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `natural-language-understanding` (NLU) :
- `intent-classification`
- `multi-class-classification`
- `natural-language-understanding`
### Languages
The MASSIVE 1.1 corpus consists of parallel sentences in 52 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Catalan - Spain (ca-ES)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("AmazonScience/massive", "en-US", split='train')
print(dataset[0])
```
## Dataset Structure
### Data Instances
```json
{
"id": "0",
"locale": "fr-FR",
"partition": "test",
"scenario": "alarm",
"intent": "alarm_set",
"utt": "réveille-moi à cinq heures du matin cette semaine",
"annot_utt": "réveille-moi à [time : cinq heures du matin] [date : cette semaine]",
"worker_id": "22",
"slot_method": [
{ "slot": "time", "method": "translation" },
{ "slot": "date", "method": "translation" }
],
"judgments": [
{
"worker_id": "22",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
},
{
"worker_id": "8",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
},
{
"worker_id": "0",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
}
]
}
```
### Data Fields
`id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection; the corresponding SLURP en-US utterance served as the basis for this localization.
`locale`: is the language and country code according to ISO 639-1 and ISO 3166.
`partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp).
`scenario`: is the general domain, aka "scenario" in SLURP terminology, of an utterance
`intent`: is the specific intent of an utterance within a domain formatted as `{scenario}_{intent}`
`utt`: the raw utterance text without annotations
`annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]`
`worker_id`: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
`slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., same expression just in the target language), `localization` (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification).
`judgments`: Each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.
```plain
intent_score : "Does the sentence match the intent?"
0: No
1: Yes
2: It is a reasonable interpretation of the goal
slots_score : "Do all these terms match the categories in square brackets?"
0: No
1: Yes
2: There are no words in square brackets (utterance without a slot)
grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
0: Completely unnatural (nonsensical, cannot be understood at all)
1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
3: Good enough (easily understood and sounds almost natural in your language)
4: Perfect (sounds natural in your language)
spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
0: There are more than 2 spelling errors
1: There are 1-2 spelling errors
2: All words are spelled correctly
language_identification : "The following sentence contains words in the following languages (check all that apply)"
1: target
2: english
3: other
4: target & english
5: target & other
6: english & other
7: target & english & other
```
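Since slot annotations in `annot_utt` follow the `[{label} : {entity}]` pattern, they can be recovered with a small regular expression; a minimal parsing sketch (not an official parser):
```python
import re

SLOT_RE = re.compile(r"\[(?P<label>[^:\]]+) : (?P<entity>[^\]]+)\]")

def parse_slots(annot_utt: str):
    """Return (plain_text, [(label, entity), ...]) from a MASSIVE annotated utterance."""
    slots = [(m.group("label").strip(), m.group("entity").strip()) for m in SLOT_RE.finditer(annot_utt)]
    plain = SLOT_RE.sub(lambda m: m.group("entity").strip(), annot_utt)
    return plain, slots

text, slots = parse_slots("réveille-moi à [time : cinq heures du matin] [date : cette semaine]")
print(text)   # réveille-moi à cinq heures du matin cette semaine
print(slots)  # [('time', 'cinq heures du matin'), ('date', 'cette semaine')]
```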
### Data Splits
|Language|Train|Dev|Test|
|:---:|:---:|:---:|:---:|
|af-ZA|11514|2033|2974|
|am-ET|11514|2033|2974|
|ar-SA|11514|2033|2974|
|az-AZ|11514|2033|2974|
|bn-BD|11514|2033|2974|
|ca-ES|11514|2033|2974|
|cy-GB|11514|2033|2974|
|da-DK|11514|2033|2974|
|de-DE|11514|2033|2974|
|el-GR|11514|2033|2974|
|en-US|11514|2033|2974|
|es-ES|11514|2033|2974|
|fa-IR|11514|2033|2974|
|fi-FI|11514|2033|2974|
|fr-FR|11514|2033|2974|
|he-IL|11514|2033|2974|
|hi-IN|11514|2033|2974|
|hu-HU|11514|2033|2974|
|hy-AM|11514|2033|2974|
|id-ID|11514|2033|2974|
|is-IS|11514|2033|2974|
|it-IT|11514|2033|2974|
|ja-JP|11514|2033|2974|
|jv-ID|11514|2033|2974|
|ka-GE|11514|2033|2974|
|km-KH|11514|2033|2974|
|kn-IN|11514|2033|2974|
|ko-KR|11514|2033|2974|
|lv-LV|11514|2033|2974|
|ml-IN|11514|2033|2974|
|mn-MN|11514|2033|2974|
|ms-MY|11514|2033|2974|
|my-MM|11514|2033|2974|
|nb-NO|11514|2033|2974|
|nl-NL|11514|2033|2974|
|pl-PL|11514|2033|2974|
|pt-PT|11514|2033|2974|
|ro-RO|11514|2033|2974|
|ru-RU|11514|2033|2974|
|sl-SL|11514|2033|2974|
|sq-AL|11514|2033|2974|
|sv-SE|11514|2033|2974|
|sw-KE|11514|2033|2974|
|ta-IN|11514|2033|2974|
|te-IN|11514|2033|2974|
|th-TH|11514|2033|2974|
|tl-PH|11514|2033|2974|
|tr-TR|11514|2033|2974|
|ur-PK|11514|2033|2974|
|vi-VN|11514|2033|2974|
|zh-CN|11514|2033|2974|
|zh-TW|11514|2033|2974|
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
__SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
__Hugging Face Upload and Integration__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
Copyright Amazon.com Inc. or its affiliates.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
```
### Citation Information
Please cite the following papers when using this dataset.
```latex
@misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```
|
boettiger-lab/ca-30x30 | boettiger-lab | "2025-02-22T22:00:54Z" | 13,042 | 0 | [
"license:bsd-2-clause",
"region:us"
] | null | "2024-10-03T17:43:10Z" | ---
license: bsd-2-clause
---
|
ehovy/race | ehovy | "2024-01-04T16:22:34Z" | 12,938 | 59 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1704.04683",
"region:us"
] | [
"multiple-choice"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: race
pretty_name: RACE
dataset_info:
- config_name: all
features:
- name: example_id
dtype: string
- name: article
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: options
sequence: string
splits:
- name: test
num_bytes: 8775370
num_examples: 4934
- name: train
num_bytes: 157308478
num_examples: 87866
- name: validation
num_bytes: 8647176
num_examples: 4887
download_size: 41500647
dataset_size: 174731024
- config_name: high
features:
- name: example_id
dtype: string
- name: article
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: options
sequence: string
splits:
- name: test
num_bytes: 6989097
num_examples: 3498
- name: train
num_bytes: 126243228
num_examples: 62445
- name: validation
num_bytes: 6885263
num_examples: 3451
download_size: 33750880
dataset_size: 140117588
- config_name: middle
features:
- name: example_id
dtype: string
- name: article
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: options
sequence: string
splits:
- name: test
num_bytes: 1786273
num_examples: 1436
- name: train
num_bytes: 31065250
num_examples: 25421
- name: validation
num_bytes: 1761913
num_examples: 1436
download_size: 7781596
dataset_size: 34613436
configs:
- config_name: all
data_files:
- split: test
path: all/test-*
- split: train
path: all/train-*
- split: validation
path: all/validation-*
- config_name: high
data_files:
- split: test
path: high/test-*
- split: train
path: high/train-*
- split: validation
path: high/validation-*
- config_name: middle
data_files:
- split: test
path: middle/test-*
- split: train
path: middle/train-*
- split: validation
path: middle/validation-*
---
# Dataset Card for "race"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.cs.cmu.edu/~glai1/data/race/](http://www.cs.cmu.edu/~glai1/data/race/)
- **Repository:** https://github.com/qizhex/RACE_AR_baselines
- **Paper:** [RACE: Large-scale ReAding Comprehension Dataset From Examinations](https://arxiv.org/abs/1704.04683)
- **Point of Contact:** [Guokun Lai](mailto:[email protected]), [Qizhe Xie](mailto:[email protected])
- **Size of downloaded dataset files:** 76.33 MB
- **Size of the generated dataset:** 349.46 MB
- **Total amount of disk used:** 425.80 MB
### Dataset Summary
RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
dataset was collected from English examinations in China, which are designed for middle school and high school students.
The dataset can serve as training and test sets for machine comprehension.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 174.73 MB
- **Total amount of disk used:** 200.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### high
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 140.12 MB
- **Total amount of disk used:** 165.56 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### middle
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 34.61 MB
- **Total amount of disk used:** 60.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "B",
"article": "\"There is not enough oil in the world now. As time goes by, it becomes less and less, so what are we going to do when it runs ou...",
"example_id": "middle3.txt",
"options": ["There is more petroleum than we can use now.", "Trees are needed for some other things besides making gas.", "We got electricity from ocean tides in the old days.", "Gas wasn't used to run cars in the Second World War."],
"question": "According to the passage, which of the following statements is TRUE?"
}
```
### Data Fields
The data fields are the same among all splits.
#### all
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### high
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### middle
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
### Data Splits
| name |train|validation|test|
|------|----:|---------:|---:|
|all |87866| 4887|4934|
|high |62445| 3451|3498|
|middle|25421| 1436|1436|
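The configurations and splits above can be loaded with the Hugging Face `datasets` library. The following is a minimal, illustrative sketch (it assumes `datasets` is installed; the config and split names are the ones listed in this card):

```python
from datasets import load_dataset

# Load the "all" configuration; "high" and "middle" work the same way.
race = load_dataset("ehovy/race", "all")

# Inspect one training example: article, question, options, and answer key.
example = race["train"][0]
print(example["question"])
for label, option in zip("ABCD", example["options"]):
    print(f"  {label}. {option}")
print("Answer:", example["answer"])

# Split sizes should match the table above (87866 / 4887 / 4934 for "all").
print({split: race[split].num_rows for split in race})
```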
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
http://www.cs.cmu.edu/~glai1/data/race/
1. RACE dataset is available for non-commercial research purpose only.
2. All passages are obtained from the Internet which is not property of Carnegie Mellon University. We are not responsible for the content nor the meaning of these passages.
3. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.
4. We reserve the right to terminate your access to the RACE dataset at any time.
### Citation Information
```
@inproceedings{lai-etal-2017-race,
title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
author = "Lai, Guokun and
Xie, Qizhe and
Liu, Hanxiao and
Yang, Yiming and
Hovy, Eduard",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D17-1082",
doi = "10.18653/v1/D17-1082",
pages = "785--794",
}
```
### Contributions
Thanks to [@abarbosa94](https://github.com/abarbosa94), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
Intraferon/iNat-XC-1000 | Intraferon | "2025-02-24T06:38:26Z" | 12,898 | 0 | [
"language:en",
"region:us"
] | null | "2024-12-21T11:51:02Z" | ---
language:
- en
pretty_name: i
---
# iNat-XC-1000
A multi-modal bird species classification dataset consisting of the sightings of 1000 bird species from two citizen science projects: [iNaturalist](https://www.inaturalist.org/) and [Xeno Canto](https://xeno-canto.org/). |
mhardalov/exams | mhardalov | "2024-02-06T07:20:12Z" | 12,889 | 33 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:multilingual",
"source_datasets:original",
"language:ar",
"language:bg",
"language:de",
"language:es",
"language:fr",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:mk",
"language:pl",
"language:pt",
"language:sq",
"language:sr",
"language:tr",
"language:vi",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2011.03080",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- bg
- de
- es
- fr
- hr
- hu
- it
- lt
- mk
- pl
- pt
- sq
- sr
- tr
- vi
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: exams
pretty_name: EXAMS
config_names:
- alignments
- crosslingual_bg
- crosslingual_hr
- crosslingual_hu
- crosslingual_it
- crosslingual_mk
- crosslingual_pl
- crosslingual_pt
- crosslingual_sq
- crosslingual_sr
- crosslingual_test
- crosslingual_tr
- crosslingual_vi
- crosslingual_with_para_bg
- crosslingual_with_para_hr
- crosslingual_with_para_hu
- crosslingual_with_para_it
- crosslingual_with_para_mk
- crosslingual_with_para_pl
- crosslingual_with_para_pt
- crosslingual_with_para_sq
- crosslingual_with_para_sr
- crosslingual_with_para_test
- crosslingual_with_para_tr
- crosslingual_with_para_vi
- multilingual
- multilingual_with_para
dataset_info:
- config_name: alignments
features:
- name: source_id
dtype: string
- name: target_id_list
sequence: string
splits:
- name: full
num_bytes: 1265256
num_examples: 10834
download_size: 184096
dataset_size: 1265256
- config_name: crosslingual_bg
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 1077329
num_examples: 2344
- name: validation
num_bytes: 281771
num_examples: 593
download_size: 514922
dataset_size: 1359100
- config_name: crosslingual_hr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 807104
num_examples: 2341
- name: validation
num_bytes: 176594
num_examples: 538
download_size: 450090
dataset_size: 983698
- config_name: crosslingual_hu
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 677535
num_examples: 1731
- name: validation
num_bytes: 202012
num_examples: 536
download_size: 401455
dataset_size: 879547
- config_name: crosslingual_it
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 399312
num_examples: 1010
- name: validation
num_bytes: 93175
num_examples: 246
download_size: 226376
dataset_size: 492487
- config_name: crosslingual_mk
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 825702
num_examples: 1665
- name: validation
num_bytes: 204318
num_examples: 410
download_size: 394548
dataset_size: 1030020
- config_name: crosslingual_pl
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 573410
num_examples: 1577
- name: validation
num_bytes: 141633
num_examples: 394
download_size: 341925
dataset_size: 715043
- config_name: crosslingual_pt
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 374798
num_examples: 740
- name: validation
num_bytes: 87714
num_examples: 184
download_size: 208021
dataset_size: 462512
- config_name: crosslingual_sq
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 423744
num_examples: 1194
- name: validation
num_bytes: 110093
num_examples: 311
download_size: 247052
dataset_size: 533837
- config_name: crosslingual_sr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 649560
num_examples: 1323
- name: validation
num_bytes: 145724
num_examples: 314
download_size: 327466
dataset_size: 795284
- config_name: crosslingual_test
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: test
num_bytes: 8402575
num_examples: 19736
download_size: 3438526
dataset_size: 8402575
- config_name: crosslingual_tr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 717599
num_examples: 1571
- name: validation
num_bytes: 182730
num_examples: 393
download_size: 440914
dataset_size: 900329
- config_name: crosslingual_vi
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 953167
num_examples: 1955
- name: validation
num_bytes: 231976
num_examples: 488
download_size: 462940
dataset_size: 1185143
- config_name: crosslingual_with_para_bg
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 47066808
num_examples: 2344
- name: validation
num_bytes: 11916026
num_examples: 593
download_size: 15794611
dataset_size: 58982834
- config_name: crosslingual_with_para_hr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 24889604
num_examples: 2341
- name: validation
num_bytes: 5695066
num_examples: 538
download_size: 9839452
dataset_size: 30584670
- config_name: crosslingual_with_para_hu
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 19035663
num_examples: 1731
- name: validation
num_bytes: 6043265
num_examples: 536
download_size: 9263625
dataset_size: 25078928
- config_name: crosslingual_with_para_it
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 16409235
num_examples: 1010
- name: validation
num_bytes: 4018329
num_examples: 246
download_size: 6907617
dataset_size: 20427564
- config_name: crosslingual_with_para_mk
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 38445894
num_examples: 1665
- name: validation
num_bytes: 9673574
num_examples: 410
download_size: 12878474
dataset_size: 48119468
- config_name: crosslingual_with_para_pl
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 16373781
num_examples: 1577
- name: validation
num_bytes: 4158832
num_examples: 394
download_size: 6539172
dataset_size: 20532613
- config_name: crosslingual_with_para_pt
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 12185383
num_examples: 740
- name: validation
num_bytes: 3093712
num_examples: 184
download_size: 4956969
dataset_size: 15279095
- config_name: crosslingual_with_para_sq
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 17341277
num_examples: 1194
- name: validation
num_bytes: 4449952
num_examples: 311
download_size: 7112236
dataset_size: 21791229
- config_name: crosslingual_with_para_sr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 24575845
num_examples: 1323
- name: validation
num_bytes: 5772509
num_examples: 314
download_size: 8035415
dataset_size: 30348354
- config_name: crosslingual_with_para_test
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: test
num_bytes: 207974374
num_examples: 13510
download_size: 62878029
dataset_size: 207974374
- config_name: crosslingual_with_para_tr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 18597131
num_examples: 1571
- name: validation
num_bytes: 4763097
num_examples: 393
download_size: 7346658
dataset_size: 23360228
- config_name: crosslingual_with_para_vi
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 40882999
num_examples: 1955
- name: validation
num_bytes: 10260374
num_examples: 488
download_size: 13028078
dataset_size: 51143373
- config_name: multilingual
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 3381837
num_examples: 7961
- name: validation
num_bytes: 1141687
num_examples: 2672
- name: test
num_bytes: 5746781
num_examples: 13510
download_size: 4323915
dataset_size: 10270305
- config_name: multilingual_with_para
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 127294567
num_examples: 7961
- name: validation
num_bytes: 42711689
num_examples: 2672
- name: test
num_bytes: 207974374
num_examples: 13510
download_size: 112597818
dataset_size: 377980630
configs:
- config_name: alignments
data_files:
- split: full
path: alignments/full-*
- config_name: crosslingual_bg
data_files:
- split: train
path: crosslingual_bg/train-*
- split: validation
path: crosslingual_bg/validation-*
- config_name: crosslingual_hr
data_files:
- split: train
path: crosslingual_hr/train-*
- split: validation
path: crosslingual_hr/validation-*
- config_name: crosslingual_hu
data_files:
- split: train
path: crosslingual_hu/train-*
- split: validation
path: crosslingual_hu/validation-*
- config_name: crosslingual_it
data_files:
- split: train
path: crosslingual_it/train-*
- split: validation
path: crosslingual_it/validation-*
- config_name: crosslingual_mk
data_files:
- split: train
path: crosslingual_mk/train-*
- split: validation
path: crosslingual_mk/validation-*
- config_name: crosslingual_pl
data_files:
- split: train
path: crosslingual_pl/train-*
- split: validation
path: crosslingual_pl/validation-*
- config_name: crosslingual_pt
data_files:
- split: train
path: crosslingual_pt/train-*
- split: validation
path: crosslingual_pt/validation-*
- config_name: crosslingual_sq
data_files:
- split: train
path: crosslingual_sq/train-*
- split: validation
path: crosslingual_sq/validation-*
- config_name: crosslingual_sr
data_files:
- split: train
path: crosslingual_sr/train-*
- split: validation
path: crosslingual_sr/validation-*
- config_name: crosslingual_test
data_files:
- split: test
path: crosslingual_test/test-*
- config_name: crosslingual_tr
data_files:
- split: train
path: crosslingual_tr/train-*
- split: validation
path: crosslingual_tr/validation-*
- config_name: crosslingual_vi
data_files:
- split: train
path: crosslingual_vi/train-*
- split: validation
path: crosslingual_vi/validation-*
- config_name: crosslingual_with_para_bg
data_files:
- split: train
path: crosslingual_with_para_bg/train-*
- split: validation
path: crosslingual_with_para_bg/validation-*
- config_name: crosslingual_with_para_hr
data_files:
- split: train
path: crosslingual_with_para_hr/train-*
- split: validation
path: crosslingual_with_para_hr/validation-*
- config_name: crosslingual_with_para_hu
data_files:
- split: train
path: crosslingual_with_para_hu/train-*
- split: validation
path: crosslingual_with_para_hu/validation-*
- config_name: crosslingual_with_para_it
data_files:
- split: train
path: crosslingual_with_para_it/train-*
- split: validation
path: crosslingual_with_para_it/validation-*
- config_name: crosslingual_with_para_mk
data_files:
- split: train
path: crosslingual_with_para_mk/train-*
- split: validation
path: crosslingual_with_para_mk/validation-*
- config_name: crosslingual_with_para_pl
data_files:
- split: train
path: crosslingual_with_para_pl/train-*
- split: validation
path: crosslingual_with_para_pl/validation-*
- config_name: crosslingual_with_para_pt
data_files:
- split: train
path: crosslingual_with_para_pt/train-*
- split: validation
path: crosslingual_with_para_pt/validation-*
- config_name: crosslingual_with_para_sq
data_files:
- split: train
path: crosslingual_with_para_sq/train-*
- split: validation
path: crosslingual_with_para_sq/validation-*
- config_name: crosslingual_with_para_sr
data_files:
- split: train
path: crosslingual_with_para_sr/train-*
- split: validation
path: crosslingual_with_para_sr/validation-*
- config_name: crosslingual_with_para_test
data_files:
- split: test
path: crosslingual_with_para_test/test-*
- config_name: crosslingual_with_para_tr
data_files:
- split: train
path: crosslingual_with_para_tr/train-*
- split: validation
path: crosslingual_with_para_tr/validation-*
- config_name: crosslingual_with_para_vi
data_files:
- split: train
path: crosslingual_with_para_vi/train-*
- split: validation
path: crosslingual_with_para_vi/validation-*
- config_name: multilingual
data_files:
- split: train
path: multilingual/train-*
- split: validation
path: multilingual/validation-*
- split: test
path: multilingual/test-*
- config_name: multilingual_with_para
data_files:
- split: train
path: multilingual_with_para/train-*
- split: validation
path: multilingual_with_para/validation-*
- split: test
path: multilingual_with_para/test-*
default: true
---
# Dataset Card for EXAMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/mhardalov/exams-qa
- **Paper:** [EXAMS: A Multi-Subject High School Examinations Dataset for Cross-Lingual and Multilingual Question Answering](https://arxiv.org/abs/2011.03080)
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
EXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- ar
- bg
- de
- es
- fr
- hr
- hu
- it
- lt
- mk
- pl
- pt
- sq
- sr
- tr
- vi
## Dataset Structure
### Data Instances
An example of a data instance (with support paragraphs, in Bulgarian) is:
```
{'answerKey': 'C',
'id': '35dd6b52-7e71-11ea-9eb1-54bef70b159e',
'info': {'grade': 12, 'language': 'Bulgarian', 'subject': 'Biology'},
'question': {'choices': {'label': ['A', 'B', 'C', 'D'],
'para': ['Това води до наследствени изменения между организмите. Мирновременните вождове са наследствени. Черният, сивият и кафявият цвят на оцветяване на тялото се определя от пигмента меланин и възниква в резултат на наследствени изменения. Тези различия, според Монтескьо, не са наследствени. Те са и важни наследствени вещи в клана. Те са били наследствени архонти и управляват демократично. Реликвите са исторически, религиозни, семейни (наследствени) и технически. Общо са направени 800 изменения. Не всички наследствени аномалии на хемоглобина са вредни, т.е. Моногенните наследствени болести, които водят до мигрена, са редки. Няма наследствени владетели. Повечето от тях са наследствени и се предават на потомството. Всичките синове са ерцхерцози на всичките наследствени земи и претенденти. През 1509 г. Фраунбергите са издигнати на наследствени имперски графове. Фамилията Валдбург заради постиженията са номинирани на „наследствени имперски трушсеси“. Фамилията Валдбург заради постиженията са номинирани на „наследствени имперски трушсеси“. Описани са единични наследствени случаи, но по-често липсва фамилна обремененост. Позициите им са наследствени и се предават в рамките на клана. Внесени са изменения в конструкцията на веригите. и са направени изменения в ходовата част. На храма са правени лоши архитектурни изменения. Изменения са предприети и вътре в двореца. Имало двама наследствени вождове. Имало двама наследствени вождове. Годишният календар, „компасът“ и биологичния часовник са наследствени и при много бозайници.',
'Постепенно задълбочаващите се функционални изменения довеждат и до структурни изменения. Те се дължат както на растягането на кожата, така и на въздействието на хормоналните изменения върху кожната тъкан. тези изменения се долавят по-ясно. Впоследствие, той претърпява изменения. Ширината остава без изменения. След тяхното издаване се налагат изменения в първоначалния Кодекс, защото не е съобразен с направените в Дигестите изменения. Еволюционният преход се характеризира със следните изменения: Наблюдават се и сезонни изменения в теглото. Приемат се изменения и допълнения към Устава. Тук се размножават и предизвикват възпалителни изменения. Общо са направени 800 изменения. Бронирането не претърпява съществени изменения. При животните се откриват изменения при злокачествената форма. Срещат се и дегенеративни изменения в семенните каналчета. ТАВКР „Баку“ се строи по изменения проект 1143.4. Трансът се съпровожда с определени изменения на мозъчната дейност. На изменения е подложен и Светия Синод. Внесени са изменения в конструкцията на веригите. На храма са правени лоши архитектурни изменения. Оттогава стиховете претърпяват изменения няколко пъти. Настъпват съществени изменения в музикалната култура. По-късно той претърпява леки изменения. Настъпват съществени изменения в музикалната култура. Претърпява сериозни изменения само носовата надстройка. Хоризонталното брониране е оставено без изменения.',
'Модификациите са обратими. Тези реакции са обратими. В началните стадии тези натрупвания са обратими. Всички такива ефекти са временни и обратими. Много от реакциите са обратими и идентични с тези при гликолизата. Ако в обращение има книжни пари, те са обратими в злато при поискване . Общо са направени 800 изменения. Непоследователността е представена от принципа на "симетрия", при който взаимоотношенията са разглеждани като симетрични или обратими. Откакто формулите в клетките на електронната таблица не са обратими, тази техника е с ограничена стойност. Ефектът на Пелтие-Зеебек и ефектът Томсън са обратими (ефектът на Пелтие е обратен на ефекта на Зеебек). Плазмолизата протича в три етапа, в зависимост от силата и продължителността на въздействието:\n\nПървите два етапа са обратими. Внесени са изменения в конструкцията на веригите. и са направени изменения в ходовата част. На храма са правени лоши архитектурни изменения. Изменения са предприети и вътре в двореца. Оттогава насетне екипите не са претърпявали съществени изменения. Изменения са направени и в колесника на машината. Тези изменения са обявени през октомври 1878 година. Последните изменения са внесени през януари 2009 година. В процеса на последващото проектиране са внесени някои изменения. Сериозните изменения са в края на Втората световна война. Внесени са изменения в конструкцията на погребите и подемниците. Внесени са изменения в конструкцията на погребите и подемниците. Внесени са изменения в конструкцията на погребите и подемниците. Постепенно задълбочаващите се функционални изменения довеждат и до структурни изменения.',
'Ерозионни процеси от масов характер липсват. Обновлението в редиците на партията приема масов характер. Тя обаче няма масов характер поради спецификата на формата. Движението против десятъка придобива масов характер и в Балчишка околия. Понякога екзекутирането на „обсебените от Сатана“ взимало невероятно масов характер. Укриването на дължими като наряд продукти в селата придобива масов характер. Периодичните миграции са в повечето случаи с масов характер и са свързани със сезонните изменения в природата, а непериодичните са премествания на животни, които настъпват след пожари, замърсяване на средата, висока численост и др. Имат необратим характер. Именно по време на двувековните походи на западните рицари използването на гербовете придобива масов характер. След присъединяването на Южен Кавказ към Русия, изселването на азербайджанци от Грузия придобива масов характер. Те имат нормативен характер. Те имат установителен характер. Освобождаването на работна сила обикновено има масов характер, защото обхваща големи контингенти от носителите на труд. Валежите имат подчертано континентален характер. Имат най-често издънков характер. Приливите имат предимно полуденонощен характер. Някои от тях имат мистериален характер. Тези сведения имат случаен, епизодичен характер. Те имат сезонен или годишен характер. Временните обезпечителни мерки имат временен характер. Други имат пожелателен характер (Здравко, Слава). Ловът и събирачеството имат спомагателен характер. Фактически успяват само малко да усилят бронирането на артилерийските погреби, другите изменения носят само частен характер. Някои карикатури имат само развлекателен характер, докато други имат политически нюанси. Поемите на Хезиод имат по-приложен характер.'],
'text': ['дължат се на фенотипни изменения',
'имат масов характер',
'са наследствени',
'са обратими']},
'stem': 'Мутационите изменения:'}}
```
### Data Fields
A data instance contains the following fields:
- `id`: A question ID, unique across the dataset
- `question`: the question contains the following:
  - `stem`: the text of the question stem (the question itself)
- `choices`: a set of 3 to 5 candidate answers, which each have:
- `text`: the text of the answers
- `label`: a label in `['A', 'B', 'C', 'D', 'E']` used to match to the `answerKey`
    - `para`: (optional) a supporting paragraph from Wikipedia in the same language as the question and answer
- `answerKey`: the key corresponding to the right answer's `label`
- `info`: some additional information on the question including:
- `grade`: the school grade for the exam this question was taken from
- `subject`: a free text description of the academic subject
- `language`: the English name of the language for this question
### Data Splits
Depending on the configuration, the dataset has different splits:
- "alignments": a single "full" split
- "multilingual" and "multilingual_with_para": "train", "validation" and "test" splits
- "crosslingual_test" and "crosslingual_with_para_test": a single "test" split
- the rest of crosslingual configurations: "train" and "validation" splits
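As an illustrative sketch of how these configurations map onto the `datasets` loading API (assuming the Hugging Face `datasets` library is installed; the config names are those listed in the YAML metadata above):

```python
from datasets import load_dataset

# Multilingual setting: train/validation/test across all 16 languages.
multi = load_dataset("mhardalov/exams", "multilingual")
print(multi)

# Crosslingual setting for a single language, e.g. Bulgarian (train + validation).
bg = load_dataset("mhardalov/exams", "crosslingual_bg")
sample = bg["train"][0]
print(sample["question"]["stem"])
print(sample["question"]["choices"]["text"])   # list of candidate answer texts
print("Answer key:", sample["answerKey"], "| language:", sample["info"]["language"])
```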
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
EXAMS was collected from official state exams prepared by the ministries of education of various countries. These exams are taken by students graduating from high school, and often require knowledge learned through the entire course.
The questions cover a large variety of subjects and material based on each country's education system. They cover major school subjects such as Biology, Chemistry, Geography, History, and Physics, but also highly specialized ones such as Agriculture, Geology, and Informatics, as well as some applied and profiled studies.
Some countries allow students to take official examinations in several languages. This dataset provides 9,857 parallel question pairs spread across seven languages coming from Croatia (Croatian, Serbian, Italian, Hungarian), Hungary (Hungarian, German, French, Spanish, Croatian, Serbian, Italian), and North Macedonia (Macedonian, Albanian, Turkish).
For all languages in the dataset, the first step in the process of data collection was to download the PDF files per year, per subject, and per language (when parallel languages were available in the same source), convert the PDF files to text, and select those that were well formatted and followed the document structure.
Then, Regular Expressions (RegEx) were used to parse the questions, their corresponding choices, and the correct answer choice. To ensure that all questions are answerable using textual input only, questions that contained visual information were removed, identified using a curated list of words such as map, table, picture, graph, etc., in the corresponding language.
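The exact filtering code is not reproduced in this card; the snippet below is only a rough sketch of the kind of keyword-based filter described above, not the authors' actual pipeline, and the word list and data structures are illustrative assumptions:

```python
import re

# Illustrative stop-list for visual content (English shown); the real pipeline
# used a curated, language-specific list for each language in the dataset.
VISUAL_WORDS = ["map", "table", "picture", "graph", "figure", "diagram"]
VISUAL_RE = re.compile(r"\b(" + "|".join(VISUAL_WORDS) + r")\b", re.IGNORECASE)

def is_text_only(stem, choices):
    """Keep a question only if neither the stem nor any choice mentions visual material."""
    return not any(VISUAL_RE.search(text) for text in [stem, *choices])

# Hypothetical parsed question, for illustration only.
question = {"stem": "Which region is shaded on the map?", "choices": ["North", "South"]}
print(is_text_only(question["stem"], question["choices"]))  # False -> filtered out
```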
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset, which contains paragraphs from Wikipedia, is licensed under CC-BY-SA 4.0. The code in this repository is licensed according to the [LICENSE file](https://raw.githubusercontent.com/mhardalov/exams-qa/main/LICENSE).
### Citation Information
```
@inproceedings{hardalov-etal-2020-exams,
title = "{EXAMS}: A Multi-subject High School Examinations Dataset for Cross-lingual and Multilingual Question Answering",
author = "Hardalov, Momchil and
Mihaylov, Todor and
Zlatkova, Dimitrina and
Dinkov, Yoan and
Koychev, Ivan and
Nakov, Preslav",
editor = "Webber, Bonnie and
Cohn, Trevor and
He, Yulan and
Liu, Yang",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.438",
doi = "10.18653/v1/2020.emnlp-main.438",
pages = "5427--5444",
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
kamilakesbi/transformers_image_doc | kamilakesbi | "2024-04-22T15:51:29Z" | 12,810 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-04-22T15:50:03Z" | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 406434.0
num_examples: 2
download_size: 381914
dataset_size: 406434.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
openai/MMMLU | openai | "2024-10-16T18:39:00Z" | 12,803 | 472 | [
"task_categories:question-answering",
"language:ar",
"language:bn",
"language:de",
"language:es",
"language:fr",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:pt",
"language:sw",
"language:yo",
"language:zh",
"license:mit",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2009.03300",
"region:us"
] | [
"question-answering"
] | "2024-09-13T16:37:19Z" | ---
task_categories:
- question-answering
configs:
- config_name: default
data_files:
- split: test
path: test/*.csv
- config_name: AR_XY
data_files:
- split: test
path: test/mmlu_AR-XY.csv
- config_name: BN_BD
data_files:
- split: test
path: test/mmlu_BN-BD.csv
- config_name: DE_DE
data_files:
- split: test
path: test/mmlu_DE-DE.csv
- config_name: ES_LA
data_files:
- split: test
path: test/mmlu_ES-LA.csv
- config_name: FR_FR
data_files:
- split: test
path: test/mmlu_FR-FR.csv
- config_name: HI_IN
data_files:
- split: test
path: test/mmlu_HI-IN.csv
- config_name: ID_ID
data_files:
- split: test
path: test/mmlu_ID-ID.csv
- config_name: IT_IT
data_files:
- split: test
path: test/mmlu_IT-IT.csv
- config_name: JA_JP
data_files:
- split: test
path: test/mmlu_JA-JP.csv
- config_name: KO_KR
data_files:
- split: test
path: test/mmlu_KO-KR.csv
- config_name: PT_BR
data_files:
- split: test
path: test/mmlu_PT-BR.csv
- config_name: SW_KE
data_files:
- split: test
path: test/mmlu_SW-KE.csv
- config_name: YO_NG
data_files:
- split: test
path: test/mmlu_YO-NG.csv
- config_name: ZH_CN
data_files:
- split: test
path: test/mmlu_ZH-CN.csv
language:
- ar
- bn
- de
- es
- fr
- hi
- id
- it
- ja
- ko
- pt
- sw
- yo
- zh
license: mit
---
# Multilingual Massive Multitask Language Understanding (MMMLU)
The MMLU is a widely recognized benchmark of general knowledge attained by AI models. It covers a broad range of topics from 57 different categories, spanning elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science.
We translated the MMLU’s test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases confidence in the accuracy of the translations, especially for low-resource languages like Yoruba. We are publishing the professional human translations and the code we use to run the evaluations.
This effort reflects our commitment to improving the multilingual capabilities of AI models, ensuring they perform accurately across languages, particularly for underrepresented communities. By prioritizing high-quality translations, we aim to make AI technology more inclusive and effective for users worldwide.
## Locales
MMMLU contains the MMLU test set translated into the following locales:
* AR_XY (Arabic)
* BN_BD (Bengali)
* DE_DE (German)
* ES_LA (Spanish)
* FR_FR (French)
* HI_IN (Hindi)
* ID_ID (Indonesian)
* IT_IT (Italian)
* JA_JP (Japanese)
* KO_KR (Korean)
* PT_BR (Brazilian Portuguese)
* SW_KE (Swahili)
* YO_NG (Yoruba)
* ZH_CN (Simplified Chinese)
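
Each locale above corresponds to a configuration of this repository (e.g. `SW_KE`), as listed in the YAML metadata. A minimal loading sketch, assuming the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the Swahili translation of the MMLU test set.
mmmlu_sw = load_dataset("openai/MMMLU", "SW_KE", split="test")
print(mmmlu_sw[0])

# The default configuration concatenates all 14 locales into one test split.
mmmlu_all = load_dataset("openai/MMMLU", split="test")
print(len(mmmlu_all))
```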
## Sources
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). [*Measuring Massive Multitask Language Understanding*](https://arxiv.org/abs/2009.03300).
[OpenAI Simple Evals GitHub Repository](https://github.com/openai/simple-evals) |
kevinjesse/ManyRefactors4C | kevinjesse | "2022-09-25T12:59:34Z" | 12,794 | 0 | [
"license:cc-by-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-09-25T06:28:58Z" | ---
license: cc-by-2.0
---
|
alexandrainst/nota | alexandrainst | "2023-10-03T07:51:33Z" | 12,788 | 1 | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"language:da",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"automatic-speech-recognition",
"text-to-speech"
] | "2023-10-02T16:01:30Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 361607522962
num_examples: 98600
download_size: 256213612292
dataset_size: 361607522962
license: cc0-1.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- da
pretty_name: Nota
size_categories:
- 10K<n<100K
---
# Dataset Card for Nota
## Dataset Description
- **Repository:** <https://sprogtek-ressources.digst.govcloud.dk/nota>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 256.21 GB
- **Size of the generated dataset:** 361.62 GB
- **Total amount of disk used:** 617.83 GB
### Dataset Summary
This data was created by the public institution [Nota](https://nota.dk/), which is part of the Danish Ministry of Culture. Nota has a library of audiobooks and audiomagazines for people with reading or sight disabilities. Nota also produces a number of audiobooks and audiomagazines itself.
The dataset consists of audio and associated transcriptions from Nota's audiomagazines "Inspiration" and "Radio/TV". All files related to one reading of one edition of the magazine "Inspiration" or "Radio/TV" have been segmented into clips of 2-50 seconds.
The dataset has been published as part of the sprogteknologi.dk initiative within the [Danish Agency for Digital Government (DIGST)](https://www.digst.dk).
### Supported Tasks and Leaderboards
Automatic speech recognition and speech synthesis are the intended tasks for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 256.21 GB
- **Size of the generated dataset:** 361.62 GB
- **Total amount of disk used:** 617.83 GB
An example from the dataset looks as follows.
```
{
'audio': {'path': 'RMHL20190028_000140.wav',
'array': array([-0.04023849, -0.06235407, -0.04545404, ..., 0.00014322,
0.00017925, 0.00018811]),
'sampling_rate': 16000},
'text': '13:05: 24syv Dokumentar 14:05: Spørge Jørgen Vært: Jørgen Leth'
}
```
### Data Fields
The data fields are the same among all splits.
- `audio`: an `Audio` feature.
- `text`: a `string` feature.
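Given the dataset's size (roughly 256 GB of downloads), streaming is a practical way to inspect it without fetching everything up front. A minimal sketch with the `datasets` library, following the schema above:
```
# Minimal sketch: stream the Nota train split instead of downloading ~256 GB up front.
from datasets import load_dataset

nota = load_dataset("alexandrainst/nota", split="train", streaming=True)
sample = next(iter(nota))
print(sample["text"])                    # the transcription string
print(sample["audio"]["sampling_rate"])  # 16000, per the schema above
```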
### Dataset Statistics
There are 98,600 samples in the dataset.
#### Transcription Length Distribution

## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) reorganised the dataset and uploaded it to the Hugging Face Hub.
### Licensing Information
The dataset is licensed under the [CC0
license](https://creativecommons.org/share-your-work/public-domain/cc0/). |
japanese-asr/whisper_transcriptions.mls.wer_10.0.vectorized | japanese-asr | "2024-09-15T01:35:08Z" | 12,726 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-11T12:32:36Z" | ---
dataset_info:
- config_name: subset_0
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95550219596
num_examples: 62101
download_size: 43092578892
dataset_size: 95550219596
- config_name: subset_1
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95892233884
num_examples: 62323
download_size: 43217224829
dataset_size: 95892233884
- config_name: subset_10
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659534424
num_examples: 62172
download_size: 43197712726
dataset_size: 95659534424
- config_name: subset_100
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95570234896
num_examples: 62114
download_size: 43084233453
dataset_size: 95570234896
- config_name: subset_101
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95741000524
num_examples: 62225
download_size: 43183665345
dataset_size: 95741000524
- config_name: subset_102
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95732487892
num_examples: 62219
download_size: 43229537725
dataset_size: 95732487892
- config_name: subset_103
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95776664816
num_examples: 62248
download_size: 43187441638
dataset_size: 95776664816
- config_name: subset_104
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95930500816
num_examples: 62348
download_size: 43294625977
dataset_size: 95930500816
- config_name: subset_105
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95836868972
num_examples: 62287
download_size: 43251807028
dataset_size: 95836868972
- config_name: subset_106
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95940716900
num_examples: 62355
download_size: 43289304103
dataset_size: 95940716900
- config_name: subset_107
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95782476488
num_examples: 62252
download_size: 43209137820
dataset_size: 95782476488
- config_name: subset_108
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 96007104792
num_examples: 62398
download_size: 43221018658
dataset_size: 96007104792
- config_name: subset_109
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95764776944
num_examples: 62240
download_size: 43162176171
dataset_size: 95764776944
- config_name: subset_11
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95749669360
num_examples: 62230
download_size: 43193067430
dataset_size: 95749669360
- config_name: subset_110
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95826353540
num_examples: 62281
download_size: 43217482451
dataset_size: 95826353540
- config_name: subset_111
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95910493660
num_examples: 62335
download_size: 43268379463
dataset_size: 95910493660
- config_name: subset_112
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95782539616
num_examples: 62252
download_size: 43198507530
dataset_size: 95782539616
- config_name: subset_113
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95496693376
num_examples: 62066
download_size: 43106662052
dataset_size: 95496693376
- config_name: subset_114
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16920876
num_examples: 11
download_size: 7573002
dataset_size: 16920876
- config_name: subset_115
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95639177564
num_examples: 62159
download_size: 43180784518
dataset_size: 95639177564
- config_name: subset_116
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95625267448
num_examples: 62150
download_size: 43124129761
dataset_size: 95625267448
- config_name: subset_117
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95545048296
num_examples: 62098
download_size: 43082968259
dataset_size: 95545048296
- config_name: subset_118
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95866827908
num_examples: 62307
download_size: 43167164098
dataset_size: 95866827908
- config_name: subset_119
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 96068332252
num_examples: 62437
download_size: 43339136980
dataset_size: 96068332252
- config_name: subset_12
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95728691164
num_examples: 62217
download_size: 43198747627
dataset_size: 95728691164
- config_name: subset_120
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95722632700
num_examples: 62213
download_size: 43167373358
dataset_size: 95722632700
- config_name: subset_121
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95732191100
num_examples: 62219
download_size: 43221505796
dataset_size: 95732191100
- config_name: subset_122
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95699024432
num_examples: 62198
download_size: 43219580053
dataset_size: 95699024432
- config_name: subset_123
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95713523564
num_examples: 62207
download_size: 43177149081
dataset_size: 95713523564
- config_name: subset_124
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95463256840
num_examples: 62044
download_size: 43081995426
dataset_size: 95463256840
- config_name: subset_125
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95597812312
num_examples: 62132
download_size: 43093919552
dataset_size: 95597812312
- config_name: subset_126
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95798796016
num_examples: 62262
download_size: 43254288601
dataset_size: 95798796016
- config_name: subset_127
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95674872576
num_examples: 62182
download_size: 43251503801
dataset_size: 95674872576
- config_name: subset_128
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95741950380
num_examples: 62225
download_size: 43150675085
dataset_size: 95741950380
- config_name: subset_129
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95902821264
num_examples: 62330
download_size: 43266797081
dataset_size: 95902821264
- config_name: subset_13
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95706484544
num_examples: 62202
download_size: 43194357797
dataset_size: 95706484544
- config_name: subset_130
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13845812
num_examples: 9
download_size: 6597728
dataset_size: 13845812
- config_name: subset_131
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95385820008
num_examples: 61994
download_size: 43049793791
dataset_size: 95385820008
- config_name: subset_132
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95718111696
num_examples: 62210
download_size: 43160367467
dataset_size: 95718111696
- config_name: subset_133
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95642427284
num_examples: 62161
download_size: 43145455128
dataset_size: 95642427284
- config_name: subset_134
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95783162736
num_examples: 62252
download_size: 43157288094
dataset_size: 95783162736
- config_name: subset_135
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95905643680
num_examples: 62332
download_size: 43211878248
dataset_size: 95905643680
- config_name: subset_136
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95602572980
num_examples: 62135
download_size: 43148250609
dataset_size: 95602572980
- config_name: subset_137
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95742569912
num_examples: 62226
download_size: 43196126465
dataset_size: 95742569912
- config_name: subset_138
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95784277468
num_examples: 62253
download_size: 43213036863
dataset_size: 95784277468
- config_name: subset_14
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95484035440
num_examples: 62058
download_size: 43038787620
dataset_size: 95484035440
- config_name: subset_15
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95847923004
num_examples: 62295
download_size: 43269622880
dataset_size: 95847923004
- config_name: subset_16
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143676345616
num_examples: 93380
download_size: 64763101794
dataset_size: 143676345616
- config_name: subset_17
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143218562076
num_examples: 93081
download_size: 64543519703
dataset_size: 143218562076
- config_name: subset_18
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1538508
num_examples: 1
download_size: 888657
dataset_size: 1538508
- config_name: subset_19
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143234313008
num_examples: 93092
download_size: 64590945738
dataset_size: 143234313008
- config_name: subset_2
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95692976304
num_examples: 62194
download_size: 43156432229
dataset_size: 95692976304
- config_name: subset_20
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143387469416
num_examples: 93192
download_size: 64657130955
dataset_size: 143387469416
- config_name: subset_21
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143761745188
num_examples: 93435
download_size: 64848639452
dataset_size: 143761745188
- config_name: subset_22
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143510788288
num_examples: 93272
download_size: 64664207735
dataset_size: 143510788288
- config_name: subset_23
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143745597332
num_examples: 93425
download_size: 64881327829
dataset_size: 143745597332
- config_name: subset_24
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143524148912
num_examples: 93280
download_size: 64658212505
dataset_size: 143524148912
- config_name: subset_25
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143893466228
num_examples: 93521
download_size: 64887011756
dataset_size: 143893466228
- config_name: subset_26
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143679361468
num_examples: 93381
download_size: 64845399473
dataset_size: 143679361468
- config_name: subset_27
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143578388120
num_examples: 93316
download_size: 64733082218
dataset_size: 143578388120
- config_name: subset_28
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143336154232
num_examples: 93158
download_size: 64663766459
dataset_size: 143336154232
- config_name: subset_29
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7691452
num_examples: 5
download_size: 3459998
dataset_size: 7691452
- config_name: subset_3
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95679747492
num_examples: 62185
download_size: 43162138038
dataset_size: 95679747492
- config_name: subset_30
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143445434128
num_examples: 93230
download_size: 64632174781
dataset_size: 143445434128
- config_name: subset_31
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143557489496
num_examples: 93302
download_size: 64701593443
dataset_size: 143557489496
- config_name: subset_32
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143480746600
num_examples: 93252
download_size: 64739797925
dataset_size: 143480746600
- config_name: subset_33
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143804688340
num_examples: 93463
download_size: 64883427549
dataset_size: 143804688340
- config_name: subset_34
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143763133852
num_examples: 93435
download_size: 64878027444
dataset_size: 143763133852
- config_name: subset_35
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143471499924
num_examples: 93247
download_size: 64668279919
dataset_size: 143471499924
- config_name: subset_36
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143578658920
num_examples: 93316
download_size: 64771308961
dataset_size: 143578658920
- config_name: subset_37
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143421843384
num_examples: 93214
download_size: 64702071071
dataset_size: 143421843384
- config_name: subset_38
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143684196348
num_examples: 93385
download_size: 64784473635
dataset_size: 143684196348
- config_name: subset_39
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143680679576
num_examples: 93382
download_size: 64742283095
dataset_size: 143680679576
- config_name: subset_4
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95688480364
num_examples: 62191
download_size: 43186075154
dataset_size: 95688480364
- config_name: subset_40
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12306560
num_examples: 8
download_size: 5824713
dataset_size: 12306560
- config_name: subset_41
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143692662464
num_examples: 93390
download_size: 64709902673
dataset_size: 143692662464
- config_name: subset_42
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143203990624
num_examples: 93072
download_size: 64613043749
dataset_size: 143203990624
- config_name: subset_43
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143423685888
num_examples: 93216
download_size: 64706208398
dataset_size: 143423685888
- config_name: subset_44
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143599898184
num_examples: 93330
download_size: 64731002218
dataset_size: 143599898184
- config_name: subset_45
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143523564744
num_examples: 93280
download_size: 64697514930
dataset_size: 143523564744
- config_name: subset_46
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143657253604
num_examples: 93367
download_size: 64832235795
dataset_size: 143657253604
- config_name: subset_47
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143635083108
num_examples: 93353
download_size: 64816566584
dataset_size: 143635083108
- config_name: subset_48
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143498198048
num_examples: 93264
download_size: 64761835832
dataset_size: 143498198048
- config_name: subset_49
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95628254736
num_examples: 62152
download_size: 43114668093
dataset_size: 95628254736
- config_name: subset_5
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95761408264
num_examples: 62238
download_size: 43098449131
dataset_size: 95761408264
- config_name: subset_50
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95710114492
num_examples: 62205
download_size: 43145722727
dataset_size: 95710114492
- config_name: subset_51
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95718176136
num_examples: 62210
download_size: 43219455050
dataset_size: 95718176136
- config_name: subset_52
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95945517480
num_examples: 62358
download_size: 43266431091
dataset_size: 95945517480
- config_name: subset_53
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659331056
num_examples: 62172
download_size: 43159537104
dataset_size: 95659331056
- config_name: subset_54
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95733778168
num_examples: 62220
download_size: 43168252529
dataset_size: 95733778168
- config_name: subset_55
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95529164348
num_examples: 62087
download_size: 43137593293
dataset_size: 95529164348
- config_name: subset_56
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12308272
num_examples: 8
download_size: 6100142
dataset_size: 12308272
- config_name: subset_57
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95725368820
num_examples: 62215
download_size: 43172851860
dataset_size: 95725368820
- config_name: subset_58
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95675366136
num_examples: 62182
download_size: 43119589688
dataset_size: 95675366136
- config_name: subset_59
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95757684168
num_examples: 62236
download_size: 43189671985
dataset_size: 95757684168
- config_name: subset_6
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95578098092
num_examples: 62119
download_size: 43139859949
dataset_size: 95578098092
- config_name: subset_60
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95700890356
num_examples: 62199
download_size: 43168843386
dataset_size: 95700890356
- config_name: subset_61
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95730743624
num_examples: 62218
download_size: 43225423665
dataset_size: 95730743624
- config_name: subset_62
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95939249240
num_examples: 62354
download_size: 43194512512
dataset_size: 95939249240
- config_name: subset_63
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95771529340
num_examples: 62245
download_size: 43196958015
dataset_size: 95771529340
- config_name: subset_64
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95848159444
num_examples: 62295
download_size: 43215788957
dataset_size: 95848159444
- config_name: subset_65
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95882062420
num_examples: 62317
download_size: 43207070631
dataset_size: 95882062420
- config_name: subset_66
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95871828072
num_examples: 62310
download_size: 43225588983
dataset_size: 95871828072
- config_name: subset_67
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95709466896
num_examples: 62204
download_size: 43120587410
dataset_size: 95709466896
- config_name: subset_68
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95761289000
num_examples: 62238
download_size: 43168961328
dataset_size: 95761289000
- config_name: subset_69
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95724204184
num_examples: 62214
download_size: 43179130241
dataset_size: 95724204184
- config_name: subset_7
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1538156
num_examples: 1
download_size: 820398
dataset_size: 1538156
- config_name: subset_70
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95609034668
num_examples: 62139
download_size: 43117847437
dataset_size: 95609034668
- config_name: subset_71
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95387596792
num_examples: 61996
download_size: 43021151166
dataset_size: 95387596792
- config_name: subset_72
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4615244
num_examples: 3
download_size: 2452533
dataset_size: 4615244
- config_name: subset_73
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95846947264
num_examples: 62294
download_size: 43209608729
dataset_size: 95846947264
- config_name: subset_74
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95818688740
num_examples: 62275
download_size: 43205934001
dataset_size: 95818688740
- config_name: subset_75
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95775232460
num_examples: 62247
download_size: 43190536605
dataset_size: 95775232460
- config_name: subset_76
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95665581880
num_examples: 62176
download_size: 43152943289
dataset_size: 95665581880
- config_name: subset_77
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95538306996
num_examples: 62093
download_size: 43186488482
dataset_size: 95538306996
- config_name: subset_78
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95653366336
num_examples: 62168
download_size: 43143452346
dataset_size: 95653366336
- config_name: subset_79
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95676339900
num_examples: 62183
download_size: 43128557833
dataset_size: 95676339900
- config_name: subset_8
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95715343992
num_examples: 62208
download_size: 43164885461
dataset_size: 95715343992
- config_name: subset_80
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95766599296
num_examples: 62242
download_size: 43239930128
dataset_size: 95766599296
- config_name: subset_81
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143321630332
num_examples: 93149
download_size: 64626209893
dataset_size: 143321630332
- config_name: subset_82
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143413608572
num_examples: 93209
download_size: 64621538517
dataset_size: 143413608572
- config_name: subset_83
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143531418364
num_examples: 93285
download_size: 64801287818
dataset_size: 143531418364
- config_name: subset_84
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143436760360
num_examples: 93224
download_size: 64683558260
dataset_size: 143436760360
- config_name: subset_85
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143693790772
num_examples: 93391
download_size: 64884349404
dataset_size: 143693790772
- config_name: subset_86
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143485525272
num_examples: 93256
download_size: 64696999488
dataset_size: 143485525272
- config_name: subset_87
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143365510504
num_examples: 93178
download_size: 64633922570
dataset_size: 143365510504
- config_name: subset_88
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143459907372
num_examples: 93239
download_size: 64646778148
dataset_size: 143459907372
- config_name: subset_89
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143713229680
num_examples: 93404
download_size: 64788831986
dataset_size: 143713229680
- config_name: subset_9
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95562629284
num_examples: 62109
download_size: 43123134026
dataset_size: 95562629284
- config_name: subset_90
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143379593708
num_examples: 93187
download_size: 64733038170
dataset_size: 143379593708
- config_name: subset_91
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95696250128
num_examples: 62196
download_size: 43134085960
dataset_size: 95696250128
- config_name: subset_92
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95934407652
num_examples: 62351
download_size: 43224929875
dataset_size: 95934407652
- config_name: subset_93
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95753580284
num_examples: 62233
download_size: 43222236201
dataset_size: 95753580284
- config_name: subset_94
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95891874916
num_examples: 62323
download_size: 43252770071
dataset_size: 95891874916
- config_name: subset_95
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659464728
num_examples: 62172
download_size: 43086228614
dataset_size: 95659464728
- config_name: subset_96
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95794821896
num_examples: 62260
download_size: 43179370699
dataset_size: 95794821896
- config_name: subset_97
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95622506640
num_examples: 62148
download_size: 43156846644
dataset_size: 95622506640
- config_name: subset_98
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13845628
num_examples: 9
download_size: 6713409
dataset_size: 13845628
- config_name: subset_99
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95636401316
num_examples: 62157
download_size: 43147436863
dataset_size: 95636401316
configs:
- config_name: subset_0
data_files:
- split: train
path: subset_0/train-*
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
jxie/camelyon17 | jxie | "2023-06-22T09:10:17Z" | 12,694 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-06-20T19:19:23Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: id_train
num_bytes: 1028118482.46
num_examples: 302436
- name: id_val
num_bytes: 114778024.28
num_examples: 33560
- name: unlabeled_train
num_bytes: 2167898085.29
num_examples: 600030
- name: ood_val
num_bytes: 129021135.128
num_examples: 34904
- name: ood_test
num_bytes: 276517018.354
num_examples: 85054
download_size: 2858780601
dataset_size: 3716332745.5119996
---
# Dataset Card for "camelyon17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gmongaras/Imagenet21K_Recaption | gmongaras | "2025-02-12T22:15:29Z" | 12,666 | 2 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-01T03:28:10Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: class
dtype: string
- name: id
dtype: string
- name: recaption
dtype: string
- name: recaption_short
dtype: string
splits:
- name: train
num_bytes: 4921695912
num_examples: 25388
download_size: 4917113842
dataset_size: 4921695912
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset is the entire ImageNet-21K dataset, with about 13 million examples and about 19 thousand classes stored as strings
(for some reason the source only had ~19K classes instead of 21K).
If you want an even larger set of images, I have a recaptioned CC12M and ImageNet dataset: [https://huggingface.co/datasets/gmongaras/CC12M_and_Imagenet21K_Recap](https://huggingface.co/datasets/gmongaras/CC12M_and_Imagenet21K_Recap)
The images are in PNG format. They can be decoded as in the following example:
```
import io
from PIL import Image
Image.open(io.BytesIO(row["image"]))
```
where `row["image"]` contains the raw image bytes.
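For instance, with the `datasets` library (a minimal sketch; depending on how the `image` feature is decoded, `row["image"]` may arrive as raw bytes or as an already-decoded `PIL.Image`):
```python
import io

from datasets import load_dataset
from PIL import Image

# Stream the split so the full dataset is not downloaded up front.
ds = load_dataset("gmongaras/Imagenet21K_Recaption", split="train", streaming=True)
row = next(iter(ds))

img = row["image"]
if isinstance(img, (bytes, bytearray)):  # raw PNG bytes, as described above
    img = Image.open(io.BytesIO(img))
print(img.size, row["class"], row["recaption_short"])
```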
The images were recaptioned following [this dataset](https://huggingface.co/datasets/CaptionEmporium/conceptual-captions-cc12m-llavanext). The recaptioning procedure is as follows:
First, the image along with its alt text is fed into the [Llama 3 LLaVA-NeXT 8B model](https://huggingface.co/lmms-lab/llama3-llava-next-8b). The prompt is:
```
Please make a detailed but succinct caption of this image. If you see text or objects, be sure to describe them in detail along with any other aspects of the foreground and background. If there is any important text in the image, include it in the caption. As a hint, here is the alt-text attribute of the image, which may or may not have to do with the image:
Hint:
\`\`\`
{alt_text}
\`\`\`
```
The captions are then postprocessed to remove repetitive phrases such as "the image is", as well as content that repeats itself multiple times when the model runs into a decoding issue.
Next, the generated caption is fed into the [Llama 3 Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) to shorten it. The following prompt was given to the model:
```
Please take the following image caption and attempt to distill it into a single sentence. Remove any redundant lines or descriptions and make it a maximum of 40 words in length.
\`\`\`
{img_prompt}
\`\`\`
Please only write the caption and no other text.
```
Both the original caption and shortened caption are in the dataset.
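As a rough sketch, the two-step pipeline described above could be wired up like this (the `caption_model` and `instruct_model` callables are hypothetical placeholders for the LLaVA-NeXT and Llama 3 Instruct calls, and the postprocessing rule is only illustrative, not the exact one used):
```python
import re

# The two prompt templates quoted above, with {alt_text} / {img_prompt} slots.
CAPTION_PROMPT = "..."  # detailed-caption prompt (see above)
SHORTEN_PROMPT = "..."  # single-sentence, max-40-word distillation prompt (see above)

def recaption(image, alt_text, caption_model, instruct_model):
    # Step 1: detailed caption from the vision-language model, with the alt text as a hint.
    long_caption = caption_model(image, CAPTION_PROMPT.format(alt_text=alt_text))
    # Step 2: strip boilerplate openers such as "the image is" (illustrative postprocessing).
    long_caption = re.sub(r"(?i)^the image (is|shows)\s*", "", long_caption).strip()
    # Step 3: distil to a single sentence of at most 40 words with the instruct model.
    short_caption = instruct_model(SHORTEN_PROMPT.format(img_prompt=long_caption)).strip()
    return long_caption, short_caption
```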
I will post the code I used for this later. |
ScalingIntelligence/kernelbench-samples | ScalingIntelligence | "2025-02-25T08:12:53Z" | 12,665 | 1 | [
"license:apache-2.0",
"arxiv:2502.10517",
"region:us"
] | null | "2025-02-24T18:24:15Z" | ---
license: apache-2.0
---
## KernelBench Samples
Samples from experiments for KernelBench, described in our [arXiv paper](https://arxiv.org/abs/2502.10517).
Learn more about KernelBench from our
* [Paper](https://arxiv.org/abs/2502.10517)
* [Github Repo](https://github.com/ScalingIntelligence/KernelBench)
The samples are organized as follows:
- `baseline_eval` (Section 4 Baseline)
- `repeated_sampling` (Section 5.1.1 Repeated Sampling)
- `iterative_refinement` (Section 5.1.2 Iterative Refinement of Generations)
Within each folder, we organize the results by `/level/model/problem_{id}/sample_{id}`.
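For example, one way to walk this layout in a local checkout (a sketch; the download pattern and the JSON schema are assumptions):
```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Fetch just one experiment folder from this dataset repo.
local_dir = snapshot_download(
    "ScalingIntelligence/kernelbench-samples",
    repo_type="dataset",
    allow_patterns="baseline_eval/*",
)

for json_path in sorted(Path(local_dir, "baseline_eval").rglob("*.json")):
    # Path components follow level/model/problem_{id}/sample_{id}.
    record = json.loads(json_path.read_text())
    # Assuming each record is a JSON object; print its keys to see the real schema.
    print(json_path.relative_to(local_dir), list(record))
```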
The innermost `.json` file contains the generated kernel and its associated eval result. |
awettig/RedPajama-combined-15B-6K-llama | awettig | "2023-11-29T16:45:09Z" | 12,619 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-11-29T05:23:32Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 1422094968
num_examples: 17802
- name: train
num_bytes: 192480977304
num_examples: 2409506
download_size: 577654462
dataset_size: 193903072272
---
# Dataset Card for "RedPajama-combined-15B-6K-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cardiffnlp/tweet_topic_multilingual | cardiffnlp | "2024-11-25T10:54:58Z" | 12,616 | 2 | [
"task_categories:text-classification",
"multilinguality:monolingual",
"language:en",
"language:ja",
"language:el",
"language:es",
"license:other",
"size_categories:1K<n<10K",
"arxiv:2410.03075",
"region:us"
] | [
"text-classification"
] | "2023-10-04T18:49:58Z" | ---
configs:
- config_name: default
data_files:
- split: train_en
path: "dataset/en/en_train.jsonl"
language:
- en
- ja
- el
- es
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text-classification
pretty_name: xtopic
---
# Dataset Card for "cardiffnlp/tweet_topic_multilingual"
## Dataset Description
- **Dataset:** X-Topic
- **Domain:** X (Twitter)
- **Number of Class:** 19
### Dataset Summary
This is the official repository of X-Topic ([Multilingual Topic Classification in X: Dataset and Analysis](https://arxiv.org/abs/2410.03075), EMNLP 2024), a topic classification dataset based on X (formerly Twitter), featuring 19 topic labels.
The classification task is multi-label, with tweets available in four languages: English, Japanese, Spanish, and Greek.
The dataset comprises 4,000 tweets (1,000 per language), collected between September 2021 and August 2022.
The dataset uses the same taxonomy as [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi).
## Dataset Structure
### Data Splits
The dataset includes the following splits:
- **en**: English
- **es**: Spanish
- **ja**: Japanese
- **gr**: Greek
- **en_2022**: English data from 2022 (TweetTopic)
- **mix**: Mixed-language data
- **mix_2022**: Mixed-language data including English data from 2022 (TweetTopic)
- **Cross-validation splits:**
- **en_cross_validation_0** to **en_cross_validation_4**: English cross-validation splits
- **es_cross_validation_0** to **es_cross_validation_4**: Spanish cross-validation splits
- **ja_cross_validation_0** to **ja_cross_validation_4**: Japanese cross-validation splits
- **gr_cross_validation_0** to **gr_cross_validation_4**: Greek cross-validation splits
### Data Instances
An example of `train` looks as follows.
```python
{
"id": 1470030676816797696,
"text": "made a matcha latte, black tea and green juice until i break my fast at 1!! my body and skin are thanking me",
"label": [0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"label_name": ["Diaries & Daily Life", "Fitness & Health", "Food & Dining"],
"label_name_flatten": "Diaries & Daily Life, Fitness & Health, Food & Dining"
}
```
### Labels
| <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> |
|-----------------------------|---------------------|----------------------------|--------------------------|
| 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports |
| 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure |
| 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life |
| 4: family | 9: gaming | 14: relationships | |
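As a rough sketch, the binary `label` vector can be mapped back to these names (the label list below simply restates the taxonomy above; exact loading details may differ):
```python
from datasets import load_dataset

LABELS = [
    "arts_&_culture", "business_&_entrepreneurs", "celebrity_&_pop_culture",
    "diaries_&_daily_life", "family", "fashion_&_style", "film_tv_&_video",
    "fitness_&_health", "food_&_dining", "gaming", "learning_&_educational",
    "music", "news_&_social_concern", "other_hobbies", "relationships",
    "science_&_technology", "sports", "travel_&_adventure", "youth_&_student_life",
]

# Load the English training split defined in the default config.
ds = load_dataset("cardiffnlp/tweet_topic_multilingual", split="train_en")
example = ds[0]
names = [LABELS[i] for i, flag in enumerate(example["label"]) if flag == 1]
print(example["text"], names)
```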
Annotation instructions for English can be found [here](https://docs.google.com/document/d/1IaIXZYof3iCLLxyBdu_koNmjy--zqsuOmxQ2vOxYd_g/edit?usp=sharing).
## Citation Information
```
@inproceedings{antypas-etal-2024-multilingual,
title = "Multilingual Topic Classification in {X}: Dataset and Analysis",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Barbieri, Francesco and
Camacho-Collados, Jose",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.1123",
pages = "20136--20152",
abstract = "In the dynamic realm of social media, diverse topics are discussed daily, transcending linguistic boundaries. However, the complexities of understanding and categorising this content across various languages remain an important challenge with traditional techniques like topic modelling often struggling to accommodate this multilingual diversity. In this paper, we introduce X-Topic, a multilingual dataset featuring content in four distinct languages (English, Spanish, Japanese, and Greek), crafted for the purpose of tweet topic classification. Our dataset includes a wide range of topics, tailored for social media content, making it a valuable resource for scientists and professionals working on cross-linguistic analysis, the development of robust multilingual models, and computational scientists studying online dialogue. Finally, we leverage X-Topic to perform a comprehensive cross-linguistic and multilingual analysis, and compare the capabilities of current general- and domain-specific language models.",
}
```
|
spdenisov/tokenized_udtree | spdenisov | "2023-03-28T21:56:12Z" | 12,614 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-03-28T21:50:04Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: cs_0
num_bytes: 73985244
num_examples: 102133
- name: cs_1
num_bytes: 95459594
num_examples: 102133
- name: cs_2
num_bytes: 95354064
num_examples: 102133
- name: cs_3
num_bytes: 128817619
num_examples: 102133
- name: cs_4
num_bytes: 236925044
num_examples: 102133
- name: cs_5
num_bytes: 115688159
num_examples: 102133
- name: cs_6
num_bytes: 132404489
num_examples: 102133
- name: tr_0
num_bytes: 28666902
num_examples: 60089
- name: tr_1
num_bytes: 31887742
num_examples: 60089
- name: tr_2
num_bytes: 31749302
num_examples: 60089
- name: tr_3
num_bytes: 28498032
num_examples: 60089
- name: tr_4
num_bytes: 57177672
num_examples: 60089
- name: tr_5
num_bytes: 37804587
num_examples: 60089
- name: tr_6
num_bytes: 28280762
num_examples: 60089
- name: ar_0
num_bytes: 32848442
num_examples: 21864
- name: ar_1
num_bytes: 49955197
num_examples: 21864
- name: ar_2
num_bytes: 49285292
num_examples: 21864
- name: ar_3
num_bytes: 69585617
num_examples: 21864
- name: ar_4
num_bytes: 91649737
num_examples: 21864
- name: ar_5
num_bytes: 59303592
num_examples: 21864
- name: ar_6
num_bytes: 50935047
num_examples: 21864
- name: de_0
num_bytes: 112997417
num_examples: 166849
- name: de_1
num_bytes: 149332477
num_examples: 166849
- name: de_2
num_bytes: 157628127
num_examples: 166849
- name: de_3
num_bytes: 155444887
num_examples: 166849
- name: de_4
num_bytes: 309419752
num_examples: 166849
- name: de_5
num_bytes: 191783977
num_examples: 166849
- name: de_6
num_bytes: 138689312
num_examples: 166849
- name: fr_0
num_bytes: 27905013
num_examples: 34921
- name: fr_1
num_bytes: 41237113
num_examples: 34921
- name: fr_2
num_bytes: 45655098
num_examples: 34921
- name: fr_3
num_bytes: 39973853
num_examples: 34921
- name: fr_4
num_bytes: 76420558
num_examples: 34921
- name: fr_5
num_bytes: 56197173
num_examples: 34921
- name: fr_6
num_bytes: 39938223
num_examples: 34921
- name: no_0
num_bytes: 19584526
num_examples: 33282
- name: no_1
num_bytes: 25823376
num_examples: 33282
- name: no_2
num_bytes: 26954416
num_examples: 33282
- name: no_3
num_bytes: 23459636
num_examples: 33282
- name: no_4
num_bytes: 43762856
num_examples: 33282
- name: no_5
num_bytes: 32578281
num_examples: 33282
- name: no_6
num_bytes: 23459636
num_examples: 33282
- name: pt_0
num_bytes: 12627085
num_examples: 30720
- name: pt_1
num_bytes: 16475005
num_examples: 30720
- name: pt_2
num_bytes: 17295815
num_examples: 30720
- name: pt_3
num_bytes: 16917200
num_examples: 30720
- name: pt_4
num_bytes: 24168495
num_examples: 30720
- name: pt_5
num_bytes: 20520155
num_examples: 30720
- name: pt_6
num_bytes: 15115165
num_examples: 30720
- name: es_0
num_bytes: 27551907
num_examples: 28474
- name: es_1
num_bytes: 39391152
num_examples: 28474
- name: es_2
num_bytes: 42349787
num_examples: 28474
- name: es_3
num_bytes: 43743597
num_examples: 28474
- name: es_4
num_bytes: 69878787
num_examples: 28474
- name: es_5
num_bytes: 51203677
num_examples: 28474
- name: es_6
num_bytes: 46914367
num_examples: 28474
- name: ru_0
num_bytes: 57566900
num_examples: 89525
- name: ru_1
num_bytes: 74853550
num_examples: 89525
- name: ru_2
num_bytes: 76555950
num_examples: 89525
- name: ru_3
num_bytes: 67072565
num_examples: 89525
- name: ru_4
num_bytes: 155012405
num_examples: 89525
- name: ru_5
num_bytes: 92396515
num_examples: 89525
- name: ru_6
num_bytes: 98333345
num_examples: 89525
- name: en_0
num_bytes: 14945668
num_examples: 28686
- name: en_1
num_bytes: 20836733
num_examples: 28686
- name: en_2
num_bytes: 23313373
num_examples: 28686
- name: en_3
num_bytes: 21978133
num_examples: 28686
- name: en_4
num_bytes: 32732303
num_examples: 28686
- name: en_5
num_bytes: 28539183
num_examples: 28686
- name: en_6
num_bytes: 28399343
num_examples: 28686
- name: fi_0
num_bytes: 14729969
num_examples: 27198
- name: fi_1
num_bytes: 17656509
num_examples: 27198
- name: fi_2
num_bytes: 16915489
num_examples: 27198
- name: fi_3
num_bytes: 18732354
num_examples: 27198
- name: fi_4
num_bytes: 29894674
num_examples: 27198
- name: fi_5
num_bytes: 20079089
num_examples: 27198
- name: fi_6
num_bytes: 18874279
num_examples: 27198
- name: gd_0
num_bytes: 2829948
num_examples: 3541
- name: gd_1
num_bytes: 3700318
num_examples: 3541
- name: gd_2
num_bytes: 3798313
num_examples: 3541
- name: gd_3
num_bytes: 3907648
num_examples: 3541
- name: gd_4
num_bytes: 5359963
num_examples: 3541
- name: gd_5
num_bytes: 4693368
num_examples: 3541
- name: gd_6
num_bytes: 3383253
num_examples: 3541
- name: gv_0
num_bytes: 456221
num_examples: 1172
- name: gv_1
num_bytes: 597391
num_examples: 1172
- name: gv_2
num_bytes: 609501
num_examples: 1172
- name: gv_3
num_bytes: 542486
num_examples: 1172
- name: gv_4
num_bytes: 785231
num_examples: 1172
- name: gv_5
num_bytes: 729026
num_examples: 1172
- name: gv_6
num_bytes: 542486
num_examples: 1172
- name: ga_0
num_bytes: 3928820
num_examples: 4005
- name: ga_1
num_bytes: 5021230
num_examples: 4005
- name: ga_2
num_bytes: 5059580
num_examples: 4005
- name: ga_3
num_bytes: 4843745
num_examples: 4005
- name: ga_4
num_bytes: 9085760
num_examples: 4005
- name: ga_5
num_bytes: 6197075
num_examples: 4005
- name: ga_6
num_bytes: 4483365
num_examples: 4005
- name: cop_0
num_bytes: 4660032
num_examples: 1379
- name: cop_1
num_bytes: 5726842
num_examples: 1379
- name: cop_2
num_bytes: 4508942
num_examples: 1379
- name: cop_3
num_bytes: 4496787
num_examples: 1379
- name: cop_4
num_bytes: 5425137
num_examples: 1379
- name: cop_5
num_bytes: 4907442
num_examples: 1379
- name: cop_6
num_bytes: 4284382
num_examples: 1379
- name: it_0
num_bytes: 17989232
num_examples: 21724
- name: it_1
num_bytes: 25839627
num_examples: 21724
- name: it_2
num_bytes: 27448052
num_examples: 21724
- name: it_3
num_bytes: 24875027
num_examples: 21724
- name: it_4
num_bytes: 43731272
num_examples: 21724
- name: it_5
num_bytes: 33091747
num_examples: 21724
- name: it_6
num_bytes: 30955017
num_examples: 21724
- name: cy_0
num_bytes: 907518
num_examples: 1111
- name: cy_1
num_bytes: 1180383
num_examples: 1111
- name: cy_2
num_bytes: 1192068
num_examples: 1111
- name: cy_3
num_bytes: 1123428
num_examples: 1111
- name: cy_4
num_bytes: 1834888
num_examples: 1111
- name: cy_5
num_bytes: 1439843
num_examples: 1111
- name: cy_6
num_bytes: 1055223
num_examples: 1111
- name: hu_0
num_bytes: 858340
num_examples: 910
- name: hu_1
num_bytes: 1088085
num_examples: 910
- name: hu_2
num_bytes: 1086220
num_examples: 910
- name: hu_3
num_bytes: 957490
num_examples: 910
- name: hu_4
num_bytes: 1964920
num_examples: 910
- name: hu_5
num_bytes: 1370660
num_examples: 910
- name: hu_6
num_bytes: 957490
num_examples: 910
- name: zh_0
num_bytes: 9051347
num_examples: 7994
- name: zh_1
num_bytes: 12537582
num_examples: 7994
- name: zh_2
num_bytes: 11419717
num_examples: 7994
- name: zh_3
num_bytes: 10888407
num_examples: 7994
- name: zh_4
num_bytes: 10558847
num_examples: 7994
- name: zh_5
num_bytes: 13867342
num_examples: 7994
- name: zh_6
num_bytes: 10167967
num_examples: 7994
- name: hy_0
num_bytes: 5120790
num_examples: 3200
- name: hy_1
num_bytes: 5762195
num_examples: 3200
- name: hy_2
num_bytes: 4712195
num_examples: 3200
- name: hy_3
num_bytes: 4260805
num_examples: 3200
- name: hy_4
num_bytes: 8546900
num_examples: 3200
- name: hy_5
num_bytes: 5442440
num_examples: 3200
- name: hy_6
num_bytes: 4260805
num_examples: 3200
- name: ro_0
num_bytes: 6894274
num_examples: 8043
- name: ro_1
num_bytes: 9156564
num_examples: 8043
- name: ro_2
num_bytes: 9493574
num_examples: 8043
- name: ro_3
num_bytes: 10830604
num_examples: 8043
- name: ro_4
num_bytes: 20320209
num_examples: 8043
- name: ro_5
num_bytes: 11507314
num_examples: 8043
- name: ro_6
num_bytes: 8300564
num_examples: 8043
- name: da_0
num_bytes: 2963139
num_examples: 4383
- name: da_1
num_bytes: 3945104
num_examples: 4383
- name: da_2
num_bytes: 4115634
num_examples: 4383
- name: da_3
num_bytes: 3583269
num_examples: 4383
- name: da_4
num_bytes: 7089004
num_examples: 4383
- name: da_5
num_bytes: 4981724
num_examples: 4383
- name: da_6
num_bytes: 3583269
num_examples: 4383
- name: nl_0
num_bytes: 6741817
num_examples: 12289
- name: nl_1
num_bytes: 8989392
num_examples: 12289
- name: nl_2
num_bytes: 9389757
num_examples: 12289
- name: nl_3
num_bytes: 16004832
num_examples: 12289
- name: nl_4
num_bytes: 12089687
num_examples: 12289
- name: nl_5
num_bytes: 11410547
num_examples: 12289
- name: nl_6
num_bytes: 12631912
num_examples: 12289
download_size: 934434422
dataset_size: 5264208717
---
# Dataset Card for "tokenized_udtree"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mrdbourke/food_vision_199_classes | mrdbourke | "2022-11-27T23:00:30Z" | 12,599 | 0 | [
"modality:image",
"region:us"
] | null | "2022-11-23T23:59:42Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': almond_butter
'1': almonds
'2': apple
'3': apricot
'4': asparagus
'5': avocado
'6': bacon
'7': bacon_and_egg_burger
'8': bagel
'9': baklava
'10': banana
'11': banana_bread
'12': barbecue_sauce
'13': beans
'14': beef
'15': beef_curry
'16': beef_mince
'17': beef_stir_fry
'18': beer
'19': beetroot
'20': biltong
'21': blackberries
'22': blueberries
'23': bok_choy
'24': bread
'25': broccoli
'26': broccolini
'27': brownie
'28': brussel_sprouts
'29': burrito
'30': butter
'31': cabbage
'32': calamari
'33': candy
'34': capsicum
'35': carrot
'36': cashews
'37': cauliflower
'38': celery
'39': cheese
'40': cheeseburger
'41': cherries
'42': chicken_breast
'43': chicken_thighs
'44': chicken_wings
'45': chilli
'46': chimichurri
'47': chocolate
'48': chocolate_cake
'49': coconut
'50': coffee
'51': coleslaw
'52': cookies
'53': coriander
'54': corn
'55': corn_chips
'56': cream
'57': croissant
'58': crumbed_chicken
'59': cucumber
'60': cupcake
'61': daikon_radish
'62': dates
'63': donuts
'64': dragonfruit
'65': eggplant
'66': eggs
'67': enoki_mushroom
'68': fennel
'69': figs
'70': french_toast
'71': fried_rice
'72': fries
'73': fruit_juice
'74': garlic
'75': garlic_bread
'76': ginger
'77': goji_berries
'78': granola
'79': grapefruit
'80': grapes
'81': green_beans
'82': green_onion
'83': guacamole
'84': guava
'85': gyoza
'86': ham
'87': honey
'88': hot_chocolate
'89': ice_coffee
'90': ice_cream
'91': iceberg_lettuce
'92': jerusalem_artichoke
'93': kale
'94': karaage_chicken
'95': kimchi
'96': kiwi_fruit
'97': lamb_chops
'98': leek
'99': lemon
'100': lentils
'101': lettuce
'102': lime
'103': mandarin
'104': mango
'105': maple_syrup
'106': mashed_potato
'107': mayonnaise
'108': milk
'109': miso_soup
'110': mushrooms
'111': nectarines
'112': noodles
'113': nuts
'114': olive_oil
'115': olives
'116': omelette
'117': onion
'118': orange
'119': orange_juice
'120': oysters
'121': pain_au_chocolat
'122': pancakes
'123': papaya
'124': parsley
'125': parsnips
'126': passionfruit
'127': pasta
'128': pawpaw
'129': peach
'130': pear
'131': peas
'132': pickles
'133': pineapple
'134': pizza
'135': plum
'136': pomegranate
'137': popcorn
'138': pork_belly
'139': pork_chop
'140': pork_loins
'141': porridge
'142': potato_bake
'143': potato_chips
'144': potato_scallop
'145': potatoes
'146': prawns
'147': pumpkin
'148': radish
'149': ramen
'150': raspberries
'151': red_onion
'152': red_wine
'153': rhubarb
'154': rice
'155': roast_beef
'156': roast_pork
'157': roast_potatoes
'158': rockmelon
'159': rosemary
'160': salad
'161': salami
'162': salmon
'163': salsa
'164': salt
'165': sandwich
'166': sardines
'167': sausage_roll
'168': sausages
'169': scrambled_eggs
'170': seaweed
'171': shallots
'172': snow_peas
'173': soda
'174': soy_sauce
'175': spaghetti_bolognese
'176': spinach
'177': sports_drink
'178': squash
'179': starfruit
'180': steak
'181': strawberries
'182': sushi
'183': sweet_potato
'184': tacos
'185': tamarillo
'186': taro
'187': tea
'188': toast
'189': tofu
'190': tomato
'191': tomato_chutney
'192': tomato_sauce
'193': turnip
'194': watermelon
'195': white_onion
'196': white_wine
'197': yoghurt
'198': zucchini
splits:
- name: train
num_bytes: 2973286
num_examples: 19747
download_size: 6202707
dataset_size: 2973286
---
|
autogluon/chronos_datasets | autogluon | "2025-03-18T10:38:52Z" | 12,593 | 35 | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"source_datasets:original",
"license:other",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2403.07815",
"region:us"
] | [
"time-series-forecasting"
] | "2024-06-22T15:59:58Z" | ---
annotations_creators:
- no-annotation
license: other
source_datasets:
- original
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
pretty_name: Chronos datasets
dataset_info:
- config_name: dominick
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: im_0
dtype: int64
splits:
- name: train
num_bytes: 477140250
num_examples: 100014
download_size: 42290010
dataset_size: 477140250
- config_name: electricity_15min
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: consumption_kW
sequence: float64
splits:
- name: train
num_bytes: 670989988
num_examples: 370
download_size: 284497403
dataset_size: 670989988
license: CC BY 4.0
homepage: https://archive.ics.uci.edu/dataset/321/electricityloaddiagrams20112014
- config_name: ercot
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ns]
- name: target
sequence: float32
splits:
- name: train
num_examples: 8
download_size: 14504261
- config_name: exchange_rate
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
splits:
- name: train
num_examples: 8
download_size: 401501
license: MIT
homepage: https://github.com/laiguokun/multivariate-time-series-data/tree/master/exchange_rate
- config_name: m4_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 160504176
num_examples: 4227
download_size: 65546675
dataset_size: 160504176
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_hourly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 5985544
num_examples: 414
download_size: 1336971
dataset_size: 5985544
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_monthly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 181372969
num_examples: 48000
download_size: 52772258
dataset_size: 181372969
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_quarterly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 39205397
num_examples: 24000
download_size: 13422579
dataset_size: 39205397
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_weekly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 5955806
num_examples: 359
download_size: 2556691
dataset_size: 5955806
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m4_yearly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: category
dtype: string
splits:
- name: train
num_bytes: 14410042
num_examples: 23000
download_size: 5488601
dataset_size: 14410042
homepage: https://github.com/Mcompetitions/M4-methods
- config_name: m5
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: item_id
dtype: string
- name: target
sequence: float32
- name: dept_id
dtype: string
- name: cat_id
dtype: string
- name: store_id
dtype: string
- name: state_id
dtype: string
splits:
- name: train
num_bytes: 574062630
num_examples: 30490
download_size: 78063286
dataset_size: 574062630
homepage: https://www.kaggle.com/competitions/m5-forecasting-accuracy/rules
- config_name: mexico_city_bikes
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 618999406
num_examples: 494
download_size: 103206946
dataset_size: 618999406
homepage: https://ecobici.cdmx.gob.mx/en/open-data/
- config_name: monash_australian_electricity
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 18484319
num_examples: 5
download_size: 16856156
dataset_size: 18484319
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_car_parts
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 2232790
num_examples: 2674
download_size: 70278
dataset_size: 2232790
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_cif_2016
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 115096
num_examples: 72
download_size: 70876
dataset_size: 115096
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_covid_deaths
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 907326
num_examples: 266
download_size: 58957
dataset_size: 907326
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_electricity_hourly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 135103443
num_examples: 321
download_size: 31139117
dataset_size: 135103443
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_electricity_weekly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 807315
num_examples: 321
download_size: 333563
dataset_size: 807315
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_fred_md
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 1248369
num_examples: 107
download_size: 412207
dataset_size: 1248369
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_hospital
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: int64
splits:
- name: train
num_examples: 767
download_size: 117038
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_kdd_cup_2018
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: city
dtype: string
- name: station
dtype: string
- name: measurement
dtype: string
splits:
- name: train
num_bytes: 47091540
num_examples: 270
download_size: 8780105
dataset_size: 47091540
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_london_smart_meters
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 2664567976
num_examples: 5560
download_size: 597389119
dataset_size: 2664567976
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m1_monthly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 907691
num_examples: 617
download_size: 244372
dataset_size: 907691
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m1_quarterly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 162961
num_examples: 203
download_size: 48439
dataset_size: 162961
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m1_yearly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 75679
num_examples: 181
download_size: 30754
dataset_size: 75679
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m3_monthly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 2708124
num_examples: 1428
download_size: 589699
dataset_size: 2708124
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m3_quarterly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 606428
num_examples: 756
download_size: 188543
dataset_size: 606428
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_m3_yearly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 305359
num_examples: 645
download_size: 100184
dataset_size: 305359
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_nn5_weekly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
splits:
- name: train
num_examples: 111
download_size: 64620
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_pedestrian_counts
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: int64
splits:
- name: train
num_bytes: 50118790
num_examples: 66
download_size: 12377357
dataset_size: 50118790
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_rideshare
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: source_location
dtype: string
- name: provider_name
dtype: string
- name: provider_service
dtype: string
- name: price_min
sequence: float64
- name: price_mean
sequence: float64
- name: price_max
sequence: float64
- name: distance_min
sequence: float64
- name: distance_mean
sequence: float64
- name: distance_max
sequence: float64
- name: surge_min
sequence: float64
- name: surge_mean
sequence: float64
- name: surge_max
sequence: float64
- name: api_calls
sequence: float64
- name: temp
sequence: float64
- name: rain
sequence: float64
- name: humidity
sequence: float64
- name: clouds
sequence: float64
- name: wind
sequence: float64
splits:
- name: train
num_bytes: 10819910
num_examples: 156
download_size: 781873
dataset_size: 10819910
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_saugeenday
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: T1
sequence: float64
splits:
- name: train
num_bytes: 379875
num_examples: 1
download_size: 222678
dataset_size: 379875
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_temperature_rain
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: t_mean
sequence: float64
- name: prcp_sum
sequence: float64
- name: t_max
sequence: float64
- name: t_min
sequence: float64
- name: fcst_0_dailypop
sequence: float64
- name: fcst_0_dailypop1
sequence: float64
- name: fcst_0_dailypop10
sequence: float64
- name: fcst_0_dailypop15
sequence: float64
- name: fcst_0_dailypop25
sequence: float64
- name: fcst_0_dailypop5
sequence: float64
- name: fcst_0_dailypop50
sequence: float64
- name: fcst_0_dailyprecip
sequence: float64
- name: fcst_0_dailyprecip10pct
sequence: float64
- name: fcst_0_dailyprecip25pct
sequence: float64
- name: fcst_0_dailyprecip50pct
sequence: float64
- name: fcst_0_dailyprecip75pct
sequence: float64
- name: fcst_1_dailypop
sequence: float64
- name: fcst_1_dailypop1
sequence: float64
- name: fcst_1_dailypop10
sequence: float64
- name: fcst_1_dailypop15
sequence: float64
- name: fcst_1_dailypop25
sequence: float64
- name: fcst_1_dailypop5
sequence: float64
- name: fcst_1_dailypop50
sequence: float64
- name: fcst_1_dailyprecip
sequence: float64
- name: fcst_1_dailyprecip10pct
sequence: float64
- name: fcst_1_dailyprecip25pct
sequence: float64
- name: fcst_1_dailyprecip50pct
sequence: float64
- name: fcst_1_dailyprecip75pct
sequence: float64
- name: fcst_2_dailypop
sequence: float64
- name: fcst_2_dailypop1
sequence: float64
- name: fcst_2_dailypop10
sequence: float64
- name: fcst_2_dailypop15
sequence: float64
- name: fcst_2_dailypop25
sequence: float64
- name: fcst_2_dailypop5
sequence: float64
- name: fcst_2_dailypop50
sequence: float64
- name: fcst_2_dailyprecip
sequence: float64
- name: fcst_2_dailyprecip10pct
sequence: float64
- name: fcst_2_dailyprecip25pct
sequence: float64
- name: fcst_2_dailyprecip50pct
sequence: float64
- name: fcst_2_dailyprecip75pct
sequence: float64
- name: fcst_3_dailypop
sequence: float64
- name: fcst_3_dailypop1
sequence: float64
- name: fcst_3_dailypop10
sequence: float64
- name: fcst_3_dailypop15
sequence: float64
- name: fcst_3_dailypop25
sequence: float64
- name: fcst_3_dailypop5
sequence: float64
- name: fcst_3_dailypop50
sequence: float64
- name: fcst_3_dailyprecip
sequence: float64
- name: fcst_3_dailyprecip10pct
sequence: float64
- name: fcst_3_dailyprecip25pct
sequence: float64
- name: fcst_3_dailyprecip50pct
sequence: float64
- name: fcst_3_dailyprecip75pct
sequence: float64
- name: fcst_4_dailypop
sequence: float64
- name: fcst_4_dailypop1
sequence: float64
- name: fcst_4_dailypop10
sequence: float64
- name: fcst_4_dailypop15
sequence: float64
- name: fcst_4_dailypop25
sequence: float64
- name: fcst_4_dailypop5
sequence: float64
- name: fcst_4_dailypop50
sequence: float64
- name: fcst_4_dailyprecip
sequence: float64
- name: fcst_4_dailyprecip10pct
sequence: float64
- name: fcst_4_dailyprecip25pct
sequence: float64
- name: fcst_4_dailyprecip50pct
sequence: float64
- name: fcst_4_dailyprecip75pct
sequence: float64
- name: fcst_5_dailypop
sequence: float64
- name: fcst_5_dailypop1
sequence: float64
- name: fcst_5_dailypop10
sequence: float64
- name: fcst_5_dailypop15
sequence: float64
- name: fcst_5_dailypop25
sequence: float64
- name: fcst_5_dailypop5
sequence: float64
- name: fcst_5_dailypop50
sequence: float64
- name: fcst_5_dailyprecip
sequence: float64
- name: fcst_5_dailyprecip10pct
sequence: float64
- name: fcst_5_dailyprecip25pct
sequence: float64
- name: fcst_5_dailyprecip50pct
sequence: float64
- name: fcst_5_dailyprecip75pct
sequence: float64
splits:
- name: train
num_bytes: 188598927
num_examples: 422
download_size: 44967856
dataset_size: 188598927
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_tourism_monthly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 1755434
num_examples: 366
download_size: 334951
dataset_size: 1755434
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_tourism_quarterly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 688817
num_examples: 427
download_size: 177407
dataset_size: 688817
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_tourism_yearly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 213954
num_examples: 518
download_size: 81479
dataset_size: 213954
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_traffic
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 241983226
num_examples: 862
download_size: 52748547
dataset_size: 241983226
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: monash_weather
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 688598539
num_examples: 3010
download_size: 133164027
dataset_size: 688598539
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: nn5
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
splits:
- name: train
num_examples: 111
download_size: 203096
homepage: http://www.neural-forecasting-competition.com/downloads/NN5/datasets/download.htm
- config_name: solar
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: power_mw
sequence: float64
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: capacity_mw
dtype: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 8689093932
num_examples: 5166
download_size: 1507924920
dataset_size: 8689093932
homepage: https://www.nrel.gov/grid/solar-power-data.html
- config_name: solar_1h
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: power_mw
sequence: float64
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: capacity_mw
dtype: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 724361772
num_examples: 5166
download_size: 124515417
dataset_size: 724361772
homepage: https://www.nrel.gov/grid/solar-power-data.html
- config_name: taxi_1h
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: subset
dtype: string
- name: lat
dtype: float64
- name: lng
dtype: float64
splits:
- name: train
num_bytes: 28832500
num_examples: 2428
download_size: 2265297
dataset_size: 28832500
license: Apache 2.0
homepage: https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets
- config_name: taxi_30min
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: subset
dtype: string
- name: lat
dtype: float64
- name: lng
dtype: float64
splits:
- name: train
num_bytes: 57560596
num_examples: 2428
download_size: 4541244
dataset_size: 57560596
license: Apache 2.0
homepage: https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets
- config_name: training_corpus_kernel_synth_1m
features:
- name: target
sequence: float64
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
splits:
- name: train
num_examples: 1000000
download_size: 8313239368
- config_name: training_corpus_tsmixup_10m
features:
- name: target
sequence: float64
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
splits:
- name: train
num_examples: 10000000
download_size: 82189589906
- config_name: uber_tlc_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: int64
splits:
- name: train
num_examples: 262
download_size: 84747
homepage: https://github.com/fivethirtyeight/uber-tlc-foil-response
- config_name: uber_tlc_hourly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: int64
splits:
- name: train
num_examples: 262
download_size: 1878515
homepage: https://github.com/fivethirtyeight/uber-tlc-foil-response
- config_name: ushcn_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: state
dtype: string
- name: coop_id
dtype: int64
- name: PRCP
sequence: float64
- name: SNOW
sequence: float64
- name: SNWD
sequence: float64
- name: TMAX
sequence: float64
- name: TMIN
sequence: float64
splits:
- name: train
num_bytes: 2259905202
num_examples: 1218
download_size: 221089890
dataset_size: 2259905202
homepage: https://data.ess-dive.lbl.gov/portals/CDIAC
- config_name: weatherbench_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: level
dtype: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 39510157312
num_examples: 225280
download_size: 18924392742
dataset_size: 39510157312
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_10m_u_component_of_wind
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 7292845757
dataset_size: 8617472000
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_10m_v_component_of_wind
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 7292352508
dataset_size: 8617472000
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_2m_temperature
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 7276396852
dataset_size: 8617453568
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_geopotential
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 87305564613
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_potential_vorticity
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 92426240043
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_relative_humidity
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94728788382
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_specific_humidity
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 85139896451
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_temperature
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94081539079
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_toa_incident_solar_radiation
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 6057953007
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_total_cloud_cover
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 6628258398
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_total_precipitation
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: float64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 2048
download_size: 6473160755
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_u_component_of_wind
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94801498563
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_v_component_of_wind
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94800557482
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_hourly_vorticity
features:
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: target
sequence: float32
- name: level
dtype: int64
- name: timestamp
sequence: timestamp[ms]
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 26624
download_size: 94720960560
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: weatherbench_weekly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float32
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: level
dtype: float64
- name: subset
dtype: string
splits:
- name: train
num_bytes: 5656029184
num_examples: 225280
download_size: 2243012083
dataset_size: 5656029184
license: MIT
homepage: https://github.com/pangeo-data/WeatherBench
- config_name: wiki_daily_100k
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
- name: page_name
dtype: string
splits:
- name: train
num_bytes: 4389782678
num_examples: 100000
download_size: 592554033
dataset_size: 4389782678
license: CC0
homepage: https://dumps.wikimedia.org/other/pageviews/readme.html
- config_name: wind_farms_daily
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 1919187
num_examples: 337
download_size: 598834
dataset_size: 1919187
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
- config_name: wind_farms_hourly
features:
- name: id
dtype: string
- name: timestamp
sequence: timestamp[ms]
- name: target
sequence: float64
splits:
- name: train
num_bytes: 45917027
num_examples: 337
download_size: 12333116
dataset_size: 45917027
license: CC BY 4.0
homepage: https://zenodo.org/communities/forecasting
configs:
- config_name: dominick
data_files:
- split: train
path: dominick/train-*
- config_name: electricity_15min
data_files:
- split: train
path: electricity_15min/train-*
- config_name: ercot
data_files:
- split: train
path: ercot/train-*
- config_name: exchange_rate
data_files:
- split: train
path: exchange_rate/train-*
- config_name: m4_daily
data_files:
- split: train
path: m4_daily/train-*
- config_name: m4_hourly
data_files:
- split: train
path: m4_hourly/train-*
- config_name: m4_monthly
data_files:
- split: train
path: m4_monthly/train-*
- config_name: m4_quarterly
data_files:
- split: train
path: m4_quarterly/train-*
- config_name: m4_weekly
data_files:
- split: train
path: m4_weekly/train-*
- config_name: m4_yearly
data_files:
- split: train
path: m4_yearly/train-*
- config_name: m5
data_files:
- split: train
path: m5/train-*
- config_name: mexico_city_bikes
data_files:
- split: train
path: mexico_city_bikes/train-*
- config_name: monash_australian_electricity
data_files:
- split: train
path: monash_australian_electricity/train-*
- config_name: monash_car_parts
data_files:
- split: train
path: monash_car_parts/train-*
- config_name: monash_cif_2016
data_files:
- split: train
path: monash_cif_2016/train-*
- config_name: monash_covid_deaths
data_files:
- split: train
path: monash_covid_deaths/train-*
- config_name: monash_electricity_hourly
data_files:
- split: train
path: monash_electricity_hourly/train-*
- config_name: monash_electricity_weekly
data_files:
- split: train
path: monash_electricity_weekly/train-*
- config_name: monash_fred_md
data_files:
- split: train
path: monash_fred_md/train-*
- config_name: monash_hospital
data_files:
- split: train
path: monash_hospital/train-*
- config_name: monash_kdd_cup_2018
data_files:
- split: train
path: monash_kdd_cup_2018/train-*
- config_name: monash_london_smart_meters
data_files:
- split: train
path: monash_london_smart_meters/train-*
- config_name: monash_m1_monthly
data_files:
- split: train
path: monash_m1_monthly/train-*
- config_name: monash_m1_quarterly
data_files:
- split: train
path: monash_m1_quarterly/train-*
- config_name: monash_m1_yearly
data_files:
- split: train
path: monash_m1_yearly/train-*
- config_name: monash_m3_monthly
data_files:
- split: train
path: monash_m3_monthly/train-*
- config_name: monash_m3_quarterly
data_files:
- split: train
path: monash_m3_quarterly/train-*
- config_name: monash_m3_yearly
data_files:
- split: train
path: monash_m3_yearly/train-*
- config_name: monash_nn5_weekly
data_files:
- split: train
path: monash_nn5_weekly/train-*
- config_name: monash_pedestrian_counts
data_files:
- split: train
path: monash_pedestrian_counts/train-*
- config_name: monash_rideshare
data_files:
- split: train
path: monash_rideshare/train-*
- config_name: monash_saugeenday
data_files:
- split: train
path: monash_saugeenday/train-*
- config_name: monash_temperature_rain
data_files:
- split: train
path: monash_temperature_rain/train-*
- config_name: monash_tourism_monthly
data_files:
- split: train
path: monash_tourism_monthly/train-*
- config_name: monash_tourism_quarterly
data_files:
- split: train
path: monash_tourism_quarterly/train-*
- config_name: monash_tourism_yearly
data_files:
- split: train
path: monash_tourism_yearly/train-*
- config_name: monash_traffic
data_files:
- split: train
path: monash_traffic/train-*
- config_name: monash_weather
data_files:
- split: train
path: monash_weather/train-*
- config_name: nn5
data_files:
- split: train
path: nn5/train-*
- config_name: solar
data_files:
- split: train
path: solar/train-*
- config_name: solar_1h
data_files:
- split: train
path: solar_1h/train-*
- config_name: taxi_1h
data_files:
- split: train
path: taxi_1h/train-*
- config_name: taxi_30min
data_files:
- split: train
path: taxi_30min/train-*
- config_name: training_corpus_kernel_synth_1m
data_files:
- split: train
path: training_corpus/kernel_synth_1m/train-*
- config_name: training_corpus_tsmixup_10m
data_files:
- split: train
path: training_corpus/tsmixup_10m/train-*
- config_name: uber_tlc_daily
data_files:
- split: train
path: uber_tlc_daily/train-*
- config_name: uber_tlc_hourly
data_files:
- split: train
path: uber_tlc_hourly/train-*
- config_name: ushcn_daily
data_files:
- split: train
path: ushcn_daily/train-*
- config_name: weatherbench_daily
data_files:
- split: train
path: weatherbench_daily/train-*
- config_name: weatherbench_hourly_10m_u_component_of_wind
data_files:
- split: train
path: weatherbench_hourly/10m_u_component_of_wind/train-*
- config_name: weatherbench_hourly_10m_v_component_of_wind
data_files:
- split: train
path: weatherbench_hourly/10m_v_component_of_wind/train-*
- config_name: weatherbench_hourly_2m_temperature
data_files:
- split: train
path: weatherbench_hourly/2m_temperature/train-*
- config_name: weatherbench_hourly_geopotential
data_files:
- split: train
path: weatherbench_hourly/geopotential/train-*
- config_name: weatherbench_hourly_potential_vorticity
data_files:
- split: train
path: weatherbench_hourly/potential_vorticity/train-*
- config_name: weatherbench_hourly_relative_humidity
data_files:
- split: train
path: weatherbench_hourly/relative_humidity/train-*
- config_name: weatherbench_hourly_specific_humidity
data_files:
- split: train
path: weatherbench_hourly/specific_humidity/train-*
- config_name: weatherbench_hourly_temperature
data_files:
- split: train
path: weatherbench_hourly/temperature/train-*
- config_name: weatherbench_hourly_toa_incident_solar_radiation
data_files:
- split: train
path: weatherbench_hourly/toa_incident_solar_radiation/train-*
- config_name: weatherbench_hourly_total_cloud_cover
data_files:
- split: train
path: weatherbench_hourly/total_cloud_cover/train-*
- config_name: weatherbench_hourly_total_precipitation
data_files:
- split: train
path: weatherbench_hourly/total_precipitation/train-*
- config_name: weatherbench_hourly_u_component_of_wind
data_files:
- split: train
path: weatherbench_hourly/u_component_of_wind/train-*
- config_name: weatherbench_hourly_v_component_of_wind
data_files:
- split: train
path: weatherbench_hourly/v_component_of_wind/train-*
- config_name: weatherbench_hourly_vorticity
data_files:
- split: train
path: weatherbench_hourly/vorticity/train-*
- config_name: weatherbench_weekly
data_files:
- split: train
path: weatherbench_weekly/train-*
- config_name: wiki_daily_100k
data_files:
- split: train
path: wiki_daily_100k/train-*
- config_name: wind_farms_daily
data_files:
- split: train
path: wind_farms_daily/train-*
- config_name: wind_farms_hourly
data_files:
- split: train
path: wind_farms_hourly/train-*
---
# Chronos datasets
Time series datasets used for training and evaluation of the [Chronos](https://github.com/amazon-science/chronos-forecasting) forecasting models.
Note that some Chronos datasets (`ETTh`, `ETTm`, `brazilian_cities_temperature` and `spanish_energy_and_weather`) that rely on a custom builder script are available in the companion repo [`autogluon/chronos_datasets_extra`](https://huggingface.co/datasets/autogluon/chronos_datasets_extra).
See the [paper](https://arxiv.org/abs/2403.07815) for more information.
## Data format and usage
The recommended way to use these datasets is via https://github.com/autogluon/fev.
All datasets satisfy the following high-level schema:
- Each dataset row corresponds to a single (univariate or multivariate) time series.
- There exists one column with name `id` and type `string` that contains the unique identifier of each time series.
- There exists one column of type `Sequence` with dtype `timestamp[ms]`. This column contains the timestamps of the observations. Timestamps are guaranteed to have a regular frequency that can be obtained with [`pandas.infer_freq`](https://pandas.pydata.org/docs/reference/api/pandas.infer_freq.html).
- There exists at least one column of type `Sequence` with numeric (`float`, `double`, or `int`) dtype. These columns can be interpreted as target time series.
- For each row, all columns of type `Sequence` have the same length.
- Remaining columns of types other than `Sequence` (e.g., `string` or `float`) can be interpreted as static covariates.
Datasets can be loaded using the 🤗 [`datasets`](https://huggingface.co/docs/datasets/en/index) library
```python
import datasets
ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_daily", split="train")
ds.set_format("numpy") # sequences returned as numpy arrays
```
> **NOTE:** The `train` split of all datasets contains the full time series and has no relation to the train/test split used in the Chronos paper.
Example entry in the `m4_daily` dataset
```python
>>> ds[0]
{'id': 'T000000',
'timestamp': array(['1994-03-01T12:00:00.000', '1994-03-02T12:00:00.000',
'1994-03-03T12:00:00.000', ..., '1996-12-12T12:00:00.000',
'1996-12-13T12:00:00.000', '1996-12-14T12:00:00.000'],
dtype='datetime64[ms]'),
'target': array([1017.1, 1019.3, 1017. , ..., 2071.4, 2083.8, 2080.6], dtype=float32),
'category': 'Macro'}
```
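As a quick sanity check, the regular frequency mentioned above can be recovered from the timestamps of any row; a minimal sketch using the `m4_daily` entry loaded above (assumes `pandas` is installed):
```python
import pandas as pd

# Build a DatetimeIndex from the row's timestamp array and infer its frequency.
idx = pd.DatetimeIndex(ds[0]["timestamp"])
print(pd.infer_freq(idx))  # expected to print "D" for the daily M4 subset
```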
## Changelog
- **v1.3.0 (2025-03-05)**: Fix incorrect timestamp frequency for `monash_hospital`
- **v1.2.0 (2025-01-03)**: Fix incorrect timestamp frequency for `dominick`
- **v1.1.0 (2024-11-14)**: Fix irregular timestamp frequency for `m4_quarterly`
- **v1.0.0 (2024-07-24)**: Initial release
### Converting to pandas
We can easily convert data in this format to a long-format data frame
```python
def to_pandas(ds: datasets.Dataset) -> "pd.DataFrame":
"""Convert dataset to long data frame format."""
sequence_columns = [col for col in ds.features if isinstance(ds.features[col], datasets.Sequence)]
return ds.to_pandas().explode(sequence_columns).infer_objects()
```
Example output
```python
>>> print(to_pandas(ds).head())
id timestamp target category
0 T000000 1994-03-01 12:00:00 1017.1 Macro
1 T000000 1994-03-02 12:00:00 1019.3 Macro
2 T000000 1994-03-03 12:00:00 1017.0 Macro
3 T000000 1994-03-04 12:00:00 1019.2 Macro
4 T000000 1994-03-05 12:00:00 1018.7 Macro
```
### Dealing with large datasets
Note that some datasets, such as subsets of WeatherBench, are extremely large (~100GB). To work with them efficiently, we recommend either loading them from disk (files will be downloaded to disk, but won't all be loaded into memory)
```python
ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_daily", keep_in_memory=False, split="train")
```
or, for the largest datasets like `weatherbench_hourly_temperature`, reading them in streaming format (chunks will be downloaded one at a time)
```python
ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_hourly_temperature", streaming=True, split="train")
```
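Streaming datasets are iterable rather than indexable, so a quick way to peek at the first record is shown below (a minimal sketch; the `id` and `target` fields follow the schema listed above):
```python
# Grab the first record from the streaming dataset without downloading everything.
first = next(iter(ds))
print(first["id"], len(first["target"]))
```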
## Chronos training corpus with TSMixup & KernelSynth
The training corpus used for training the Chronos models can be loaded via the configs `training_corpus_tsmixup_10m` (10M TSMixup augmentations of real-world data) and `training_corpus_kernel_synth_1m` (1M synthetic time series generated with KernelSynth), e.g.,
```python
ds = datasets.load_dataset("autogluon/chronos_datasets", "training_corpus_tsmixup_10m", streaming=True, split="train")
```
Note that since data in the training corpus was obtained by combining various synthetic & real-world time series, the timestamps contain dummy values that have no connection to the original data.
## License
Different datasets available in this collection are distributed under different open source licenses. Please see `ds.info.license` and `ds.info.homepage` for each individual dataset.
## Citation
If you find these datasets useful for your research, please consider citing the associated paper:
```bibtex
@article{ansari2024chronos,
  author  = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Wang, Hao and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
title = {Chronos: Learning the Language of Time Series},
journal = {arXiv preprint arXiv:2403.07815},
year = {2024}
}
```
|
trl-lib/tldr | trl-lib | "2025-01-08T16:18:59Z" | 12,590 | 13 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"trl"
] | null | "2024-08-22T12:14:13Z" | ---
tags:
- trl
---
# TL;DR Dataset
## Summary
The TL;DR dataset is a processed version of Reddit posts, specifically curated to train models using the [TRL library](https://github.com/huggingface/trl) for summarization tasks. It leverages the common practice on Reddit where users append "TL;DR" (Too Long; Didn't Read) summaries to lengthy posts, providing a rich source of paired text data for training summarization models.
## Data Structure
- **Format**: [Standard](https://huggingface.co/docs/trl/main/dataset_formats#standard)
- **Type**: [Prompt-completion](https://huggingface.co/docs/trl/main/dataset_formats#prompt-completion)
Columns:
- `"prompt"`: The unabridged Reddit post.
- `"completion"`: The concise "TL;DR" summary appended by the author.
This structure enables models to learn the relationship between detailed content and its abbreviated form, enhancing their summarization capabilities.
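For illustration, the dataset can be loaded and inspected with 🤗 `datasets`; a minimal sketch, assuming the default `train` split:
```python
from datasets import load_dataset

# Load the TL;DR prompt-completion pairs and look at one example.
ds = load_dataset("trl-lib/tldr", split="train")
example = ds[0]
print(example["prompt"][:200])   # beginning of the full Reddit post
print(example["completion"])     # the author's TL;DR summary
```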
## Generation script
The script used to generate this dataset can be found [here](https://github.com/huggingface/trl/blob/main/examples/datasets/tldr.py).
|
TIGER-Lab/MMLU-STEM | TIGER-Lab | "2024-06-20T03:37:16Z" | 12,553 | 13 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-15T16:45:00Z" | ---
license: mit
dataset_info:
- config_name: default
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: subject
dtype: string
- name: answer
dtype: int64
splits:
- name: test
num_bytes: 976986
num_examples: 3153
download_size: 487500
dataset_size: 976986
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
This dataset contains the subset of STEM subjects in MMLU, as defined by the original paper.
The included subjects are:
- 'abstract_algebra',
- 'anatomy',
- 'astronomy',
- 'college_biology',
- 'college_chemistry',
- 'college_computer_science',
- 'college_mathematics',
- 'college_physics',
- 'computer_security',
- 'conceptual_physics',
- 'electrical_engineering',
- 'elementary_mathematics',
- 'high_school_biology',
- 'high_school_chemistry',
- 'high_school_computer_science',
- 'high_school_mathematics',
- 'high_school_physics',
- 'high_school_statistics',
- 'machine_learning'
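A minimal loading sketch (column names follow the `dataset_info` above; the single `test` split is assumed):
```python
from collections import Counter
from datasets import load_dataset

# Load the STEM subset of MMLU and count questions per subject.
ds = load_dataset("TIGER-Lab/MMLU-STEM", split="test")
print(Counter(ds["subject"]).most_common(5))
```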
Please cite the original MMLU paper when using this dataset. |
vikhyatk/dvqa | vikhyatk | "2024-12-12T07:32:22Z" | 12,541 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T07:28:53Z" | ---
dataset_info:
features:
- name: image
dtype:
image:
decode: false
- name: qa
list:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4425372570.0
num_examples: 200000
download_size: 4272488137
dataset_size: 4425372570.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
agicommies/synthia | agicommies | "2025-01-08T00:15:48Z" | 12,506 | 13 | [
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:question-answering",
"license:mit",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/2125",
"region:us"
] | [
"table-question-answering",
"summarization",
"question-answering"
] | "2024-04-23T04:57:00Z" | ---
license: mit
task_categories:
- table-question-answering
- summarization
- question-answering
size_categories:
- 100B<n<1T
---
The Synthia Dataset is a continuously growing collection of validated synthetic explanations of subjects sampled from the Claude Opus latent space, spanning varying levels of esotericity across a broad list of technical fields. The explanations vary in target audience, level of detail, and abstraction, while being incentivized to match Claude 3-grade quality. The Synthia subnet leverages Commune's incentives to create a permissionless mining market around distilling knowledge from state-of-the-art closed-source model APIs into a public dataset, accelerating the open-source AI space. Validation models and strategy will adapt based on the current SOTA. |
argmaxinc/whisperkit-evals | argmaxinc | "2025-02-17T13:10:56Z" | 12,505 | 4 | [
"region:us",
"whisper",
"whisperkit",
"coreml",
"asr",
"quantized"
] | null | "2024-02-28T08:05:36Z" |
---
pretty_name: "WhisperKit ASR Evaluation Results"
viewer: false
library_name: whisperkit
tags:
- whisper
- whisperkit
- coreml
- asr
- quantized
---
# WhisperKit
WhisperKit is an on-device speech recognition framework for Apple Silicon:
https://github.com/argmaxinc/WhisperKit
For performance and accuracy benchmarks on real devices, please see:
https://huggingface.co/spaces/argmaxinc/whisperkit-benchmarks
|
asahi417/seamless-align-deA-enA.speaker-embedding.xlsr-2b | asahi417 | "2024-06-24T01:12:19Z" | 12,488 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-17T10:18:56Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16496072358
num_examples: 2064
download_size: 16544774466
dataset_size: 16496072358
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16105175291
num_examples: 2106
download_size: 16153856631
dataset_size: 16105175291
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11700093808
num_examples: 1944
download_size: 11740547200
dataset_size: 11700093808
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11971334686
num_examples: 1985
download_size: 12012182776
dataset_size: 11971334686
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12007403919
num_examples: 1992
download_size: 12049445421
dataset_size: 12007403919
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11943077924
num_examples: 1952
download_size: 11983774430
dataset_size: 11943077924
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11727519000
num_examples: 1956
download_size: 11768140710
dataset_size: 11727519000
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12201628185
num_examples: 2034
download_size: 12243327271
dataset_size: 12201628185
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11906942375
num_examples: 1981
download_size: 11947610208
dataset_size: 11906942375
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11827697428
num_examples: 1983
download_size: 11868584591
dataset_size: 11827697428
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11329588226
num_examples: 1963
download_size: 11369118475
dataset_size: 11329588226
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11295470155
num_examples: 1949
download_size: 11334889864
dataset_size: 11295470155
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16223572386
num_examples: 2131
download_size: 16272632592
dataset_size: 16223572386
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11104691038
num_examples: 1940
download_size: 11144134254
dataset_size: 11104691038
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11399789523
num_examples: 1967
download_size: 11439345962
dataset_size: 11399789523
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11715773162
num_examples: 1993
download_size: 11756682752
dataset_size: 11715773162
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11236654750
num_examples: 1924
download_size: 11276132019
dataset_size: 11236654750
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10941353095
num_examples: 1951
download_size: 10979830813
dataset_size: 10941353095
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11321795057
num_examples: 1943
download_size: 11361162230
dataset_size: 11321795057
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11318362227
num_examples: 1967
download_size: 11358113093
dataset_size: 11318362227
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11094945816
num_examples: 1917
download_size: 11131439213
dataset_size: 11094945816
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11185863515
num_examples: 1924
download_size: 11225578783
dataset_size: 11185863515
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11480879571
num_examples: 1976
download_size: 11520449948
dataset_size: 11480879571
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15649997893
num_examples: 2103
download_size: 15697670955
dataset_size: 15649997893
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11094494082
num_examples: 1937
download_size: 11132984645
dataset_size: 11094494082
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10925722431
num_examples: 1933
download_size: 10964313881
dataset_size: 10925722431
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10985251923
num_examples: 1952
download_size: 11023439895
dataset_size: 10985251923
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10791496401
num_examples: 1910
download_size: 10829538453
dataset_size: 10791496401
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10655618494
num_examples: 1888
download_size: 10693670677
dataset_size: 10655618494
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10879457170
num_examples: 1934
download_size: 10917802096
dataset_size: 10879457170
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10694272019
num_examples: 1926
download_size: 10732899750
dataset_size: 10694272019
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10854973911
num_examples: 1910
download_size: 10893262401
dataset_size: 10854973911
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10314018564
num_examples: 1888
download_size: 10351215875
dataset_size: 10314018564
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10937610789
num_examples: 1948
download_size: 10976464614
dataset_size: 10937610789
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16353249939
num_examples: 2159
download_size: 16402386480
dataset_size: 16353249939
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10649709891
num_examples: 1893
download_size: 10688156634
dataset_size: 10649709891
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10764688493
num_examples: 1929
download_size: 10803203276
dataset_size: 10764688493
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10409598206
num_examples: 1868
download_size: 10446596998
dataset_size: 10409598206
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10472315516
num_examples: 1874
download_size: 10509356860
dataset_size: 10472315516
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10837311272
num_examples: 1945
download_size: 10875877289
dataset_size: 10837311272
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10159961925
num_examples: 1887
download_size: 10196736956
dataset_size: 10159961925
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10699846242
num_examples: 1872
download_size: 10738155740
dataset_size: 10699846242
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10206081468
num_examples: 1889
download_size: 10243263428
dataset_size: 10206081468
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10213449472
num_examples: 1877
download_size: 10250711146
dataset_size: 10213449472
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10741806746
num_examples: 1944
download_size: 10779784552
dataset_size: 10741806746
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15874731625
num_examples: 2100
download_size: 15922456926
dataset_size: 15874731625
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10434706265
num_examples: 1922
download_size: 10472386431
dataset_size: 10434706265
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10113855480
num_examples: 1857
download_size: 10150887710
dataset_size: 10113855480
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11020150262
num_examples: 1923
download_size: 11059662916
dataset_size: 11020150262
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10530678969
num_examples: 1917
download_size: 10568735537
dataset_size: 10530678969
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10134816538
num_examples: 1846
download_size: 10171640400
dataset_size: 10134816538
- config_name: subset_145
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9887229700
num_examples: 1838
download_size: 9921664981
dataset_size: 9887229700
- config_name: subset_146
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9824505833
num_examples: 1845
download_size: 9859996261
dataset_size: 9824505833
- config_name: subset_147
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10119655858
num_examples: 1844
download_size: 10156630108
dataset_size: 10119655858
- config_name: subset_148
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10141250570
num_examples: 1877
download_size: 10177897766
dataset_size: 10141250570
- config_name: subset_149
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9897395106
num_examples: 1831
download_size: 9933360150
dataset_size: 9897395106
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15578299089
num_examples: 2110
download_size: 15625846087
dataset_size: 15578299089
- config_name: subset_150
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9668641286
num_examples: 1817
download_size: 9704322722
dataset_size: 9668641286
- config_name: subset_151
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9965470443
num_examples: 1865
download_size: 10000721783
dataset_size: 9965470443
- config_name: subset_152
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9959261671
num_examples: 1864
download_size: 9993928951
dataset_size: 9959261671
- config_name: subset_153
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10215139982
num_examples: 1878
download_size: 10251819962
dataset_size: 10215139982
- config_name: subset_154
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9679627924
num_examples: 1799
download_size: 9715331834
dataset_size: 9679627924
- config_name: subset_155
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9627432882
num_examples: 1829
download_size: 9663405606
dataset_size: 9627432882
- config_name: subset_156
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9516520232
num_examples: 1806
download_size: 9552376267
dataset_size: 9516520232
- config_name: subset_157
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9606429591
num_examples: 1793
download_size: 9641845778
dataset_size: 9606429591
- config_name: subset_158
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9903128531
num_examples: 1846
download_size: 9939119123
dataset_size: 9903128531
- config_name: subset_159
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9628530400
num_examples: 1812
download_size: 9664544147
dataset_size: 9628530400
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15755916997
num_examples: 2137
download_size: 15803840641
dataset_size: 15755916997
- config_name: subset_160
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9732079062
num_examples: 1801
download_size: 9767711241
dataset_size: 9732079062
- config_name: subset_161
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9831513898
num_examples: 1849
download_size: 9867270832
dataset_size: 9831513898
- config_name: subset_162
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9708630704
num_examples: 1841
download_size: 9744493058
dataset_size: 9708630704
- config_name: subset_163
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9643361535
num_examples: 1824
download_size: 9678741713
dataset_size: 9643361535
- config_name: subset_164
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9696274651
num_examples: 1838
download_size: 9732197689
dataset_size: 9696274651
- config_name: subset_165
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10075927755
num_examples: 1863
download_size: 10113106624
dataset_size: 10075927755
- config_name: subset_166
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9650420438
num_examples: 1792
download_size: 9685896064
dataset_size: 9650420438
- config_name: subset_167
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9626047534
num_examples: 1801
download_size: 9662004149
dataset_size: 9626047534
- config_name: subset_168
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9323390375
num_examples: 1804
download_size: 9357877248
dataset_size: 9323390375
- config_name: subset_169
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9408636558
num_examples: 1803
download_size: 9443431724
dataset_size: 9408636558
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15604808215
num_examples: 2102
download_size: 15652413529
dataset_size: 15604808215
- config_name: subset_170
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9489971742
num_examples: 1803
download_size: 9524480953
dataset_size: 9489971742
- config_name: subset_171
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9578100504
num_examples: 1815
download_size: 9613775306
dataset_size: 9578100504
- config_name: subset_172
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9445214479
num_examples: 1828
download_size: 9479764298
dataset_size: 9445214479
- config_name: subset_173
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9287421846
num_examples: 1807
download_size: 9322030169
dataset_size: 9287421846
- config_name: subset_174
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9497564805
num_examples: 1818
download_size: 9532233809
dataset_size: 9497564805
- config_name: subset_175
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9727223229
num_examples: 1807
download_size: 9760076184
dataset_size: 9727223229
- config_name: subset_176
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9933404515
num_examples: 1858
download_size: 9969371830
dataset_size: 9933404515
- config_name: subset_177
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9378029112
num_examples: 1777
download_size: 9412073804
dataset_size: 9378029112
- config_name: subset_178
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9391103101
num_examples: 1820
download_size: 9425828343
dataset_size: 9391103101
- config_name: subset_179
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9617663716
num_examples: 1794
download_size: 9652670812
dataset_size: 9617663716
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16090036910
num_examples: 2095
download_size: 16138724197
dataset_size: 16090036910
- config_name: subset_180
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9832618836
num_examples: 1829
download_size: 9868610716
dataset_size: 9832618836
- config_name: subset_181
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9173235876
num_examples: 1786
download_size: 9206159755
dataset_size: 9173235876
- config_name: subset_182
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9440817189
num_examples: 1803
download_size: 9475443235
dataset_size: 9440817189
- config_name: subset_183
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9508662179
num_examples: 1839
download_size: 9544759263
dataset_size: 9508662179
- config_name: subset_184
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9218354457
num_examples: 1763
download_size: 9252725181
dataset_size: 9218354457
- config_name: subset_185
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9556470050
num_examples: 1815
download_size: 9591205146
dataset_size: 9556470050
- config_name: subset_186
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9158434857
num_examples: 1766
download_size: 9193099045
dataset_size: 9158434857
- config_name: subset_187
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9452890547
num_examples: 1825
download_size: 9487607951
dataset_size: 9452890547
- config_name: subset_188
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9066742411
num_examples: 1775
download_size: 9101454101
dataset_size: 9066742411
- config_name: subset_189
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9340135976
num_examples: 1830
download_size: 9374933969
dataset_size: 9340135976
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15660210795
num_examples: 2113
download_size: 15708612813
dataset_size: 15660210795
- config_name: subset_190
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9375610391
num_examples: 1798
download_size: 9410147209
dataset_size: 9375610391
- config_name: subset_191
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9251414358
num_examples: 1804
download_size: 9286135355
dataset_size: 9251414358
- config_name: subset_192
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9421951393
num_examples: 1784
download_size: 9456119767
dataset_size: 9421951393
- config_name: subset_193
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9248025132
num_examples: 1797
download_size: 9282543334
dataset_size: 9248025132
- config_name: subset_194
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9341977332
num_examples: 1796
download_size: 9376497121
dataset_size: 9341977332
- config_name: subset_195
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9337698399
num_examples: 1809
download_size: 9372336064
dataset_size: 9337698399
- config_name: subset_196
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9031411759
num_examples: 1780
download_size: 9065388897
dataset_size: 9031411759
- config_name: subset_197
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8716025847
num_examples: 1733
download_size: 8749176683
dataset_size: 8716025847
- config_name: subset_198
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9270000691
num_examples: 1789
download_size: 9304609554
dataset_size: 9270000691
- config_name: subset_199
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9151089783
num_examples: 1776
download_size: 9185720053
dataset_size: 9151089783
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16672042897
num_examples: 2055
download_size: 16720826617
dataset_size: 16672042897
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15994306148
num_examples: 2146
download_size: 16042700543
dataset_size: 15994306148
- config_name: subset_200
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9195129530
num_examples: 1810
download_size: 9229782263
dataset_size: 9195129530
- config_name: subset_201
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8968010275
num_examples: 1770
download_size: 8999474826
dataset_size: 8968010275
- config_name: subset_202
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9306467892
num_examples: 1775
download_size: 9341038409
dataset_size: 9306467892
- config_name: subset_203
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9281950950
num_examples: 1809
download_size: 9316582353
dataset_size: 9281950950
- config_name: subset_204
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9189760080
num_examples: 1830
download_size: 9224446128
dataset_size: 9189760080
- config_name: subset_205
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9401053951
num_examples: 1821
download_size: 9435630790
dataset_size: 9401053951
- config_name: subset_206
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9348785979
num_examples: 1804
download_size: 9383477478
dataset_size: 9348785979
- config_name: subset_207
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9240735059
num_examples: 1822
download_size: 9275489321
dataset_size: 9240735059
- config_name: subset_208
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9519456935
num_examples: 1824
download_size: 9552415549
dataset_size: 9519456935
- config_name: subset_209
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9357508423
num_examples: 1817
download_size: 9392395687
dataset_size: 9357508423
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15585255346
num_examples: 2137
download_size: 15633938072
dataset_size: 15585255346
- config_name: subset_210
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9193092429
num_examples: 1799
download_size: 9226854319
dataset_size: 9193092429
- config_name: subset_211
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9466507113
num_examples: 1831
download_size: 9501207864
dataset_size: 9466507113
- config_name: subset_212
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9236904079
num_examples: 1769
download_size: 9271417834
dataset_size: 9236904079
- config_name: subset_213
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9103221165
num_examples: 1810
download_size: 9138170110
dataset_size: 9103221165
- config_name: subset_214
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9152297806
num_examples: 1797
download_size: 9186920414
dataset_size: 9152297806
- config_name: subset_215
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8817363842
num_examples: 1750
download_size: 8850535001
dataset_size: 8817363842
- config_name: subset_216
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9223666197
num_examples: 1789
download_size: 9258104017
dataset_size: 9223666197
- config_name: subset_217
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9269112063
num_examples: 1822
download_size: 9303838873
dataset_size: 9269112063
- config_name: subset_218
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9214416864
num_examples: 1816
download_size: 9248808616
dataset_size: 9214416864
- config_name: subset_219
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9309859911
num_examples: 1817
download_size: 9344844661
dataset_size: 9309859911
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14241813975
num_examples: 1930
download_size: 14285636925
dataset_size: 14241813975
- config_name: subset_220
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9050468467
num_examples: 1785
download_size: 9085077185
dataset_size: 9050468467
- config_name: subset_221
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8813131709
num_examples: 1772
download_size: 8846709633
dataset_size: 8813131709
- config_name: subset_222
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9306237345
num_examples: 1775
download_size: 9340850716
dataset_size: 9306237345
- config_name: subset_223
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9423630196
num_examples: 1829
download_size: 9458553751
dataset_size: 9423630196
- config_name: subset_224
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9239005056
num_examples: 1808
download_size: 9273458297
dataset_size: 9239005056
- config_name: subset_225
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9171228911
num_examples: 1769
download_size: 9205504728
dataset_size: 9171228911
- config_name: subset_226
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9052949992
num_examples: 1779
download_size: 9087578146
dataset_size: 9052949992
- config_name: subset_227
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9382588877
num_examples: 1816
download_size: 9416810612
dataset_size: 9382588877
- config_name: subset_228
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8944735838
num_examples: 1775
download_size: 8978276961
dataset_size: 8944735838
- config_name: subset_229
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9107240708
num_examples: 1819
download_size: 9142228379
dataset_size: 9107240708
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13565548100
num_examples: 1873
download_size: 13608409283
dataset_size: 13565548100
- config_name: subset_230
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9073045701
num_examples: 1804
download_size: 9107854095
dataset_size: 9073045701
- config_name: subset_231
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9308498677
num_examples: 1803
download_size: 9343124500
dataset_size: 9308498677
- config_name: subset_232
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8857428854
num_examples: 1758
download_size: 8890640722
dataset_size: 8857428854
- config_name: subset_233
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8199852486
num_examples: 1631
download_size: 8231266438
dataset_size: 8199852486
- config_name: subset_234
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9451460980
num_examples: 1821
download_size: 9486408676
dataset_size: 9451460980
- config_name: subset_235
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8476607938
num_examples: 1735
download_size: 8508748907
dataset_size: 8476607938
- config_name: subset_236
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9071764204
num_examples: 1826
download_size: 9106739042
dataset_size: 9071764204
- config_name: subset_237
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9154133173
num_examples: 1783
download_size: 9188667479
dataset_size: 9154133173
- config_name: subset_238
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9016460165
num_examples: 1798
download_size: 9051153109
dataset_size: 9016460165
- config_name: subset_239
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9136591386
num_examples: 1792
download_size: 9171227465
dataset_size: 9136591386
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15273590577
num_examples: 2116
download_size: 15320087579
dataset_size: 15273590577
- config_name: subset_240
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8980159662
num_examples: 1782
download_size: 9013662105
dataset_size: 8980159662
- config_name: subset_241
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9176885097
num_examples: 1778
download_size: 9211677205
dataset_size: 9176885097
- config_name: subset_242
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8944890030
num_examples: 1781
download_size: 8978521881
dataset_size: 8944890030
- config_name: subset_243
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9037960232
num_examples: 1800
download_size: 9072848088
dataset_size: 9037960232
- config_name: subset_244
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8967780986
num_examples: 1785
download_size: 9001418045
dataset_size: 8967780986
- config_name: subset_245
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9205840273
num_examples: 1800
download_size: 9236569992
dataset_size: 9205840273
- config_name: subset_246
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9103925442
num_examples: 1777
download_size: 9138748882
dataset_size: 9103925442
- config_name: subset_247
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9184855568
num_examples: 1805
download_size: 9219424669
dataset_size: 9184855568
- config_name: subset_248
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8852803495
num_examples: 1763
download_size: 8886057035
dataset_size: 8852803495
- config_name: subset_249
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8979414369
num_examples: 1783
download_size: 9012921030
dataset_size: 8979414369
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14978193964
num_examples: 2109
download_size: 15024158228
dataset_size: 14978193964
- config_name: subset_250
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9205387737
num_examples: 1811
download_size: 9240319615
dataset_size: 9205387737
- config_name: subset_251
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8884793559
num_examples: 1782
download_size: 8917920455
dataset_size: 8884793559
- config_name: subset_252
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9236322282
num_examples: 1795
download_size: 9270881878
dataset_size: 9236322282
- config_name: subset_253
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8979385432
num_examples: 1805
download_size: 9012596899
dataset_size: 8979385432
- config_name: subset_254
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9209657780
num_examples: 1783
download_size: 9243447272
dataset_size: 9209657780
- config_name: subset_255
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9111411799
num_examples: 1803
download_size: 9146239721
dataset_size: 9111411799
- config_name: subset_256
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9047839862
num_examples: 1776
download_size: 9082364197
dataset_size: 9047839862
- config_name: subset_257
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9111816217
num_examples: 1769
download_size: 9146413613
dataset_size: 9111816217
- config_name: subset_258
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9025062654
num_examples: 1754
download_size: 9059422420
dataset_size: 9025062654
- config_name: subset_259
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9075933854
num_examples: 1792
download_size: 9110680389
dataset_size: 9075933854
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15538243504
num_examples: 2123
download_size: 15586619756
dataset_size: 15538243504
- config_name: subset_260
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8846530744
num_examples: 1731
download_size: 8879459228
dataset_size: 8846530744
- config_name: subset_261
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8728956146
num_examples: 1710
download_size: 8760346778
dataset_size: 8728956146
- config_name: subset_262
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8598682490
num_examples: 1721
download_size: 8631409198
dataset_size: 8598682490
- config_name: subset_263
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8716088388
num_examples: 1746
download_size: 8749155196
dataset_size: 8716088388
- config_name: subset_264
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9243116823
num_examples: 1819
download_size: 9275472456
dataset_size: 9243116823
- config_name: subset_265
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9181596395
num_examples: 1790
download_size: 9216095180
dataset_size: 9181596395
- config_name: subset_266
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9077724210
num_examples: 1792
download_size: 9112500982
dataset_size: 9077724210
- config_name: subset_267
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8824185497
num_examples: 1729
download_size: 8856839372
dataset_size: 8824185497
- config_name: subset_268
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8825762775
num_examples: 1754
download_size: 8859249544
dataset_size: 8825762775
- config_name: subset_269
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8985542851
num_examples: 1742
download_size: 9018106211
dataset_size: 8985542851
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15051131402
num_examples: 2120
download_size: 15098084305
dataset_size: 15051131402
- config_name: subset_270
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8636207462
num_examples: 1770
download_size: 8669803011
dataset_size: 8636207462
- config_name: subset_271
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8766334554
num_examples: 1752
download_size: 8799593343
dataset_size: 8766334554
- config_name: subset_272
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8886610750
num_examples: 1734
download_size: 8919554505
dataset_size: 8886610750
- config_name: subset_273
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8892776981
num_examples: 1779
download_size: 8925698649
dataset_size: 8892776981
- config_name: subset_274
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8722581461
num_examples: 1747
download_size: 8755435562
dataset_size: 8722581461
- config_name: subset_275
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8635436383
num_examples: 1737
download_size: 8668466437
dataset_size: 8635436383
- config_name: subset_276
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9220791816
num_examples: 1783
download_size: 9255463126
dataset_size: 9220791816
- config_name: subset_277
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8760533608
num_examples: 1757
download_size: 8793632739
dataset_size: 8760533608
- config_name: subset_278
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8784398182
num_examples: 1735
download_size: 8817736094
dataset_size: 8784398182
- config_name: subset_279
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8879061623
num_examples: 1786
download_size: 8912436679
dataset_size: 8879061623
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15312865434
num_examples: 2140
download_size: 15359866861
dataset_size: 15312865434
- config_name: subset_280
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8879812866
num_examples: 1765
download_size: 8912537728
dataset_size: 8879812866
- config_name: subset_281
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8926718698
num_examples: 1797
download_size: 8960181837
dataset_size: 8926718698
- config_name: subset_282
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8717019448
num_examples: 1762
download_size: 8750457572
dataset_size: 8717019448
- config_name: subset_283
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9029430582
num_examples: 1796
download_size: 9064468156
dataset_size: 9029430582
- config_name: subset_284
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9026525496
num_examples: 1789
download_size: 9061577900
dataset_size: 9026525496
- config_name: subset_285
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8772659681
num_examples: 1767
download_size: 8805474892
dataset_size: 8772659681
- config_name: subset_286
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8423231933
num_examples: 1699
download_size: 8455171699
dataset_size: 8423231933
- config_name: subset_287
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8947120642
num_examples: 1809
download_size: 8980661009
dataset_size: 8947120642
- config_name: subset_288
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8637864684
num_examples: 1739
download_size: 8671043194
dataset_size: 8637864684
- config_name: subset_289
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8868839640
num_examples: 1759
download_size: 8902329604
dataset_size: 8868839640
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15318220379
num_examples: 2131
download_size: 15365728676
dataset_size: 15318220379
- config_name: subset_290
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9398641828
num_examples: 1829
download_size: 9433513847
dataset_size: 9398641828
- config_name: subset_291
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9030673488
num_examples: 1772
download_size: 9065253590
dataset_size: 9030673488
- config_name: subset_292
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8673739145
num_examples: 1713
download_size: 8706941342
dataset_size: 8673739145
- config_name: subset_293
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9021697887
num_examples: 1764
download_size: 9056354270
dataset_size: 9021697887
- config_name: subset_294
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9020416125
num_examples: 1782
download_size: 9055125015
dataset_size: 9020416125
- config_name: subset_295
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8928197985
num_examples: 1752
download_size: 8961440884
dataset_size: 8928197985
- config_name: subset_296
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8914211985
num_examples: 1738
download_size: 8947240069
dataset_size: 8914211985
- config_name: subset_297
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9037659607
num_examples: 1789
download_size: 9072220428
dataset_size: 9037659607
- config_name: subset_298
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9155762004
num_examples: 1781
download_size: 9190065683
dataset_size: 9155762004
- config_name: subset_299
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8863898643
num_examples: 1757
download_size: 8897224652
dataset_size: 8863898643
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16013854164
num_examples: 2051
download_size: 16061679796
dataset_size: 16013854164
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15419083423
num_examples: 2165
download_size: 15466100289
dataset_size: 15419083423
- config_name: subset_300
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7932054312
num_examples: 1593
download_size: 7961960954
dataset_size: 7932054312
- config_name: subset_301
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7056926448
num_examples: 1441
download_size: 7084756588
dataset_size: 7056926448
- config_name: subset_302
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8814492951
num_examples: 1786
download_size: 8847908493
dataset_size: 8814492951
- config_name: subset_303
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9157951317
num_examples: 1773
download_size: 9192508769
dataset_size: 9157951317
- config_name: subset_304
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8575761186
num_examples: 1722
download_size: 8608887389
dataset_size: 8575761186
- config_name: subset_305
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8862823531
num_examples: 1764
download_size: 8896376328
dataset_size: 8862823531
- config_name: subset_306
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8819682335
num_examples: 1725
download_size: 8850725084
dataset_size: 8819682335
- config_name: subset_307
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8886726075
num_examples: 1735
download_size: 8919503597
dataset_size: 8886726075
- config_name: subset_308
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8834414668
num_examples: 1750
download_size: 8867476559
dataset_size: 8834414668
- config_name: subset_309
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8967319045
num_examples: 1774
download_size: 9000886254
dataset_size: 8967319045
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15425103392
num_examples: 2112
download_size: 15471564085
dataset_size: 15425103392
- config_name: subset_310
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9113354514
num_examples: 1787
download_size: 9148019031
dataset_size: 9113354514
- config_name: subset_311
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8828454927
num_examples: 1788
download_size: 8862160868
dataset_size: 8828454927
- config_name: subset_312
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8798444142
num_examples: 1730
download_size: 8831342176
dataset_size: 8798444142
- config_name: subset_313
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8727236694
num_examples: 1732
download_size: 8760237805
dataset_size: 8727236694
- config_name: subset_314
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8970629791
num_examples: 1762
download_size: 9003957495
dataset_size: 8970629791
- config_name: subset_315
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8991923591
num_examples: 1779
download_size: 9025318455
dataset_size: 8991923591
- config_name: subset_316
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9118107313
num_examples: 1838
download_size: 9153189665
dataset_size: 9118107313
- config_name: subset_317
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8977461901
num_examples: 1774
download_size: 9010779801
dataset_size: 8977461901
- config_name: subset_318
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9109365451
num_examples: 1774
download_size: 9143995110
dataset_size: 9109365451
- config_name: subset_319
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9049037180
num_examples: 1759
download_size: 9083514026
dataset_size: 9049037180
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15085572892
num_examples: 2142
download_size: 15132947975
dataset_size: 15085572892
- config_name: subset_320
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9014783457
num_examples: 1778
download_size: 9049478428
dataset_size: 9014783457
- config_name: subset_321
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8809696493
num_examples: 1766
download_size: 8843027701
dataset_size: 8809696493
- config_name: subset_322
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9059575408
num_examples: 1802
download_size: 9094369478
dataset_size: 9059575408
- config_name: subset_323
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8839079827
num_examples: 1763
download_size: 8871809437
dataset_size: 8839079827
- config_name: subset_324
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8699568922
num_examples: 1760
download_size: 8732844616
dataset_size: 8699568922
- config_name: subset_325
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8843814016
num_examples: 1773
download_size: 8876744844
dataset_size: 8843814016
- config_name: subset_326
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8382437403
num_examples: 1697
download_size: 8414236110
dataset_size: 8382437403
- config_name: subset_327
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8946931981
num_examples: 1757
download_size: 8980031482
dataset_size: 8946931981
- config_name: subset_328
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8923371248
num_examples: 1737
download_size: 8956384841
dataset_size: 8923371248
- config_name: subset_329
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8786027392
num_examples: 1737
download_size: 8818793065
dataset_size: 8786027392
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14796384199
num_examples: 2126
download_size: 14842940606
dataset_size: 14796384199
- config_name: subset_330
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8809703844
num_examples: 1762
download_size: 8842953877
dataset_size: 8809703844
- config_name: subset_331
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8938727592
num_examples: 1783
download_size: 8972517816
dataset_size: 8938727592
- config_name: subset_332
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8862678215
num_examples: 1773
download_size: 8896221106
dataset_size: 8862678215
- config_name: subset_333
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8972898973
num_examples: 1790
download_size: 9004445960
dataset_size: 8972898973
- config_name: subset_334
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8836303856
num_examples: 1738
download_size: 8869367325
dataset_size: 8836303856
- config_name: subset_335
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8969172035
num_examples: 1789
download_size: 9002742620
dataset_size: 8969172035
- config_name: subset_336
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8456060808
num_examples: 1734
download_size: 8487666617
dataset_size: 8456060808
- config_name: subset_337
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8711800670
num_examples: 1745
download_size: 8743584700
dataset_size: 8711800670
- config_name: subset_338
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8208255933
num_examples: 1690
download_size: 8239488345
dataset_size: 8208255933
- config_name: subset_339
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8796092922
num_examples: 1730
download_size: 8828925650
dataset_size: 8796092922
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14820211898
num_examples: 2121
download_size: 14866374032
dataset_size: 14820211898
- config_name: subset_340
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9153210131
num_examples: 1771
download_size: 9186868286
dataset_size: 9153210131
- config_name: subset_341
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9117694888
num_examples: 1774
download_size: 9151414542
dataset_size: 9117694888
- config_name: subset_342
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9128385811
num_examples: 1804
download_size: 9162741265
dataset_size: 9128385811
- config_name: subset_343
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8761716624
num_examples: 1754
download_size: 8794739077
dataset_size: 8761716624
- config_name: subset_344
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8585896406
num_examples: 1723
download_size: 8619280939
dataset_size: 8585896406
- config_name: subset_345
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8911155504
num_examples: 1760
download_size: 8944715489
dataset_size: 8911155504
- config_name: subset_346
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8907136859
num_examples: 1761
download_size: 8940526729
dataset_size: 8907136859
- config_name: subset_347
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9164775856
num_examples: 1787
download_size: 9199465409
dataset_size: 9164775856
- config_name: subset_348
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8996709978
num_examples: 1770
download_size: 9029768429
dataset_size: 8996709978
- config_name: subset_349
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8592250684
num_examples: 1717
download_size: 8624963303
dataset_size: 8592250684
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15119393520
num_examples: 2183
download_size: 15167216567
dataset_size: 15119393520
- config_name: subset_350
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8920954605
num_examples: 1784
download_size: 8953822358
dataset_size: 8920954605
- config_name: subset_351
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9051735934
num_examples: 1780
download_size: 9086631862
dataset_size: 9051735934
- config_name: subset_352
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8695638258
num_examples: 1709
download_size: 8728559586
dataset_size: 8695638258
- config_name: subset_353
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9096093986
num_examples: 1759
download_size: 9130642710
dataset_size: 9096093986
- config_name: subset_354
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9028642174
num_examples: 1739
download_size: 9062592905
dataset_size: 9028642174
- config_name: subset_355
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9171384095
num_examples: 1787
download_size: 9204615983
dataset_size: 9171384095
- config_name: subset_356
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8841707428
num_examples: 1759
download_size: 8875018281
dataset_size: 8841707428
- config_name: subset_357
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9012637820
num_examples: 1756
download_size: 9047242720
dataset_size: 9012637820
- config_name: subset_358
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9334156707
num_examples: 1817
download_size: 9368962938
dataset_size: 9334156707
- config_name: subset_359
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8915604554
num_examples: 1760
download_size: 8948885033
dataset_size: 8915604554
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14828879562
num_examples: 2122
download_size: 14874979051
dataset_size: 14828879562
- config_name: subset_360
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8838088732
num_examples: 1764
download_size: 8871428246
dataset_size: 8838088732
- config_name: subset_361
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8994780054
num_examples: 1755
download_size: 9028282043
dataset_size: 8994780054
- config_name: subset_362
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9262862681
num_examples: 1794
download_size: 9297505440
dataset_size: 9262862681
- config_name: subset_363
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9116435523
num_examples: 1784
download_size: 9151163972
dataset_size: 9116435523
- config_name: subset_364
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8828742960
num_examples: 1740
download_size: 8861822462
dataset_size: 8828742960
- config_name: subset_365
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8934676395
num_examples: 1762
download_size: 8967897761
dataset_size: 8934676395
- config_name: subset_366
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8791340743
num_examples: 1783
download_size: 8824240372
dataset_size: 8791340743
- config_name: subset_367
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8946031798
num_examples: 1743
download_size: 8978192624
dataset_size: 8946031798
- config_name: subset_368
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8771575491
num_examples: 1758
download_size: 8804735179
dataset_size: 8771575491
- config_name: subset_369
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9056691073
num_examples: 1767
download_size: 9091692283
dataset_size: 9056691073
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15009610610
num_examples: 2163
download_size: 15057438979
dataset_size: 15009610610
- config_name: subset_370
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8983523268
num_examples: 1758
download_size: 9016838116
dataset_size: 8983523268
- config_name: subset_371
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8757526443
num_examples: 1725
download_size: 8790484999
dataset_size: 8757526443
- config_name: subset_372
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8970574967
num_examples: 1749
download_size: 9002706320
dataset_size: 8970574967
- config_name: subset_373
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9018816259
num_examples: 1763
download_size: 9053259065
dataset_size: 9018816259
- config_name: subset_374
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9015253852
num_examples: 1799
download_size: 9050087244
dataset_size: 9015253852
- config_name: subset_375
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9011790664
num_examples: 1733
download_size: 9045827888
dataset_size: 9011790664
- config_name: subset_376
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8775302497
num_examples: 1761
download_size: 8808310717
dataset_size: 8775302497
- config_name: subset_377
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8744409655
num_examples: 1722
download_size: 8776888106
dataset_size: 8744409655
- config_name: subset_378
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8834753671
num_examples: 1762
download_size: 8868133472
dataset_size: 8834753671
- config_name: subset_379
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8694480736
num_examples: 1743
download_size: 8727683895
dataset_size: 8694480736
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14769297130
num_examples: 2114
download_size: 14815937628
dataset_size: 14769297130
- config_name: subset_380
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9103109911
num_examples: 1765
download_size: 9137072190
dataset_size: 9103109911
- config_name: subset_381
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8875714610
num_examples: 1732
download_size: 8908823228
dataset_size: 8875714610
- config_name: subset_382
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8584797706
num_examples: 1724
download_size: 8617969007
dataset_size: 8584797706
- config_name: subset_383
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9081087034
num_examples: 1758
download_size: 9114807977
dataset_size: 9081087034
- config_name: subset_384
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8895771411
num_examples: 1751
download_size: 8929023328
dataset_size: 8895771411
- config_name: subset_385
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8917979336
num_examples: 1765
download_size: 8951086569
dataset_size: 8917979336
- config_name: subset_386
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8938707287
num_examples: 1722
download_size: 8971922380
dataset_size: 8938707287
- config_name: subset_387
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8446177985
num_examples: 1719
download_size: 8478120085
dataset_size: 8446177985
- config_name: subset_388
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9244008327
num_examples: 1821
download_size: 9278988089
dataset_size: 9244008327
- config_name: subset_389
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8916035542
num_examples: 1769
download_size: 8948245095
dataset_size: 8916035542
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14861437288
num_examples: 2130
download_size: 14908233820
dataset_size: 14861437288
- config_name: subset_390
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8717815990
num_examples: 1730
download_size: 8751004719
dataset_size: 8717815990
- config_name: subset_391
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9002771676
num_examples: 1758
download_size: 9036444116
dataset_size: 9002771676
- config_name: subset_392
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8903088652
num_examples: 1776
download_size: 8936589014
dataset_size: 8903088652
- config_name: subset_393
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9046518598
num_examples: 1781
download_size: 9080814492
dataset_size: 9046518598
- config_name: subset_394
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 371403853
num_examples: 66
download_size: 372998366
dataset_size: 371403853
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16543553479
num_examples: 2085
download_size: 16593287130
dataset_size: 16543553479
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14517163302
num_examples: 2129
download_size: 14563906397
dataset_size: 14517163302
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14682982024
num_examples: 2105
download_size: 14729080970
dataset_size: 14682982024
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14148457161
num_examples: 2071
download_size: 14193631514
dataset_size: 14148457161
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14082981933
num_examples: 2073
download_size: 14127897147
dataset_size: 14082981933
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14568965641
num_examples: 2090
download_size: 14615519743
dataset_size: 14568965641
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14737753899
num_examples: 2109
download_size: 14783903423
dataset_size: 14737753899
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14526614791
num_examples: 2131
download_size: 14573472759
dataset_size: 14526614791
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14494125460
num_examples: 2115
download_size: 14539858270
dataset_size: 14494125460
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14388930693
num_examples: 2105
download_size: 14433711825
dataset_size: 14388930693
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14547697284
num_examples: 2102
download_size: 14594523462
dataset_size: 14547697284
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15978915154
num_examples: 2053
download_size: 16025356606
dataset_size: 15978915154
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13997181370
num_examples: 2060
download_size: 14040863599
dataset_size: 13997181370
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14586917880
num_examples: 2120
download_size: 14633485372
dataset_size: 14586917880
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14188277449
num_examples: 2093
download_size: 14233365032
dataset_size: 14188277449
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13932777686
num_examples: 2104
download_size: 13977478463
dataset_size: 13932777686
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13758282152
num_examples: 2109
download_size: 13803583863
dataset_size: 13758282152
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14073547496
num_examples: 2091
download_size: 14119187779
dataset_size: 14073547496
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13993634214
num_examples: 2095
download_size: 14038882859
dataset_size: 13993634214
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14080532531
num_examples: 2095
download_size: 14126397280
dataset_size: 14080532531
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13665473634
num_examples: 2095
download_size: 13710268413
dataset_size: 13665473634
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13666270873
num_examples: 2072
download_size: 13710475365
dataset_size: 13666270873
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16293486593
num_examples: 2100
download_size: 16341564703
dataset_size: 16293486593
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13814719753
num_examples: 2093
download_size: 13859034712
dataset_size: 13814719753
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7643248641
num_examples: 1185
download_size: 7668714442
dataset_size: 7643248641
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12289579503
num_examples: 1849
download_size: 12329202232
dataset_size: 12289579503
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13369236545
num_examples: 2053
download_size: 13413443133
dataset_size: 13369236545
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13735321622
num_examples: 2101
download_size: 13780035998
dataset_size: 13735321622
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13844326798
num_examples: 2100
download_size: 13888899635
dataset_size: 13844326798
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13909789139
num_examples: 2129
download_size: 13954936542
dataset_size: 13909789139
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13490462109
num_examples: 2086
download_size: 13534843752
dataset_size: 13490462109
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13362243499
num_examples: 2045
download_size: 13405954294
dataset_size: 13362243499
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13565012004
num_examples: 2083
download_size: 13609763105
dataset_size: 13565012004
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15889513146
num_examples: 2069
download_size: 15936706470
dataset_size: 15889513146
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13537927458
num_examples: 2102
download_size: 13582961811
dataset_size: 13537927458
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13531310793
num_examples: 2093
download_size: 13576149290
dataset_size: 13531310793
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13523618650
num_examples: 2087
download_size: 13568544553
dataset_size: 13523618650
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13008035913
num_examples: 2067
download_size: 13051660144
dataset_size: 13008035913
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12552351033
num_examples: 1965
download_size: 12594875262
dataset_size: 12552351033
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13639025504
num_examples: 2098
download_size: 13684510964
dataset_size: 13639025504
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13225693604
num_examples: 2075
download_size: 13268995142
dataset_size: 13225693604
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13203724810
num_examples: 2072
download_size: 13247867362
dataset_size: 13203724810
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13475962024
num_examples: 2082
download_size: 13520419272
dataset_size: 13475962024
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13166470674
num_examples: 2047
download_size: 13210657227
dataset_size: 13166470674
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16241249478
num_examples: 2084
download_size: 16289825702
dataset_size: 16241249478
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13156959042
num_examples: 2061
download_size: 13200614257
dataset_size: 13156959042
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12721289739
num_examples: 2038
download_size: 12764515364
dataset_size: 12721289739
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12802296601
num_examples: 2064
download_size: 12845482466
dataset_size: 12802296601
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12968370983
num_examples: 2066
download_size: 13011547079
dataset_size: 12968370983
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12792160309
num_examples: 2051
download_size: 12833884535
dataset_size: 12792160309
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12463652790
num_examples: 2037
download_size: 12505510781
dataset_size: 12463652790
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12654197117
num_examples: 1994
download_size: 12695596145
dataset_size: 12654197117
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13209950907
num_examples: 2098
download_size: 13253278616
dataset_size: 13209950907
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12655852110
num_examples: 2030
download_size: 12698717748
dataset_size: 12655852110
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12188647978
num_examples: 2011
download_size: 12229649309
dataset_size: 12188647978
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16112182005
num_examples: 2093
download_size: 16160971854
dataset_size: 16112182005
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12525486379
num_examples: 2042
download_size: 12567361616
dataset_size: 12525486379
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12859955790
num_examples: 2045
download_size: 12903417845
dataset_size: 12859955790
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12288594576
num_examples: 2020
download_size: 12330498284
dataset_size: 12288594576
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11963665176
num_examples: 1974
download_size: 12004132469
dataset_size: 11963665176
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12196863288
num_examples: 2024
download_size: 12238959398
dataset_size: 12196863288
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12237325419
num_examples: 2002
download_size: 12279225283
dataset_size: 12237325419
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11956757177
num_examples: 1973
download_size: 11997239064
dataset_size: 11956757177
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11897495704
num_examples: 1944
download_size: 11937567574
dataset_size: 11897495704
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11717282430
num_examples: 1939
download_size: 11756916623
dataset_size: 11717282430
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: deA.id
dtype: string
- name: deA.laser_score
dtype: float64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: deA.audio.speaker_embedding
sequence: float32
- name: deA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11882601742
num_examples: 2008
download_size: 11923611047
dataset_size: 11882601742
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
- config_name: subset_224
data_files:
- split: train
path: subset_224/train-*
- config_name: subset_225
data_files:
- split: train
path: subset_225/train-*
- config_name: subset_226
data_files:
- split: train
path: subset_226/train-*
- config_name: subset_227
data_files:
- split: train
path: subset_227/train-*
- config_name: subset_228
data_files:
- split: train
path: subset_228/train-*
- config_name: subset_229
data_files:
- split: train
path: subset_229/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_230
data_files:
- split: train
path: subset_230/train-*
- config_name: subset_231
data_files:
- split: train
path: subset_231/train-*
- config_name: subset_232
data_files:
- split: train
path: subset_232/train-*
- config_name: subset_233
data_files:
- split: train
path: subset_233/train-*
- config_name: subset_234
data_files:
- split: train
path: subset_234/train-*
- config_name: subset_235
data_files:
- split: train
path: subset_235/train-*
- config_name: subset_236
data_files:
- split: train
path: subset_236/train-*
- config_name: subset_237
data_files:
- split: train
path: subset_237/train-*
- config_name: subset_238
data_files:
- split: train
path: subset_238/train-*
- config_name: subset_239
data_files:
- split: train
path: subset_239/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_240
data_files:
- split: train
path: subset_240/train-*
- config_name: subset_241
data_files:
- split: train
path: subset_241/train-*
- config_name: subset_242
data_files:
- split: train
path: subset_242/train-*
- config_name: subset_243
data_files:
- split: train
path: subset_243/train-*
- config_name: subset_244
data_files:
- split: train
path: subset_244/train-*
- config_name: subset_245
data_files:
- split: train
path: subset_245/train-*
- config_name: subset_246
data_files:
- split: train
path: subset_246/train-*
- config_name: subset_247
data_files:
- split: train
path: subset_247/train-*
- config_name: subset_248
data_files:
- split: train
path: subset_248/train-*
- config_name: subset_249
data_files:
- split: train
path: subset_249/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_250
data_files:
- split: train
path: subset_250/train-*
- config_name: subset_251
data_files:
- split: train
path: subset_251/train-*
- config_name: subset_252
data_files:
- split: train
path: subset_252/train-*
- config_name: subset_253
data_files:
- split: train
path: subset_253/train-*
- config_name: subset_254
data_files:
- split: train
path: subset_254/train-*
- config_name: subset_255
data_files:
- split: train
path: subset_255/train-*
- config_name: subset_256
data_files:
- split: train
path: subset_256/train-*
- config_name: subset_257
data_files:
- split: train
path: subset_257/train-*
- config_name: subset_258
data_files:
- split: train
path: subset_258/train-*
- config_name: subset_259
data_files:
- split: train
path: subset_259/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_260
data_files:
- split: train
path: subset_260/train-*
- config_name: subset_261
data_files:
- split: train
path: subset_261/train-*
- config_name: subset_262
data_files:
- split: train
path: subset_262/train-*
- config_name: subset_263
data_files:
- split: train
path: subset_263/train-*
- config_name: subset_264
data_files:
- split: train
path: subset_264/train-*
- config_name: subset_265
data_files:
- split: train
path: subset_265/train-*
- config_name: subset_266
data_files:
- split: train
path: subset_266/train-*
- config_name: subset_267
data_files:
- split: train
path: subset_267/train-*
- config_name: subset_268
data_files:
- split: train
path: subset_268/train-*
- config_name: subset_269
data_files:
- split: train
path: subset_269/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_270
data_files:
- split: train
path: subset_270/train-*
- config_name: subset_271
data_files:
- split: train
path: subset_271/train-*
- config_name: subset_272
data_files:
- split: train
path: subset_272/train-*
- config_name: subset_273
data_files:
- split: train
path: subset_273/train-*
- config_name: subset_274
data_files:
- split: train
path: subset_274/train-*
- config_name: subset_275
data_files:
- split: train
path: subset_275/train-*
- config_name: subset_276
data_files:
- split: train
path: subset_276/train-*
- config_name: subset_277
data_files:
- split: train
path: subset_277/train-*
- config_name: subset_278
data_files:
- split: train
path: subset_278/train-*
- config_name: subset_279
data_files:
- split: train
path: subset_279/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_280
data_files:
- split: train
path: subset_280/train-*
- config_name: subset_281
data_files:
- split: train
path: subset_281/train-*
- config_name: subset_282
data_files:
- split: train
path: subset_282/train-*
- config_name: subset_283
data_files:
- split: train
path: subset_283/train-*
- config_name: subset_284
data_files:
- split: train
path: subset_284/train-*
- config_name: subset_285
data_files:
- split: train
path: subset_285/train-*
- config_name: subset_286
data_files:
- split: train
path: subset_286/train-*
- config_name: subset_287
data_files:
- split: train
path: subset_287/train-*
- config_name: subset_288
data_files:
- split: train
path: subset_288/train-*
- config_name: subset_289
data_files:
- split: train
path: subset_289/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_290
data_files:
- split: train
path: subset_290/train-*
- config_name: subset_291
data_files:
- split: train
path: subset_291/train-*
- config_name: subset_292
data_files:
- split: train
path: subset_292/train-*
- config_name: subset_293
data_files:
- split: train
path: subset_293/train-*
- config_name: subset_294
data_files:
- split: train
path: subset_294/train-*
- config_name: subset_295
data_files:
- split: train
path: subset_295/train-*
- config_name: subset_296
data_files:
- split: train
path: subset_296/train-*
- config_name: subset_297
data_files:
- split: train
path: subset_297/train-*
- config_name: subset_298
data_files:
- split: train
path: subset_298/train-*
- config_name: subset_299
data_files:
- split: train
path: subset_299/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_300
data_files:
- split: train
path: subset_300/train-*
- config_name: subset_301
data_files:
- split: train
path: subset_301/train-*
- config_name: subset_302
data_files:
- split: train
path: subset_302/train-*
- config_name: subset_303
data_files:
- split: train
path: subset_303/train-*
- config_name: subset_304
data_files:
- split: train
path: subset_304/train-*
- config_name: subset_305
data_files:
- split: train
path: subset_305/train-*
- config_name: subset_306
data_files:
- split: train
path: subset_306/train-*
- config_name: subset_307
data_files:
- split: train
path: subset_307/train-*
- config_name: subset_308
data_files:
- split: train
path: subset_308/train-*
- config_name: subset_309
data_files:
- split: train
path: subset_309/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_310
data_files:
- split: train
path: subset_310/train-*
- config_name: subset_311
data_files:
- split: train
path: subset_311/train-*
- config_name: subset_312
data_files:
- split: train
path: subset_312/train-*
- config_name: subset_313
data_files:
- split: train
path: subset_313/train-*
- config_name: subset_314
data_files:
- split: train
path: subset_314/train-*
- config_name: subset_315
data_files:
- split: train
path: subset_315/train-*
- config_name: subset_316
data_files:
- split: train
path: subset_316/train-*
- config_name: subset_317
data_files:
- split: train
path: subset_317/train-*
- config_name: subset_318
data_files:
- split: train
path: subset_318/train-*
- config_name: subset_319
data_files:
- split: train
path: subset_319/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_320
data_files:
- split: train
path: subset_320/train-*
- config_name: subset_321
data_files:
- split: train
path: subset_321/train-*
- config_name: subset_322
data_files:
- split: train
path: subset_322/train-*
- config_name: subset_323
data_files:
- split: train
path: subset_323/train-*
- config_name: subset_324
data_files:
- split: train
path: subset_324/train-*
- config_name: subset_325
data_files:
- split: train
path: subset_325/train-*
- config_name: subset_326
data_files:
- split: train
path: subset_326/train-*
- config_name: subset_327
data_files:
- split: train
path: subset_327/train-*
- config_name: subset_328
data_files:
- split: train
path: subset_328/train-*
- config_name: subset_329
data_files:
- split: train
path: subset_329/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_330
data_files:
- split: train
path: subset_330/train-*
- config_name: subset_331
data_files:
- split: train
path: subset_331/train-*
- config_name: subset_332
data_files:
- split: train
path: subset_332/train-*
- config_name: subset_333
data_files:
- split: train
path: subset_333/train-*
- config_name: subset_334
data_files:
- split: train
path: subset_334/train-*
- config_name: subset_335
data_files:
- split: train
path: subset_335/train-*
- config_name: subset_336
data_files:
- split: train
path: subset_336/train-*
- config_name: subset_337
data_files:
- split: train
path: subset_337/train-*
- config_name: subset_338
data_files:
- split: train
path: subset_338/train-*
- config_name: subset_339
data_files:
- split: train
path: subset_339/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_340
data_files:
- split: train
path: subset_340/train-*
- config_name: subset_341
data_files:
- split: train
path: subset_341/train-*
- config_name: subset_342
data_files:
- split: train
path: subset_342/train-*
- config_name: subset_343
data_files:
- split: train
path: subset_343/train-*
- config_name: subset_344
data_files:
- split: train
path: subset_344/train-*
- config_name: subset_345
data_files:
- split: train
path: subset_345/train-*
- config_name: subset_346
data_files:
- split: train
path: subset_346/train-*
- config_name: subset_347
data_files:
- split: train
path: subset_347/train-*
- config_name: subset_348
data_files:
- split: train
path: subset_348/train-*
- config_name: subset_349
data_files:
- split: train
path: subset_349/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_350
data_files:
- split: train
path: subset_350/train-*
- config_name: subset_351
data_files:
- split: train
path: subset_351/train-*
- config_name: subset_352
data_files:
- split: train
path: subset_352/train-*
- config_name: subset_353
data_files:
- split: train
path: subset_353/train-*
- config_name: subset_354
data_files:
- split: train
path: subset_354/train-*
- config_name: subset_355
data_files:
- split: train
path: subset_355/train-*
- config_name: subset_356
data_files:
- split: train
path: subset_356/train-*
- config_name: subset_357
data_files:
- split: train
path: subset_357/train-*
- config_name: subset_358
data_files:
- split: train
path: subset_358/train-*
- config_name: subset_359
data_files:
- split: train
path: subset_359/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_360
data_files:
- split: train
path: subset_360/train-*
- config_name: subset_361
data_files:
- split: train
path: subset_361/train-*
- config_name: subset_362
data_files:
- split: train
path: subset_362/train-*
- config_name: subset_363
data_files:
- split: train
path: subset_363/train-*
- config_name: subset_364
data_files:
- split: train
path: subset_364/train-*
- config_name: subset_365
data_files:
- split: train
path: subset_365/train-*
- config_name: subset_366
data_files:
- split: train
path: subset_366/train-*
- config_name: subset_367
data_files:
- split: train
path: subset_367/train-*
- config_name: subset_368
data_files:
- split: train
path: subset_368/train-*
- config_name: subset_369
data_files:
- split: train
path: subset_369/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_370
data_files:
- split: train
path: subset_370/train-*
- config_name: subset_371
data_files:
- split: train
path: subset_371/train-*
- config_name: subset_372
data_files:
- split: train
path: subset_372/train-*
- config_name: subset_373
data_files:
- split: train
path: subset_373/train-*
- config_name: subset_374
data_files:
- split: train
path: subset_374/train-*
- config_name: subset_375
data_files:
- split: train
path: subset_375/train-*
- config_name: subset_376
data_files:
- split: train
path: subset_376/train-*
- config_name: subset_377
data_files:
- split: train
path: subset_377/train-*
- config_name: subset_378
data_files:
- split: train
path: subset_378/train-*
- config_name: subset_379
data_files:
- split: train
path: subset_379/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_380
data_files:
- split: train
path: subset_380/train-*
- config_name: subset_381
data_files:
- split: train
path: subset_381/train-*
- config_name: subset_382
data_files:
- split: train
path: subset_382/train-*
- config_name: subset_383
data_files:
- split: train
path: subset_383/train-*
- config_name: subset_384
data_files:
- split: train
path: subset_384/train-*
- config_name: subset_385
data_files:
- split: train
path: subset_385/train-*
- config_name: subset_386
data_files:
- split: train
path: subset_386/train-*
- config_name: subset_387
data_files:
- split: train
path: subset_387/train-*
- config_name: subset_388
data_files:
- split: train
path: subset_388/train-*
- config_name: subset_389
data_files:
- split: train
path: subset_389/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_390
data_files:
- split: train
path: subset_390/train-*
- config_name: subset_391
data_files:
- split: train
path: subset_391/train-*
- config_name: subset_392
data_files:
- split: train
path: subset_392/train-*
- config_name: subset_393
data_files:
- split: train
path: subset_393/train-*
- config_name: subset_394
data_files:
- split: train
path: subset_394/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
calabi-yau-data/ws-5d | calabi-yau-data | "2024-01-18T12:48:46Z" | 12,469 | 5 | [
"license:cc-by-sa-4.0",
"size_categories:100B<n<1T",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1808.02422",
"region:us",
"physics",
"math"
] | null | "2023-12-21T17:16:10Z" | ---
license: cc-by-sa-4.0
pretty_name: Weight Systems Defining Five-Dimensional IP Lattice Polytopes
configs:
- config_name: non-reflexive
data_files:
- split: full
path: non-reflexive/*.parquet
- config_name: reflexive
data_files:
- split: full
path: reflexive/*.parquet
size_categories:
- 100B<n<1T
tags:
- physics
- math
---
# Weight Systems Defining Five-Dimensional IP Lattice Polytopes
This dataset contains all weight systems defining five-dimensional reflexive and
non-reflexive IP lattice polytopes, instrumental in the study of Calabi-Yau fourfolds in
mathematics and theoretical physics. The data was compiled by Harald Skarke and Friedrich
Schöller in [arXiv:1808.02422](https://arxiv.org/abs/1808.02422). More information is
available at the [Calabi-Yau data website](http://hep.itp.tuwien.ac.at/~kreuzer/CY/). The
dataset can be explored using the [search
frontend](http://rgc.itp.tuwien.ac.at/fourfolds/). See below for a short mathematical
exposition on the construction of polytopes.
Please cite the paper when referencing this dataset:
```
@article{Scholler:2018apc,
author = {Schöller, Friedrich and Skarke, Harald},
title = "{All Weight Systems for Calabi-Yau Fourfolds from Reflexive Polyhedra}",
eprint = "1808.02422",
archivePrefix = "arXiv",
primaryClass = "hep-th",
doi = "10.1007/s00220-019-03331-9",
journal = "Commun. Math. Phys.",
volume = "372",
number = "2",
pages = "657--678",
year = "2019"
}
```
## Dataset Details
The dataset consists of two subsets: weight systems defining reflexive (and therefore IP)
polytopes and weight systems defining non-reflexive IP polytopes. Each subset is split
into 4000 files in Parquet format. Rows within each file are sorted lexicographically by
weights. There are 185,269,499,015 weight systems defining reflexive polytopes and
137,114,261,915 defining non-reflexive polytopes, making a total of 322,383,760,930 IP
weight systems.
Each row in the dataset represents a polytope and contains the six weights defining it,
along with the vertex count, facet count, and lattice point count. The reflexive dataset
also includes the Hodge numbers \\( h^{1,1} \\), \\( h^{1,2} \\), and \\( h^{1,3} \\) of
the corresponding Calabi-Yau manifold, and the lattice point count of the dual polytope.
For any Calabi-Yau fourfold, the Euler characteristic \\( \chi \\) and the Hodge number
\\( h^{2,2} \\) can be derived as follows:
$$ \chi = 48 + 6 (h^{1,1} - h^{1,2} + h^{1,3}) $$
$$ h^{2,2} = 44 + 4 h^{1,1} - 2 h^{1,2} + 4 h^{1,3} $$
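A minimal sketch of these relations in code, using the reflexive-subset fields `h11`,
`h12`, and `h13` described under Data Fields below:
```python
# Minimal sketch: derive chi and h^{2,2} from the stored Hodge numbers of a
# reflexive row (field names as listed under "Data Fields").
def derived_invariants(row):
    h11, h12, h13 = row["h11"], row["h12"], row["h13"]
    chi = 48 + 6 * (h11 - h12 + h13)
    h22 = 44 + 4 * h11 - 2 * h12 + 4 * h13
    return {"chi": chi, "h22": h22}
```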
This dataset is licensed under the
[CC BY-SA 4.0 license](http://creativecommons.org/licenses/by-sa/4.0/).
### Data Fields
- `weight0` to `weight5`: Weights of the weight system defining the polytope.
- `vertex_count`: Vertex count of the polytope.
- `facet_count`: Facet count of the polytope.
- `point_count`: Lattice point count of the polytope.
- `dual_point_count`: Lattice point count of the dual polytope (only for reflexive
polytopes).
- `h11`: Hodge number \\( h^{1,1} \\) (only for reflexive polytopes).
- `h12`: Hodge number \\( h^{1,2} \\) (only for reflexive polytopes).
- `h13`: Hodge number \\( h^{1,3} \\) (only for reflexive polytopes).
## Usage
The dataset can be used without downloading it entirely, thanks to the streaming
capability of the `datasets` library. The following Python code snippet demonstrates how
to stream the dataset and print the first five rows:
```python
from datasets import load_dataset
dataset = load_dataset("calabi-yau-data/ws-5d", name="reflexive", split="full", streaming=True)
for row in dataset.take(5):
print(row)
```
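Streaming also supports on-the-fly filtering. As a minimal sketch, the following keeps
only rows whose \\( h^{1,1} \\) equals 1 (the chosen value is purely illustrative):
```python
from datasets import load_dataset
# Minimal sketch: stream the reflexive subset and keep rows with h11 == 1.
dataset = load_dataset("calabi-yau-data/ws-5d", name="reflexive", split="full", streaming=True)
small_h11 = dataset.filter(lambda row: row["h11"] == 1)
for row in small_h11.take(3):
    print([row[f"weight{i}"] for i in range(6)], row["h11"])
```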
When cloning the Git repository with Git Large File Storage (LFS), data files are stored
both in the Git LFS storage directory and in the working tree. To avoid occupying double
the disk space, use a filesystem that supports copy-on-write, and run the following
commands to clone the repository:
```bash
# Initialize Git LFS
git lfs install
# Clone the repository without downloading LFS files immediately
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/calabi-yau-data/ws-5d
# Change to the repository directory
cd ws-5d
# Test deduplication (optional)
git lfs dedup --test
# Download the LFS files
git lfs fetch
# Create working tree files as clones of the files in the Git LFS storage directory using
# copy-on-write functionality
git lfs dedup
```
## Construction of Polytopes
This is an introduction to the mathematics involved in the construction of polytopes
relevant to this dataset. For more details and precise definitions, consult the paper
[arXiv:1808.02422](https://arxiv.org/abs/1808.02422) and references therein.
### Polytopes
A polytope is the convex hull of a finite set of points in \\(n\\)-dimensional Euclidean
space, \\(\mathbb{R}^n\\). This means it is the smallest convex shape that contains all
these points. The minimal collection of points that define a particular polytope are its
vertices. Familiar examples of polytopes include triangles and rectangles in two
dimensions, and cubes and octahedra in three dimensions.
A polytope is considered an *IP polytope* (interior point polytope) if the origin of
\\(\mathbb{R}^n\\) is in the interior of the polytope, not on its boundary or outside it.
For any IP polytope \\(\nabla\\), its dual polytope \\(\nabla^*\\) is defined as the set
of points \\(\mathbf{y}\\) satisfying
$$
\mathbf{x} \cdot \mathbf{y}
\ge -1 \quad \text{for all } \mathbf{x} \in \nabla \;.
$$
This relationship is symmetric: the dual of the dual of an IP polytope is the polytope
itself, i.e., \\( \nabla^{**} = \nabla \\).
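Because the defining inequality is linear in \\(\mathbf{x}\\), it suffices to check it on
the vertices of \\(\nabla\\). A minimal sketch of such a membership test:
```python
import numpy as np
# Minimal sketch: test whether a point y lies in the dual polytope nabla*.
# Since x . y >= -1 is linear in x, checking the vertices of nabla suffices.
def in_dual_polytope(nabla_vertices, y, tol=1e-9):
    V = np.asarray(nabla_vertices, dtype=float)
    return bool(np.all(V @ np.asarray(y, dtype=float) >= -1.0 - tol))
# For the square with vertices (+-1, +-1), the dual is the diamond |y1| + |y2| <= 1.
square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
print(in_dual_polytope(square, (0.5, -0.5)))  # True (boundary of the dual)
print(in_dual_polytope(square, (2.0, 0.0)))   # False
```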
### Weight Systems
Weight systems provide a means to describe simple polytopes known as *simplices*. A weight
system is a tuple of real numbers. The construction process is outlined as follows:
Consider an \\(n\\)-dimensional simplex in \\(\mathbb{R}^n\\), i.e., a polytope in
\\(\mathbb{R}^n\\) with vertex count \\(n + 1\\) and \\(n\\) of its edges extending in
linearly independent directions. It is possible to position \\(n\\) of its vertices at
arbitrary (linearly independent) locations through a linear transformation. The placement
of the remaining vertex is then determined. Its position is the defining property of the
simplex. To specify the position independently of the applied linear transformation, one
can use the following equation. If \\(\mathbf{v}_0, \mathbf{v}_1, \dots, \mathbf{v}_n\\)
are the vertices of the simplex, this relation fixes one vertex in terms of the other
\\(n\\):
$$ \sum_{i=0}^n q_i \mathbf{v}_i = 0 \;, $$
where the numbers \\(q_i\\) form the tuple of real numbers, the weight system.
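As a small illustration, the remaining vertex can be recovered numerically from the
weights; the placement of the first \\(n\\) vertices below is an arbitrary choice, and the
resulting coordinates need not be integral (the relevant lattice is discussed further
down):
```python
import numpy as np
# Minimal sketch: given a weight system (q_0, ..., q_n) and n freely chosen
# vertices, the last vertex is fixed by sum_i q_i * v_i = 0.
def dependent_vertex(weights, vertices):
    q = np.asarray(weights, dtype=float)   # length n + 1
    v = np.asarray(vertices, dtype=float)  # shape (n, n)
    return -(q[:-1, None] * v).sum(axis=0) / q[-1]
# Weight system (1, 1, 2) with v0 = (1, 0) and v1 = (0, 1):
print(dependent_vertex([1, 1, 2], [(1, 0), (0, 1)]))  # [-0.5 -0.5]
```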
It is important to note that scaling all weights in a weight system by a common factor
results in an equivalent weight system that defines the same simplex.
A simplex is an IP simplex if and only if all weights in its weight system are strictly
positive.
For this dataset, the focus is on a specific construction of lattice polytopes described
in subsequent sections.
### Lattice Polytopes
A lattice polytope is a polytope with vertices at the points of a regular grid, or
lattice. Using linear transformations, any lattice polytope can be transformed so that its
vertices have integer coordinates, hence they are also referred to as integral
polytopes.
The dual of a lattice with points \\(L\\) is the lattice consisting of all points
\\(\mathbf{y}\\) that satisfy
$$
\mathbf{x} \cdot \mathbf{y} \in \mathbb{Z} \quad \text{for all } \mathbf{x} \in L \;.
$$
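If the lattice is given by a basis (the rows of an invertible matrix \\(B\\)), its dual is
generated by the rows of \\((B^{-1})^T\\); a minimal sketch of this standard fact:
```python
import numpy as np
# Minimal sketch: if the rows of B generate a lattice, the rows of inv(B).T
# generate its dual, so all pairwise dot products are integers.
def dual_lattice_basis(B):
    return np.linalg.inv(np.asarray(B, dtype=float)).T
B = np.array([[2.0, 0.0], [1.0, 3.0]])
D = dual_lattice_basis(B)
print(B @ D.T)  # identity matrix here, so products of lattice points are integral
```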
*Reflexive polytopes* are a specific type of lattice polytope characterized by having a
dual that is also a lattice polytope, with vertices situated on the dual lattice. These
polytopes play a central role in the context of this dataset.
The weights of a lattice polytope are always rational. This characteristic enables the
rescaling of a weight system so that its weights become integers without any common
divisor. This rescaling has been performed in this dataset.
The construction of the lattice polytopes from this dataset works as follows: We start
with the simplex \\(\nabla\\), arising from a weight system as previously described. Then,
we define the polytope \\(\Delta\\) as the convex hull of the intersection of
\\(\nabla^*\\) with the points of the dual lattice. In the context of this dataset, the
polytope \\(\Delta\\) is referred to as ‘the polytope’. Correspondingly,
\\(\Delta^{\!*}\\) is referred to as ‘the dual polytope’. The lattice of \\(\nabla\\) and
\\(\Delta^{\!*}\\) is taken to be the coarsest lattice possible, such that \\(\nabla\\) is
a lattice polytope, i.e., the lattice generated by the vertices of \\(\nabla\\). This
construction is exemplified in the following sections.
A weight system is considered an IP weight system if the corresponding \\(\Delta\\) is an
IP polytope; that is, the origin is within its interior. Since only IP polytopes have
corresponding dual polytopes, this condition is essential for the polytope \\(\Delta\\) to
be classified as reflexive.
### Two Dimensions
In two dimensions, all IP weight systems define reflexive polytopes and every vertex of
\\(\nabla^*\\) lies on the dual lattice, making \\(\Delta\\) and \\(\nabla^*\\) identical.
There are exactly three IP weight systems that define two-dimensional polytopes
(polygons). Each polytope is reflexive and has three vertices and three facets (edges):
| weight system | number of points of \\(\nabla\\) | number of points of \\(\nabla^*\\) |
|--------------:|---------------------------------:|-----------------------------------:|
| (1, 1, 1) | 4 | 10 |
| (1, 1, 2) | 5 | 9 |
| (1, 2, 3) | 7 | 7 |
The polytopes and their duals are depicted below. Lattice points are indicated by dots.
<img src="pictures/ws-2d.png" style="display: block; margin-left: auto; margin-right: auto; width:520px;">
### General Dimension
In higher dimensions, the situation becomes more complex. Not all IP polytopes are
reflexive, and generally, \\(\Delta \neq \nabla^*\\).
This example shows the construction of the three-dimensional polytope \\(\Delta\\) with
weight system (2, 3, 4, 5) and its dual \\(\Delta^{\!*}\\). Lattice points lying on the
polytopes are indicated by dots. \\(\Delta\\) has 7 vertices and 13 lattice points,
\\(\Delta^{\!*}\\) also has 7 vertices, but 16 lattice points.
<img src="pictures/ws-3d-2-3-4-5.png" style="display: block; margin-left: auto; margin-right: auto; width:450px;">
The counts of reflexive single-weight-system polytopes by dimension \\(n\\) are:
| \\(n\\) | reflexive single-weight-system polytopes |
|--------:|-----------------------------------------:|
| 2 | 3 |
| 3 | 95 |
| 4 | 184,026 |
| 5 | (this dataset) 185,269,499,015 |
One should note that distinct weight systems may well lead to the same polytope (we have not
checked how often this occurs). In particular, polytopes with a small number of lattice
points appear to be generated many times.
|
openlifescienceai/medmcqa | openlifescienceai | "2024-01-04T14:32:02Z" | 12,456 | 136 | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering",
"multiple-choice"
] | "2022-05-06T08:43:24Z" | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: medmcqa
pretty_name: MedMCQA
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: opa
dtype: string
- name: opb
dtype: string
- name: opc
dtype: string
- name: opd
dtype: string
- name: cop
dtype:
class_label:
names:
'0': a
'1': b
'2': c
'3': d
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
splits:
- name: train
num_bytes: 131903297
num_examples: 182822
- name: test
num_bytes: 1399350
num_examples: 6150
- name: validation
num_bytes: 2221428
num_examples: 4183
download_size: 88311487
dataset_size: 135524075
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for MedMCQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://medmcqa.github.io
- **Repository:** https://github.com/medmcqa/medmcqa
- **Paper:** [MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering](https://proceedings.mlr.press/v174/pal22a)
- **Leaderboard:** https://paperswithcode.com/dataset/medmcqa
- **Point of Contact:** [Aaditya Ura](mailto:[email protected])
### Dataset Summary
MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, collected with an average token length of 12.77 and high topical diversity.
Each sample contains a question, correct answer(s), and other options, which together require deeper language understanding, as the dataset tests 10+ reasoning abilities of a model across a wide range of medical subjects & topics. A detailed explanation of the solution, along with the above information, is provided in this study.
MedMCQA provides an open-source dataset for the Natural Language Processing community.
It is expected that this dataset would facilitate future research toward achieving better QA systems.
The dataset contains questions about the following topics:
- Anesthesia
- Anatomy
- Biochemistry
- Dental
- ENT
- Forensic Medicine (FM)
- Obstetrics and Gynecology (O&G)
- Medicine
- Microbiology
- Ophthalmology
- Orthopedics
- Pathology
- Pediatrics
- Pharmacology
- Physiology
- Psychiatry
- Radiology
- Skin
- Preventive & Social Medicine (PSM)
- Surgery
### Supported Tasks and Leaderboards
multiple-choice-QA, open-domain-QA: The dataset can be used to train models for multiple-choice question answering and open-domain question answering. Questions in these exams are challenging and generally require deeper domain and language understanding, as they test 10+ reasoning abilities across a wide range of medical subjects & topics.
### Languages
The questions and answers are available in English.
## Dataset Structure
### Data Instances
```
{
"question":"A 40-year-old man presents with 5 days of productive cough and fever. Pseudomonas aeruginosa is isolated from a pulmonary abscess. CBC shows an acute effect characterized by marked leukocytosis (50,000 mL) and the differential count reveals a shift to left in granulocytes. Which of the following terms best describes these hematologic findings?",
"exp": "Circulating levels of leukocytes and their precursors may occasionally reach very high levels (>50,000 WBC mL). These extreme elevations are sometimes called leukemoid reactions because they are similar to the white cell counts observed in leukemia, from which they must be distinguished. The leukocytosis occurs initially because of the accelerated release of granulocytes from the bone marrow (caused by cytokines, including TNF and IL-1) There is a rise in the number of both mature and immature neutrophils in the blood, referred to as a shift to the left. In contrast to bacterial infections, viral infections (including infectious mononucleosis) are characterized by lymphocytosis Parasitic infestations and certain allergic reactions cause eosinophilia, an increase in the number of circulating eosinophils. Leukopenia is defined as an absolute decrease in the circulating WBC count.",
"cop":1,
"opa":"Leukemoid reaction",
"opb":"Leukopenia",
"opc":"Myeloid metaplasia",
"opd":"Neutrophilia",
"subject_name":"Pathology",
"topic_name":"Basic Concepts and Vascular changes of Acute Inflammation",
"id":"4e1715fe-0bc3-494e-b6eb-2d4617245aef",
"choice_type":"single"
}
```
### Data Fields
- `id` : a string question identifier for each example
- `question` : question text (a string)
- `opa` : Option A
- `opb` : Option B
- `opc` : Option C
- `opd` : Option D
- `cop` : Correct option, i.e., 1,2,3,4
- `choice_type` ({"single", "multi"}): Question choice type.
- "single": Single-choice question, where each choice contains a single option.
- "multi": Multi-choice question, where each choice contains a combination of multiple suboptions.
- `exp` : Expert's explanation of the answer
- `subject_name` : Medical Subject name of the particular question
- `topic_name` : Medical topic name from the particular subject
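Since `cop` is stored as a class label (see the `dataset_info` metadata above), the answer letter and option text can be recovered through the feature itself rather than by hard-coding an index convention. A minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("openlifescienceai/medmcqa", split="validation")
ex = ds[0]

letter = ds.features["cop"].int2str(ex["cop"])   # 'a', 'b', 'c' or 'd'
print(ex["question"])
print(f"correct option: ({letter}) {ex['op' + letter]}")
```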
### Data Splits
The goal of MedMCQA is to emulate the rigor of real-world medical exams. To enable that, a predefined split of the dataset is provided. The split is by exam rather than by individual questions, which also ensures the reusability and generalization ability of the models.
The training set of MedMCQA consists of all the collected mock & online test series, whereas the test set consists of all AIIMS PG exam MCQs (years 1991-present). The development set consists of NEET PG exam MCQs (years 2001-present) to approximate real exam evaluation.
Similar questions across the train, test, and dev sets were removed based on similarity. The final split sizes are as follows:
| | Train | Test | Valid |
| ----- | ------ | ----- | ---- |
| Question # | 182,822 | 6,150 | 4,183 |
| Vocab | 94,231 | 11,218 | 10,800 |
| Max question tokens | 220 | 135 | 88 |
| Max answer tokens | 38 | 21 | 25 |
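The three exam-based splits can be loaded directly with the `datasets` library; a minimal sketch:

```python
from datasets import load_dataset

medmcqa = load_dataset("openlifescienceai/medmcqa")
for split, data in medmcqa.items():
    print(split, len(data))
# Expected per the metadata above: train 182,822 / validation 4,183 / test 6,150.
```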
## Dataset Creation
### Curation Rationale
Before this attempt, very little work had been done on constructing biomedical MCQA datasets (Vilares and Gómez-Rodríguez, 2019), and existing ones are (1) mostly small, containing up to a few thousand questions, and (2) cover a limited number of medical topics and subjects. This paper addresses these limitations by introducing MedMCQA, a new large-scale, Multiple-Choice Question Answering
(MCQA) dataset designed to address real-world medical entrance exam questions.
### Source Data
#### Initial Data Collection and Normalization
Historical exam questions from official websites - AIIMS & NEET PG (1991-present)
The raw data is collected from open websites and books
#### Who are the source language producers?
The dataset was created by Ankit Pal, Logesh Kumar Umapathi and Malaikannan Sankarasubbu
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
If you find this useful in your research, please consider citing the dataset paper
```
@InProceedings{pmlr-v174-pal22a,
title = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
author = {Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan},
booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
pages = {248--260},
year = {2022},
editor = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
volume = {174},
series = {Proceedings of Machine Learning Research},
month = {07--08 Apr},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v174/pal22a/pal22a.pdf},
url = {https://proceedings.mlr.press/v174/pal22a.html},
abstract = {This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across a wide range of medical subjects & topics. A detailed explanation of the solution, along with the above information, is provided in this study.}
}
```
### Contributions
Thanks to [@monk1337](https://github.com/monk1337) for adding this dataset. |
IPEC-COMMUNITY/bc_z_lerobot | IPEC-COMMUNITY | "2025-03-15T08:49:20Z" | 12,423 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"modality:video",
"region:us",
"LeRobot",
"bc_z",
"rlds",
"openx",
"google_robot"
] | [
"robotics"
] | "2025-03-15T02:24:39Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- LeRobot
- bc_z
- rlds
- openx
- google_robot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "google_robot",
"total_episodes": 39350,
"total_frames": 5471693,
"total_tasks": 104,
"total_videos": 39350,
"total_chunks": 40,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:39350"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.image": {
"dtype": "video",
"shape": [
171,
213,
3
],
"names": [
"height",
"width",
"rgb"
],
"info": {
"video.fps": 10.0,
"video.height": 172,
"video.width": 214,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"shape": [
8
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"pad",
"gripper"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
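Because the card declares a default parquet config over `data/*/*.parquet`, the tabular features can be inspected with the `datasets` library (the camera stream lives in separate mp4 files under `videos/`, as described in `info.json`). A minimal sketch, streaming to avoid downloading all ~5.5M frames:

```python
from datasets import load_dataset

# Tabular part of the dataset only; the "observation.images.image" video frames
# are stored as separate mp4 files and are not contained in the parquet shards.
ds = load_dataset("IPEC-COMMUNITY/bc_z_lerobot", split="train", streaming=True)
frame = next(iter(ds))
print(frame["observation.state"])   # 8 values: x, y, z, roll, pitch, yaw, pad, gripper
print(frame["action"])              # 7 values: x, y, z, roll, pitch, yaw, gripper
print(frame["episode_index"], frame["frame_index"])
```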
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
davanstrien/AmericanStories-parquet | davanstrien | "2023-10-20T15:09:54Z" | 12,413 | 2 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-10-20T14:07:00Z" | ---
configs:
- config_name: default
data_files:
- split: '1774'
path: data/1774-*
- split: '1798'
path: data/1798-*
- split: '1799'
path: data/1799-*
- split: '1800'
path: data/1800-*
- split: '1801'
path: data/1801-*
- split: '1802'
path: data/1802-*
- split: '1803'
path: data/1803-*
- split: '1804'
path: data/1804-*
- split: '1805'
path: data/1805-*
- split: '1806'
path: data/1806-*
- split: '1807'
path: data/1807-*
- split: '1808'
path: data/1808-*
- split: '1809'
path: data/1809-*
- split: '1810'
path: data/1810-*
- split: '1811'
path: data/1811-*
- split: '1812'
path: data/1812-*
- split: '1813'
path: data/1813-*
- split: '1814'
path: data/1814-*
- split: '1815'
path: data/1815-*
- split: '1816'
path: data/1816-*
- split: '1817'
path: data/1817-*
- split: '1818'
path: data/1818-*
- split: '1819'
path: data/1819-*
- split: '1820'
path: data/1820-*
- split: '1821'
path: data/1821-*
- split: '1822'
path: data/1822-*
- split: '1823'
path: data/1823-*
- split: '1824'
path: data/1824-*
- split: '1825'
path: data/1825-*
- split: '1826'
path: data/1826-*
- split: '1827'
path: data/1827-*
- split: '1828'
path: data/1828-*
- split: '1829'
path: data/1829-*
- split: '1830'
path: data/1830-*
- split: '1831'
path: data/1831-*
- split: '1832'
path: data/1832-*
- split: '1833'
path: data/1833-*
- split: '1834'
path: data/1834-*
- split: '1835'
path: data/1835-*
- split: '1836'
path: data/1836-*
- split: '1837'
path: data/1837-*
- split: '1838'
path: data/1838-*
- split: '1839'
path: data/1839-*
- split: '1840'
path: data/1840-*
- split: '1841'
path: data/1841-*
- split: '1842'
path: data/1842-*
- split: '1843'
path: data/1843-*
- split: '1844'
path: data/1844-*
- split: '1845'
path: data/1845-*
- split: '1846'
path: data/1846-*
- split: '1847'
path: data/1847-*
- split: '1848'
path: data/1848-*
- split: '1849'
path: data/1849-*
- split: '1850'
path: data/1850-*
- split: '1851'
path: data/1851-*
- split: '1852'
path: data/1852-*
- split: '1853'
path: data/1853-*
- split: '1854'
path: data/1854-*
- split: '1855'
path: data/1855-*
- split: '1856'
path: data/1856-*
- split: '1857'
path: data/1857-*
- split: '1858'
path: data/1858-*
- split: '1859'
path: data/1859-*
- split: '1860'
path: data/1860-*
- split: '1861'
path: data/1861-*
- split: '1862'
path: data/1862-*
- split: '1863'
path: data/1863-*
- split: '1864'
path: data/1864-*
- split: '1865'
path: data/1865-*
- split: '1866'
path: data/1866-*
- split: '1867'
path: data/1867-*
- split: '1868'
path: data/1868-*
- split: '1869'
path: data/1869-*
- split: '1870'
path: data/1870-*
- split: '1871'
path: data/1871-*
- split: '1872'
path: data/1872-*
- split: '1873'
path: data/1873-*
- split: '1874'
path: data/1874-*
- split: '1875'
path: data/1875-*
- split: '1876'
path: data/1876-*
- split: '1877'
path: data/1877-*
- split: '1878'
path: data/1878-*
- split: '1879'
path: data/1879-*
- split: '1880'
path: data/1880-*
- split: '1881'
path: data/1881-*
- split: '1882'
path: data/1882-*
- split: '1883'
path: data/1883-*
- split: '1884'
path: data/1884-*
- split: '1885'
path: data/1885-*
- split: '1886'
path: data/1886-*
- split: '1887'
path: data/1887-*
- split: '1888'
path: data/1888-*
- split: '1889'
path: data/1889-*
- split: '1890'
path: data/1890-*
- split: '1891'
path: data/1891-*
- split: '1892'
path: data/1892-*
- split: '1893'
path: data/1893-*
- split: '1894'
path: data/1894-*
- split: '1895'
path: data/1895-*
- split: '1896'
path: data/1896-*
- split: '1897'
path: data/1897-*
- split: '1898'
path: data/1898-*
- split: '1899'
path: data/1899-*
- split: '1900'
path: data/1900-*
- split: '1901'
path: data/1901-*
- split: '1902'
path: data/1902-*
- split: '1903'
path: data/1903-*
- split: '1904'
path: data/1904-*
- split: '1905'
path: data/1905-*
- split: '1906'
path: data/1906-*
- split: '1907'
path: data/1907-*
- split: '1908'
path: data/1908-*
- split: '1909'
path: data/1909-*
- split: '1910'
path: data/1910-*
- split: '1911'
path: data/1911-*
- split: '1912'
path: data/1912-*
- split: '1913'
path: data/1913-*
- split: '1914'
path: data/1914-*
- split: '1915'
path: data/1915-*
- split: '1916'
path: data/1916-*
- split: '1917'
path: data/1917-*
- split: '1918'
path: data/1918-*
- split: '1919'
path: data/1919-*
- split: '1920'
path: data/1920-*
- split: '1921'
path: data/1921-*
- split: '1922'
path: data/1922-*
- split: '1923'
path: data/1923-*
- split: '1924'
path: data/1924-*
- split: '1925'
path: data/1925-*
- split: '1926'
path: data/1926-*
- split: '1927'
path: data/1927-*
- split: '1928'
path: data/1928-*
- split: '1929'
path: data/1929-*
- split: '1930'
path: data/1930-*
- split: '1931'
path: data/1931-*
- split: '1932'
path: data/1932-*
- split: '1933'
path: data/1933-*
- split: '1934'
path: data/1934-*
- split: '1935'
path: data/1935-*
- split: '1936'
path: data/1936-*
- split: '1937'
path: data/1937-*
- split: '1938'
path: data/1938-*
- split: '1939'
path: data/1939-*
- split: '1940'
path: data/1940-*
- split: '1941'
path: data/1941-*
- split: '1942'
path: data/1942-*
- split: '1943'
path: data/1943-*
- split: '1944'
path: data/1944-*
- split: '1945'
path: data/1945-*
- split: '1946'
path: data/1946-*
- split: '1947'
path: data/1947-*
- split: '1948'
path: data/1948-*
- split: '1949'
path: data/1949-*
- split: '1950'
path: data/1950-*
- split: '1951'
path: data/1951-*
- split: '1952'
path: data/1952-*
- split: '1953'
path: data/1953-*
- split: '1954'
path: data/1954-*
- split: '1955'
path: data/1955-*
- split: '1956'
path: data/1956-*
- split: '1957'
path: data/1957-*
- split: '1958'
path: data/1958-*
- split: '1959'
path: data/1959-*
- split: '1960'
path: data/1960-*
- split: '1961'
path: data/1961-*
- split: '1962'
path: data/1962-*
- split: '1963'
path: data/1963-*
dataset_info:
features:
- name: article_id
dtype: string
- name: newspaper_name
dtype: string
- name: edition
dtype: string
- name: date
dtype: string
- name: page
dtype: string
- name: headline
dtype: string
- name: byline
dtype: string
- name: article
dtype: string
splits:
- name: '1774'
num_bytes: 22245
num_examples: 12
- name: '1798'
num_bytes: 72288
num_examples: 73
- name: '1799'
num_bytes: 946373
num_examples: 623
- name: '1800'
num_bytes: 38139
num_examples: 45
- name: '1801'
num_bytes: 94991
num_examples: 93
- name: '1802'
num_bytes: 1463322
num_examples: 1158
- name: '1803'
num_bytes: 799797
num_examples: 654
- name: '1804'
num_bytes: 120141
num_examples: 103
- name: '1805'
num_bytes: 2475205
num_examples: 2303
- name: '1806'
num_bytes: 2043729
num_examples: 1860
- name: '1807'
num_bytes: 310568
num_examples: 315
- name: '1808'
num_bytes: 75639
num_examples: 45
- name: '1809'
num_bytes: 430706
num_examples: 422
- name: '1810'
num_bytes: 1319755
num_examples: 982
- name: '1811'
num_bytes: 117701
num_examples: 110
- name: '1812'
num_bytes: 75299
num_examples: 67
- name: '1813'
num_bytes: 290966
num_examples: 242
- name: '1814'
num_bytes: 378212
num_examples: 379
- name: '1815'
num_bytes: 185179
num_examples: 160
- name: '1816'
num_bytes: 495706
num_examples: 409
- name: '1817'
num_bytes: 446354
num_examples: 394
- name: '1818'
num_bytes: 1257916
num_examples: 1108
- name: '1819'
num_bytes: 2476297
num_examples: 1997
- name: '1820'
num_bytes: 611884
num_examples: 433
- name: '1821'
num_bytes: 347361
num_examples: 270
- name: '1822'
num_bytes: 286227
num_examples: 264
- name: '1823'
num_bytes: 2030816
num_examples: 1113
- name: '1824'
num_bytes: 5171191
num_examples: 3110
- name: '1825'
num_bytes: 6341915
num_examples: 4005
- name: '1826'
num_bytes: 10462258
num_examples: 7079
- name: '1827'
num_bytes: 11634621
num_examples: 7213
- name: '1828'
num_bytes: 10253681
num_examples: 6350
- name: '1829'
num_bytes: 4021832
num_examples: 2296
- name: '1830'
num_bytes: 8321949
num_examples: 4232
- name: '1831'
num_bytes: 16796125
num_examples: 9699
- name: '1832'
num_bytes: 9982722
num_examples: 6565
- name: '1833'
num_bytes: 6653515
num_examples: 4108
- name: '1834'
num_bytes: 7099875
num_examples: 4632
- name: '1835'
num_bytes: 9066392
num_examples: 6168
- name: '1836'
num_bytes: 10473366
num_examples: 7375
- name: '1837'
num_bytes: 21002773
num_examples: 13609
- name: '1838'
num_bytes: 13735809
num_examples: 8492
- name: '1839'
num_bytes: 12512339
num_examples: 8938
- name: '1840'
num_bytes: 12647911
num_examples: 8052
- name: '1841'
num_bytes: 39146669
num_examples: 30019
- name: '1842'
num_bytes: 26218700
num_examples: 21290
- name: '1843'
num_bytes: 50447372
num_examples: 41657
- name: '1844'
num_bytes: 79351064
num_examples: 61373
- name: '1845'
num_bytes: 131632573
num_examples: 95921
- name: '1846'
num_bytes: 81086068
num_examples: 70331
- name: '1847'
num_bytes: 32733527
num_examples: 24354
- name: '1848'
num_bytes: 44577825
num_examples: 32531
- name: '1849'
num_bytes: 53877014
num_examples: 42711
- name: '1850'
num_bytes: 76697622
num_examples: 49992
- name: '1851'
num_bytes: 128372084
num_examples: 90184
- name: '1852'
num_bytes: 67005975
num_examples: 51172
- name: '1853'
num_bytes: 54210932
num_examples: 48130
- name: '1854'
num_bytes: 150406197
num_examples: 118825
- name: '1855'
num_bytes: 115893679
num_examples: 99390
- name: '1856'
num_bytes: 188859881
num_examples: 157592
- name: '1857'
num_bytes: 152841585
num_examples: 129179
- name: '1858'
num_bytes: 214657030
num_examples: 171877
- name: '1859'
num_bytes: 178711188
num_examples: 160924
- name: '1860'
num_bytes: 163889573
num_examples: 150590
- name: '1861'
num_bytes: 215595661
num_examples: 173990
- name: '1862'
num_bytes: 228323685
num_examples: 171021
- name: '1863'
num_bytes: 197294365
num_examples: 151485
- name: '1864'
num_bytes: 125113713
num_examples: 94415
- name: '1865'
num_bytes: 133515217
num_examples: 99728
- name: '1866'
num_bytes: 180768118
num_examples: 135316
- name: '1867'
num_bytes: 213571876
num_examples: 161180
- name: '1868'
num_bytes: 202156635
num_examples: 140521
- name: '1869'
num_bytes: 236506656
num_examples: 171455
- name: '1870'
num_bytes: 242779857
num_examples: 174061
- name: '1871'
num_bytes: 203189927
num_examples: 151652
- name: '1872'
num_bytes: 242624062
num_examples: 194784
- name: '1873'
num_bytes: 302626176
num_examples: 241902
- name: '1874'
num_bytes: 280814742
num_examples: 213813
- name: '1875'
num_bytes: 319815222
num_examples: 274269
- name: '1876'
num_bytes: 381483980
num_examples: 288199
- name: '1877'
num_bytes: 317703263
num_examples: 254946
- name: '1878'
num_bytes: 381274032
num_examples: 307865
- name: '1879'
num_bytes: 371703798
num_examples: 287784
- name: '1880'
num_bytes: 296465631
num_examples: 272352
- name: '1881'
num_bytes: 294568051
num_examples: 270228
- name: '1882'
num_bytes: 340511400
num_examples: 311920
- name: '1883'
num_bytes: 419078041
num_examples: 387589
- name: '1884'
num_bytes: 329666364
num_examples: 304242
- name: '1885'
num_bytes: 348144660
num_examples: 318732
- name: '1886'
num_bytes: 431746663
num_examples: 423718
- name: '1887'
num_bytes: 493647568
num_examples: 494559
- name: '1888'
num_bytes: 564523528
num_examples: 547165
- name: '1889'
num_bytes: 558168324
num_examples: 536750
- name: '1890'
num_bytes: 566964770
num_examples: 540615
- name: '1891'
num_bytes: 641124243
num_examples: 620461
- name: '1892'
num_bytes: 524812242
num_examples: 527044
- name: '1893'
num_bytes: 645853680
num_examples: 656805
- name: '1894'
num_bytes: 790577208
num_examples: 795408
- name: '1895'
num_bytes: 890097151
num_examples: 897766
- name: '1896'
num_bytes: 1235234882
num_examples: 1175701
- name: '1897'
num_bytes: 1252347746
num_examples: 1275895
- name: '1898'
num_bytes: 1286411001
num_examples: 1323842
- name: '1899'
num_bytes: 1176418162
num_examples: 1218682
- name: '1900'
num_bytes: 1069983237
num_examples: 1118970
- name: '1901'
num_bytes: 1478945214
num_examples: 1468648
- name: '1902'
num_bytes: 1376703767
num_examples: 1417935
- name: '1903'
num_bytes: 1255538379
num_examples: 1319686
- name: '1904'
num_bytes: 1232185827
num_examples: 1340868
- name: '1905'
num_bytes: 1563178627
num_examples: 1635134
- name: '1906'
num_bytes: 1632815247
num_examples: 1683643
- name: '1907'
num_bytes: 1647491794
num_examples: 1714613
- name: '1908'
num_bytes: 1771267430
num_examples: 1842874
- name: '1909'
num_bytes: 1844179657
num_examples: 1926228
- name: '1910'
num_bytes: 1589052587
num_examples: 1684263
- name: '1911'
num_bytes: 1402309564
num_examples: 1510259
- name: '1912'
num_bytes: 1621648367
num_examples: 1774149
- name: '1913'
num_bytes: 1613599136
num_examples: 1822206
- name: '1914'
num_bytes: 1736284455
num_examples: 1931901
- name: '1915'
num_bytes: 1690248452
num_examples: 1878654
- name: '1916'
num_bytes: 1633563499
num_examples: 1838797
- name: '1917'
num_bytes: 1605677226
num_examples: 1810757
- name: '1918'
num_bytes: 1803695589
num_examples: 1920102
- name: '1919'
num_bytes: 1831703767
num_examples: 1981192
- name: '1920'
num_bytes: 1901882705
num_examples: 2041192
- name: '1921'
num_bytes: 2264618667
num_examples: 2334112
- name: '1922'
num_bytes: 2372137567
num_examples: 2405974
- name: '1923'
num_bytes: 812177597
num_examples: 880372
- name: '1924'
num_bytes: 800835690
num_examples: 845520
- name: '1925'
num_bytes: 601426023
num_examples: 662322
- name: '1926'
num_bytes: 565307890
num_examples: 623765
- name: '1927'
num_bytes: 460501197
num_examples: 504835
- name: '1928'
num_bytes: 452526140
num_examples: 487302
- name: '1929'
num_bytes: 366246066
num_examples: 421909
- name: '1930'
num_bytes: 437657836
num_examples: 492695
- name: '1931'
num_bytes: 441972257
num_examples: 493816
- name: '1932'
num_bytes: 640501746
num_examples: 664615
- name: '1933'
num_bytes: 634373318
num_examples: 642380
- name: '1934'
num_bytes: 641841040
num_examples: 654342
- name: '1935'
num_bytes: 612406176
num_examples: 635554
- name: '1936'
num_bytes: 621035178
num_examples: 662015
- name: '1937'
num_bytes: 625107933
num_examples: 676549
- name: '1938'
num_bytes: 616370880
num_examples: 665274
- name: '1939'
num_bytes: 525913265
num_examples: 556283
- name: '1940'
num_bytes: 471830118
num_examples: 496662
- name: '1941'
num_bytes: 599694786
num_examples: 637200
- name: '1942'
num_bytes: 508785410
num_examples: 523923
- name: '1943'
num_bytes: 452079475
num_examples: 467200
- name: '1944'
num_bytes: 442871777
num_examples: 433769
- name: '1945'
num_bytes: 588623743
num_examples: 588477
- name: '1946'
num_bytes: 526027876
num_examples: 470895
- name: '1947'
num_bytes: 461281363
num_examples: 393086
- name: '1948'
num_bytes: 442999943
num_examples: 396660
- name: '1949'
num_bytes: 421752000
num_examples: 419854
- name: '1950'
num_bytes: 403717616
num_examples: 415416
- name: '1951'
num_bytes: 409600217
num_examples: 419622
- name: '1952'
num_bytes: 397051717
num_examples: 396420
- name: '1953'
num_bytes: 366253682
num_examples: 358332
- name: '1954'
num_bytes: 263197428
num_examples: 266338
- name: '1955'
num_bytes: 268993926
num_examples: 273576
- name: '1956'
num_bytes: 85126796
num_examples: 98035
- name: '1957'
num_bytes: 83757036
num_examples: 93543
- name: '1958'
num_bytes: 85807593
num_examples: 98688
- name: '1959'
num_bytes: 112707174
num_examples: 129452
- name: '1960'
num_bytes: 300484826
num_examples: 344550
- name: '1961'
num_bytes: 297225753
num_examples: 339076
- name: '1962'
num_bytes: 231525869
num_examples: 264724
- name: '1963'
num_bytes: 197520960
num_examples: 226859
download_size: 48388744959
dataset_size: 76303058024
---
# Dataset Card for "AmericanStories-parquet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chatblanc-ciel/llm-jp-corpus-v3 | chatblanc-ciel | "2025-02-08T11:59:42Z" | 12,405 | 1 | [
"region:us"
] | null | "2025-02-04T20:41:46Z" | ---
dataset_info:
- config_name: code/code_stack
features:
- name: text
dtype: string
- name: meta
struct:
- name: alphanum_fraction
dtype: float64
- name: avg_line_length
dtype: float64
- name: ext
dtype: string
- name: hexsha
dtype: string
- name: lang
dtype: string
- name: max_forks_count
dtype: int64
- name: max_forks_repo_forks_event_max_datetime
dtype: string
- name: max_forks_repo_forks_event_min_datetime
dtype: string
- name: max_forks_repo_head_hexsha
dtype: string
- name: max_forks_repo_licenses
sequence: string
- name: max_forks_repo_name
dtype: string
- name: max_forks_repo_path
dtype: string
- name: max_issues_count
dtype: int64
- name: max_issues_repo_head_hexsha
dtype: string
- name: max_issues_repo_issues_event_max_datetime
dtype: string
- name: max_issues_repo_issues_event_min_datetime
dtype: string
- name: max_issues_repo_licenses
sequence: string
- name: max_issues_repo_name
dtype: string
- name: max_issues_repo_path
dtype: string
- name: max_line_length
dtype: int64
- name: max_stars_count
dtype: int64
- name: max_stars_repo_head_hexsha
dtype: string
- name: max_stars_repo_licenses
sequence: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_stars_event_max_datetime
dtype: string
- name: max_stars_repo_stars_event_min_datetime
dtype: string
- name: size
dtype: int64
splits:
- name: train
num_bytes: 348173288632
num_examples: 31988785
- name: validation
num_bytes: 222242606
num_examples: 20191
download_size: 114077093040
dataset_size: 348395531238
- config_name: en/en_dolma/c4
features:
- name: text
dtype: string
- name: meta
struct:
- name: added
dtype: string
- name: created
dtype: timestamp[s]
- name: id
dtype: string
- name: metadata
struct:
- name: lang
dtype: string
- name: length
dtype: int64
- name: provenance
dtype: string
- name: split
dtype: string
- name: url
dtype: string
- name: source
dtype: string
- name: version
dtype: string
splits:
- name: train
num_bytes: 867599301252
num_examples: 364154043
download_size: 524569640737
dataset_size: 867599301252
- config_name: en/en_wiki
features:
- name: text
dtype: string
- name: meta
struct:
- name: id
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 19636405423
num_examples: 6681920
- name: validation
num_bytes: 3948807
num_examples: 1479
download_size: 11637388371
dataset_size: 19640354230
- config_name: ja/ja_cc/level0
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 1200355090865
num_examples: 176975451
download_size: 677018847187
dataset_size: 1200355090865
- config_name: ja/ja_cc/level0/CC-MAIN-2013-2016
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18077652679
num_examples: 3017867
download_size: 7550403004
dataset_size: 18077652679
- config_name: ja/ja_cc/level0/CC-MAIN-2013-2016.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18077652679
num_examples: 3017867
download_size: 7550403004
dataset_size: 18077652679
- config_name: ja/ja_cc/level0/CC-MAIN-2017-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4305233736
num_examples: 862331
download_size: 2402819064
dataset_size: 4305233736
- config_name: ja/ja_cc/level0/CC-MAIN-2017-04.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4305233736
num_examples: 862331
download_size: 2402819064
dataset_size: 4305233736
- config_name: ja/ja_cc/level0/CC-MAIN-2017-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5971066540
num_examples: 921503
download_size: 3347473301
dataset_size: 5971066540
- config_name: ja/ja_cc/level0/CC-MAIN-2017-09.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5971066540
num_examples: 921503
download_size: 3347473301
dataset_size: 5971066540
- config_name: ja/ja_cc/level0/CC-MAIN-2017-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 13623315580
num_examples: 1848849
download_size: 7696021811
dataset_size: 13623315580
- config_name: ja/ja_cc/level0/CC-MAIN-2017-13.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 13623315580
num_examples: 1848849
download_size: 7696021811
dataset_size: 13623315580
- config_name: ja/ja_cc/level0/CC-MAIN-2017-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 19960969764
num_examples: 2725696
download_size: 11328454250
dataset_size: 19960969764
- config_name: ja/ja_cc/level0/CC-MAIN-2017-17.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 19960969764
num_examples: 2725696
download_size: 11328454250
dataset_size: 19960969764
- config_name: ja/ja_cc/level0/CC-MAIN-2017-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17578326638
num_examples: 2397667
download_size: 9974307898
dataset_size: 17578326638
- config_name: ja/ja_cc/level0/CC-MAIN-2017-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 22170160936
num_examples: 3112260
download_size: 12584277127
dataset_size: 22170160936
- config_name: ja/ja_cc/level0/CC-MAIN-2017-26.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 22170160936
num_examples: 3112260
download_size: 12584277127
dataset_size: 22170160936
- config_name: ja/ja_cc/level0/CC-MAIN-2017-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 14501823246
num_examples: 2085151
download_size: 8224812524
dataset_size: 14501823246
- config_name: ja/ja_cc/level0/CC-MAIN-2017-30.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 14501823246
num_examples: 2085151
download_size: 8224812524
dataset_size: 14501823246
- config_name: ja/ja_cc/level0/CC-MAIN-2017-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 23608810552
num_examples: 3356687
download_size: 13414821009
dataset_size: 23608810552
- config_name: ja/ja_cc/level0/CC-MAIN-2017-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17528722403
num_examples: 2523962
download_size: 9959532528
dataset_size: 17528722403
- config_name: ja/ja_cc/level0/CC-MAIN-2017-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 25287879895
num_examples: 3643052
download_size: 14381021861
dataset_size: 25287879895
- config_name: ja/ja_cc/level0/CC-MAIN-2017-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18652655999
num_examples: 2701257
download_size: 10619509120
dataset_size: 18652655999
- config_name: ja/ja_cc/level0/CC-MAIN-2017-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9831692896
num_examples: 1514501
download_size: 5583328328
dataset_size: 9831692896
- config_name: ja/ja_cc/level0/CC-MAIN-2018-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 13912502392
num_examples: 2210709
download_size: 7870627736
dataset_size: 13912502392
- config_name: ja/ja_cc/level0/CC-MAIN-2018-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 25391259180
num_examples: 3459786
download_size: 14403046625
dataset_size: 25391259180
- config_name: ja/ja_cc/level0/CC-MAIN-2018-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 12553020822
num_examples: 1941599
download_size: 7130688242
dataset_size: 12553020822
- config_name: ja/ja_cc/level0/CC-MAIN-2018-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 12282399444
num_examples: 1859267
download_size: 6964890817
dataset_size: 12282399444
- config_name: ja/ja_cc/level0/CC-MAIN-2018-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9780663564
num_examples: 1494976
download_size: 5544176652
dataset_size: 9780663564
- config_name: ja/ja_cc/level0/CC-MAIN-2018-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 27545632634
num_examples: 4024429
download_size: 15644733329
dataset_size: 27545632634
- config_name: ja/ja_cc/level0/CC-MAIN-2018-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 31031997328
num_examples: 4586683
download_size: 17624748283
dataset_size: 31031997328
- config_name: ja/ja_cc/level0/CC-MAIN-2018-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 22080860363
num_examples: 3264249
download_size: 12492655364
dataset_size: 22080860363
- config_name: ja/ja_cc/level0/CC-MAIN-2018-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 22470675389
num_examples: 3450223
download_size: 12757248945
dataset_size: 22470675389
- config_name: ja/ja_cc/level0/CC-MAIN-2018-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 24488541335
num_examples: 3746893
download_size: 13892794865
dataset_size: 24488541335
- config_name: ja/ja_cc/level0/CC-MAIN-2018-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 23947465467
num_examples: 3786390
download_size: 13508537763
dataset_size: 23947465467
- config_name: ja/ja_cc/level0/CC-MAIN-2018-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 26614665841
num_examples: 4045567
download_size: 15048832666
dataset_size: 26614665841
- config_name: ja/ja_cc/level0/CC-MAIN-2019-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 22884848739
num_examples: 3533334
download_size: 12968492466
dataset_size: 22884848739
- config_name: ja/ja_cc/level0/CC-MAIN-2019-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 25283805979
num_examples: 3947864
download_size: 14284971196
dataset_size: 25283805979
- config_name: ja/ja_cc/level0/CC-MAIN-2019-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 19746126963
num_examples: 3175341
download_size: 11180781298
dataset_size: 19746126963
- config_name: ja/ja_cc/level0/CC-MAIN-2019-18
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18807261380
num_examples: 2882443
download_size: 10622540297
dataset_size: 18807261380
- config_name: ja/ja_cc/level0/CC-MAIN-2019-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18715166841
num_examples: 2916593
download_size: 10592510452
dataset_size: 18715166841
- config_name: ja/ja_cc/level0/CC-MAIN-2019-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17440096682
num_examples: 2653638
download_size: 9877372013
dataset_size: 17440096682
- config_name: ja/ja_cc/level0/CC-MAIN-2019-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 16610902541
num_examples: 2530721
download_size: 9405838738
dataset_size: 16610902541
- config_name: ja/ja_cc/level0/CC-MAIN-2019-35
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18894147887
num_examples: 2821144
download_size: 10683286133
dataset_size: 18894147887
- config_name: ja/ja_cc/level0/CC-MAIN-2019-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 15837917988
num_examples: 2354006
download_size: 8962140008
dataset_size: 15837917988
- config_name: ja/ja_cc/level0/CC-MAIN-2019-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17587825217
num_examples: 2594362
download_size: 9939976141
dataset_size: 17587825217
- config_name: ja/ja_cc/level0/CC-MAIN-2019-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 15140862538
num_examples: 2255557
download_size: 8545891143
dataset_size: 15140862538
- config_name: ja/ja_cc/level0/CC-MAIN-2019-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 14089060004
num_examples: 2072557
download_size: 7966566882
dataset_size: 14089060004
- config_name: ja/ja_cc/level0/CC-MAIN-2020-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18647569661
num_examples: 2711409
download_size: 10521591947
dataset_size: 18647569661
- config_name: ja/ja_cc/level0/CC-MAIN-2020-10
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18211548642
num_examples: 2763344
download_size: 10320505385
dataset_size: 18211548642
- config_name: ja/ja_cc/level0/CC-MAIN-2020-16
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17663121284
num_examples: 2582186
download_size: 9962786925
dataset_size: 17663121284
- config_name: ja/ja_cc/level0/CC-MAIN-2020-24
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17786736737
num_examples: 2686710
download_size: 10055829013
dataset_size: 17786736737
- config_name: ja/ja_cc/level0/CC-MAIN-2020-29
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 20429230058
num_examples: 2981704
download_size: 11528413690
dataset_size: 20429230058
- config_name: ja/ja_cc/level0/CC-MAIN-2020-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17164243722
num_examples: 2552649
download_size: 9735134902
dataset_size: 17164243722
- config_name: ja/ja_cc/level0/CC-MAIN-2020-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 21958669018
num_examples: 3353272
download_size: 12363982083
dataset_size: 21958669018
- config_name: ja/ja_cc/level0/CC-MAIN-2020-45
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 16030087382
num_examples: 2432752
download_size: 9042610368
dataset_size: 16030087382
- config_name: ja/ja_cc/level0/CC-MAIN-2020-50
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 14807185980
num_examples: 2280937
download_size: 8339216612
dataset_size: 14807185980
- config_name: ja/ja_cc/level0/CC-MAIN-2021-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 19122357417
num_examples: 2968443
download_size: 10776517688
dataset_size: 19122357417
- config_name: ja/ja_cc/level0/CC-MAIN-2021-10
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 16249894206
num_examples: 2436006
download_size: 9201649563
dataset_size: 16249894206
- config_name: ja/ja_cc/level0/CC-MAIN-2021-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18273376667
num_examples: 2806858
download_size: 10334701554
dataset_size: 18273376667
- config_name: ja/ja_cc/level0/CC-MAIN-2021-21
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 16342382228
num_examples: 2546166
download_size: 9254748271
dataset_size: 16342382228
- config_name: ja/ja_cc/level0/CC-MAIN-2021-25
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 14034301314
num_examples: 2193262
download_size: 7978841314
dataset_size: 14034301314
- config_name: ja/ja_cc/level0/CC-MAIN-2021-31
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 16087560286
num_examples: 2572259
download_size: 9059644471
dataset_size: 16087560286
- config_name: ja/ja_cc/level0/CC-MAIN-2021-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 16656450882
num_examples: 2614496
download_size: 9407766051
dataset_size: 16656450882
- config_name: ja/ja_cc/level0/CC-MAIN-2021-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 20591396768
num_examples: 2999533
download_size: 11602249584
dataset_size: 20591396768
- config_name: ja/ja_cc/level0/CC-MAIN-2021-49
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 20497316219
num_examples: 2514204
download_size: 11553258035
dataset_size: 20497316219
- config_name: ja/ja_cc/level0/CC-MAIN-2022-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 25489828225
num_examples: 2927917
download_size: 14311581497
dataset_size: 25489828225
- config_name: ja/ja_cc/level0/CC-MAIN-2022-21
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 24493121649
num_examples: 3196696
download_size: 13850770861
dataset_size: 24493121649
- config_name: ja/ja_cc/level0/CC-MAIN-2022-27
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 21599832701
num_examples: 2803881
download_size: 12265315412
dataset_size: 21599832701
- config_name: ja/ja_cc/level0/CC-MAIN-2022-33
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18646725890
num_examples: 2457181
download_size: 10674465737
dataset_size: 18646725890
- config_name: ja/ja_cc/level0/CC-MAIN-2022-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17464927597
num_examples: 2423962
download_size: 9959274691
dataset_size: 17464927597
- config_name: ja/ja_cc/level0/CC-MAIN-2022-49
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 18859351029
num_examples: 2689665
download_size: 10773254205
dataset_size: 18859351029
- config_name: ja/ja_cc/level0/CC-MAIN-2023-06
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 19616817707
num_examples: 2717626
download_size: 11282424631
dataset_size: 19616817707
- config_name: ja/ja_cc/level0/CC-MAIN-2023-14
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 16263244400
num_examples: 2371585
download_size: 9256404829
dataset_size: 16263244400
- config_name: ja/ja_cc/level0/CC-MAIN-2023-23
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 16104190046
num_examples: 2458069
download_size: 9055589707
dataset_size: 16104190046
- config_name: ja/ja_cc/level0/CC-MAIN-2023-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 17027605768
num_examples: 2613565
download_size: 9572020324
dataset_size: 17027605768
- config_name: ja/ja_cc/level1/CC-MAIN-2013-2016
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8429944479
num_examples: 1265205
download_size: 4724022371
dataset_size: 8429944479
- config_name: ja/ja_cc/level1/CC-MAIN-2013-2016.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8429944479
num_examples: 1265205
download_size: 4724022371
dataset_size: 8429944479
- config_name: ja/ja_cc/level1/CC-MAIN-2017-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 1865202982
num_examples: 346766
download_size: 1043090103
dataset_size: 1865202982
- config_name: ja/ja_cc/level1/CC-MAIN-2017-04.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 1865202982
num_examples: 346766
download_size: 1043090103
dataset_size: 1865202982
- config_name: ja/ja_cc/level1/CC-MAIN-2017-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2894653119
num_examples: 412717
download_size: 1625385202
dataset_size: 2894653119
- config_name: ja/ja_cc/level1/CC-MAIN-2017-09.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2894653119
num_examples: 412717
download_size: 1625385202
dataset_size: 2894653119
- config_name: ja/ja_cc/level1/CC-MAIN-2017-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6732096640
num_examples: 883881
download_size: 3805942307
dataset_size: 6732096640
- config_name: ja/ja_cc/level1/CC-MAIN-2017-13.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6732096640
num_examples: 883881
download_size: 3805942307
dataset_size: 6732096640
- config_name: ja/ja_cc/level1/CC-MAIN-2017-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10245840165
num_examples: 1382553
download_size: 5808840941
dataset_size: 10245840165
- config_name: ja/ja_cc/level1/CC-MAIN-2017-17.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10245840165
num_examples: 1382553
download_size: 5808840941
dataset_size: 10245840165
- config_name: ja/ja_cc/level1/CC-MAIN-2017-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8736620627
num_examples: 1176124
download_size: 4956248124
dataset_size: 8736620627
- config_name: ja/ja_cc/level1/CC-MAIN-2017-22.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8736620627
num_examples: 1176124
download_size: 4956248124
dataset_size: 8736620627
- config_name: ja/ja_cc/level1/CC-MAIN-2017-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9927280113
num_examples: 1397609
download_size: 5634417780
dataset_size: 9927280113
- config_name: ja/ja_cc/level1/CC-MAIN-2017-26.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9927280113
num_examples: 1397609
download_size: 5634417780
dataset_size: 9927280113
- config_name: ja/ja_cc/level1/CC-MAIN-2017-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6843571316
num_examples: 964642
download_size: 3886001957
dataset_size: 6843571316
- config_name: ja/ja_cc/level1/CC-MAIN-2017-30.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6843571316
num_examples: 964642
download_size: 3886001957
dataset_size: 6843571316
- config_name: ja/ja_cc/level1/CC-MAIN-2017-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9734681403
num_examples: 1359864
download_size: 5529921973
dataset_size: 9734681403
- config_name: ja/ja_cc/level1/CC-MAIN-2017-34.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9734681403
num_examples: 1359864
download_size: 5529921973
dataset_size: 9734681403
- config_name: ja/ja_cc/level1/CC-MAIN-2017-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7407949656
num_examples: 1032769
download_size: 4208907960
dataset_size: 7407949656
- config_name: ja/ja_cc/level1/CC-MAIN-2017-39.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7407949656
num_examples: 1032769
download_size: 4208907960
dataset_size: 7407949656
- config_name: ja/ja_cc/level1/CC-MAIN-2017-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10121094133
num_examples: 1432564
download_size: 5754909110
dataset_size: 10121094133
- config_name: ja/ja_cc/level1/CC-MAIN-2017-43.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10121094133
num_examples: 1432564
download_size: 5754909110
dataset_size: 10121094133
- config_name: ja/ja_cc/level1/CC-MAIN-2017-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7538455860
num_examples: 1071392
download_size: 4288085442
dataset_size: 7538455860
- config_name: ja/ja_cc/level1/CC-MAIN-2017-47.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7538455860
num_examples: 1071392
download_size: 4288085442
dataset_size: 7538455860
- config_name: ja/ja_cc/level1/CC-MAIN-2017-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3346665967
num_examples: 516714
download_size: 1904289063
dataset_size: 3346665967
- config_name: ja/ja_cc/level1/CC-MAIN-2017-51.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3346665967
num_examples: 516714
download_size: 1904289063
dataset_size: 3346665967
- config_name: ja/ja_cc/level1/CC-MAIN-2018-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4882890196
num_examples: 768625
download_size: 2775931427
dataset_size: 4882890196
- config_name: ja/ja_cc/level1/CC-MAIN-2018-05.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4882890196
num_examples: 768625
download_size: 2775931427
dataset_size: 4882890196
- config_name: ja/ja_cc/level1/CC-MAIN-2018-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10534510573
num_examples: 1393743
download_size: 5967772361
dataset_size: 10534510573
- config_name: ja/ja_cc/level1/CC-MAIN-2018-09.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10534510573
num_examples: 1393743
download_size: 5967772361
dataset_size: 10534510573
- config_name: ja/ja_cc/level1/CC-MAIN-2018-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5266258876
num_examples: 820918
download_size: 2991653855
dataset_size: 5266258876
- config_name: ja/ja_cc/level1/CC-MAIN-2018-13.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5266258876
num_examples: 820918
download_size: 2991653855
dataset_size: 5266258876
- config_name: ja/ja_cc/level1/CC-MAIN-2018-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5551880137
num_examples: 833837
download_size: 3143552951
dataset_size: 5551880137
- config_name: ja/ja_cc/level1/CC-MAIN-2018-17.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5551880137
num_examples: 833837
download_size: 3143552951
dataset_size: 5551880137
- config_name: ja/ja_cc/level1/CC-MAIN-2018-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4053458706
num_examples: 610238
download_size: 2293797636
dataset_size: 4053458706
- config_name: ja/ja_cc/level1/CC-MAIN-2018-22.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4053458706
num_examples: 610238
download_size: 2293797636
dataset_size: 4053458706
- config_name: ja/ja_cc/level1/CC-MAIN-2018-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10390522476
num_examples: 1515844
download_size: 5887605873
dataset_size: 10390522476
- config_name: ja/ja_cc/level1/CC-MAIN-2018-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 11466293262
num_examples: 1709826
download_size: 6500879453
dataset_size: 11466293262
- config_name: ja/ja_cc/level1/CC-MAIN-2018-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8077512194
num_examples: 1241994
download_size: 4568269424
dataset_size: 8077512194
- config_name: ja/ja_cc/level1/CC-MAIN-2018-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9070593598
num_examples: 1415771
download_size: 5141840641
dataset_size: 9070593598
- config_name: ja/ja_cc/level1/CC-MAIN-2018-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9440990856
num_examples: 1476728
download_size: 5345613583
dataset_size: 9440990856
- config_name: ja/ja_cc/level1/CC-MAIN-2018-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8434363027
num_examples: 1325105
download_size: 4771479208
dataset_size: 8434363027
- config_name: ja/ja_cc/level1/CC-MAIN-2018-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9584756946
num_examples: 1500777
download_size: 5423022121
dataset_size: 9584756946
- config_name: ja/ja_cc/level1/CC-MAIN-2019-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7774032593
num_examples: 1224011
download_size: 4400066026
dataset_size: 7774032593
- config_name: ja/ja_cc/level1/CC-MAIN-2019-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8583284095
num_examples: 1358482
download_size: 4845632423
dataset_size: 8583284095
- config_name: ja/ja_cc/level1/CC-MAIN-2019-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7607395854
num_examples: 1227930
download_size: 4296073978
dataset_size: 7607395854
- config_name: ja/ja_cc/level1/CC-MAIN-2019-18
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7311356046
num_examples: 1127493
download_size: 4117955573
dataset_size: 7311356046
- config_name: ja/ja_cc/level1/CC-MAIN-2019-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7052664717
num_examples: 1100247
download_size: 3977762964
dataset_size: 7052664717
- config_name: ja/ja_cc/level1/CC-MAIN-2019-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6701206244
num_examples: 1005702
download_size: 3779793507
dataset_size: 6701206244
- config_name: ja/ja_cc/level1/CC-MAIN-2019-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6226307356
num_examples: 935392
download_size: 3509035235
dataset_size: 6226307356
- config_name: ja/ja_cc/level1/CC-MAIN-2019-35
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6785732643
num_examples: 1002010
download_size: 3820800058
dataset_size: 6785732643
- config_name: ja/ja_cc/level1/CC-MAIN-2019-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5252132605
num_examples: 762036
download_size: 2956754296
dataset_size: 5252132605
- config_name: ja/ja_cc/level1/CC-MAIN-2019-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5667148379
num_examples: 808249
download_size: 3186478198
dataset_size: 5667148379
- config_name: ja/ja_cc/level1/CC-MAIN-2019-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4781725484
num_examples: 697436
download_size: 2683909281
dataset_size: 4781725484
- config_name: ja/ja_cc/level1/CC-MAIN-2019-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4493717764
num_examples: 637349
download_size: 2525588964
dataset_size: 4493717764
- config_name: ja/ja_cc/level1/CC-MAIN-2020-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6231003991
num_examples: 879884
download_size: 3495043820
dataset_size: 6231003991
- config_name: ja/ja_cc/level1/CC-MAIN-2020-10
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5283800906
num_examples: 797531
download_size: 2976382024
dataset_size: 5283800906
- config_name: ja/ja_cc/level1/CC-MAIN-2020-16
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6123529712
num_examples: 873232
download_size: 3430458936
dataset_size: 6123529712
- config_name: ja/ja_cc/level1/CC-MAIN-2020-24
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4977538851
num_examples: 759058
download_size: 2796562513
dataset_size: 4977538851
- config_name: ja/ja_cc/level1/CC-MAIN-2020-29
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6491011868
num_examples: 932297
download_size: 3639392949
dataset_size: 6491011868
- config_name: ja/ja_cc/level1/CC-MAIN-2020-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4786979713
num_examples: 716519
download_size: 2693713597
dataset_size: 4786979713
- config_name: ja/ja_cc/level1/CC-MAIN-2020-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6926656399
num_examples: 1049476
download_size: 3875284436
dataset_size: 6926656399
- config_name: ja/ja_cc/level1/CC-MAIN-2020-45
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4171005393
num_examples: 644863
download_size: 2339292519
dataset_size: 4171005393
- config_name: ja/ja_cc/level1/CC-MAIN-2020-50
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4150806603
num_examples: 643635
download_size: 2321379533
dataset_size: 4150806603
- config_name: ja/ja_cc/level1/CC-MAIN-2021-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5543913167
num_examples: 871714
download_size: 3100797036
dataset_size: 5543913167
- config_name: ja/ja_cc/level1/CC-MAIN-2021-10
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4297955475
num_examples: 662474
download_size: 2407685907
dataset_size: 4297955475
- config_name: ja/ja_cc/level1/CC-MAIN-2021-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5422437153
num_examples: 859354
download_size: 3040119236
dataset_size: 5422437153
- config_name: ja/ja_cc/level1/CC-MAIN-2021-21
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4505035942
num_examples: 734580
download_size: 2530437256
dataset_size: 4505035942
- config_name: ja/ja_cc/level1/CC-MAIN-2021-25
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3519172464
num_examples: 578076
download_size: 1975706832
dataset_size: 3519172464
- config_name: ja/ja_cc/level1/CC-MAIN-2021-31
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5368771294
num_examples: 894182
download_size: 2999565997
dataset_size: 5368771294
- config_name: ja/ja_cc/level1/CC-MAIN-2021-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4261593253
num_examples: 698010
download_size: 2392837989
dataset_size: 4261593253
- config_name: ja/ja_cc/level1/CC-MAIN-2021-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5053133750
num_examples: 820895
download_size: 2830100978
dataset_size: 5053133750
- config_name: ja/ja_cc/level1/CC-MAIN-2021-49
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3098420711
num_examples: 494146
download_size: 1743659027
dataset_size: 3098420711
- config_name: ja/ja_cc/level1/CC-MAIN-2022-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4226682420
num_examples: 665517
download_size: 2369313464
dataset_size: 4226682420
- config_name: ja/ja_cc/level1/CC-MAIN-2022-21
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4610258108
num_examples: 743483
download_size: 2586628579
dataset_size: 4610258108
- config_name: ja/ja_cc/level1/CC-MAIN-2022-27
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4076473603
num_examples: 647089
download_size: 2287839127
dataset_size: 4076473603
- config_name: ja/ja_cc/level1/CC-MAIN-2022-33
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2936937535
num_examples: 472778
download_size: 1654018471
dataset_size: 2936937535
- config_name: ja/ja_cc/level1/CC-MAIN-2022-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3661582069
num_examples: 587486
download_size: 2051311361
dataset_size: 3661582069
- config_name: ja/ja_cc/level1/CC-MAIN-2022-49
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3907204498
num_examples: 630552
download_size: 2193382472
dataset_size: 3907204498
- config_name: ja/ja_cc/level1/CC-MAIN-2023-06
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3739190458
num_examples: 601104
download_size: 2092529568
dataset_size: 3739190458
- config_name: ja/ja_cc/level1/CC-MAIN-2023-14
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3276236620
num_examples: 522467
download_size: 1834582562
dataset_size: 3276236620
- config_name: ja/ja_cc/level1/CC-MAIN-2023-23
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3675239007
num_examples: 581584
download_size: 2052726661
dataset_size: 3675239007
- config_name: ja/ja_cc/level1/CC-MAIN-2023-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3609292175
num_examples: 564678
download_size: 2017724228
dataset_size: 3609292175
- config_name: ja/ja_cc/level2/CC-MAIN-2013-2016
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9394893606
num_examples: 1404066
download_size: 5280483664
dataset_size: 9394893606
- config_name: ja/ja_cc/level2/CC-MAIN-2013-2016.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9394893606
num_examples: 1404066
download_size: 5280483664
dataset_size: 9394893606
- config_name: ja/ja_cc/level2/CC-MAIN-2017-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2118277905
num_examples: 376226
download_size: 1186558206
dataset_size: 2118277905
- config_name: ja/ja_cc/level2/CC-MAIN-2017-04.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2118277905
num_examples: 376226
download_size: 1186558206
dataset_size: 2118277905
- config_name: ja/ja_cc/level2/CC-MAIN-2017-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3389191272
num_examples: 459363
download_size: 1904835535
dataset_size: 3389191272
- config_name: ja/ja_cc/level2/CC-MAIN-2017-09.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3389191272
num_examples: 459363
download_size: 1904835535
dataset_size: 3389191272
- config_name: ja/ja_cc/level2/CC-MAIN-2017-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8402030984
num_examples: 1064527
download_size: 4759582454
dataset_size: 8402030984
- config_name: ja/ja_cc/level2/CC-MAIN-2017-13.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8402030984
num_examples: 1064527
download_size: 4759582454
dataset_size: 8402030984
- config_name: ja/ja_cc/level2/CC-MAIN-2017-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 12318015842
num_examples: 1648615
download_size: 6987533053
dataset_size: 12318015842
- config_name: ja/ja_cc/level2/CC-MAIN-2017-17.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 12318015842
num_examples: 1648615
download_size: 6987533053
dataset_size: 12318015842
- config_name: ja/ja_cc/level2/CC-MAIN-2017-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10103614578
num_examples: 1356562
download_size: 5735990797
dataset_size: 10103614578
- config_name: ja/ja_cc/level2/CC-MAIN-2017-22.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10103614578
num_examples: 1356562
download_size: 5735990797
dataset_size: 10103614578
- config_name: ja/ja_cc/level2/CC-MAIN-2017-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 11213477661
num_examples: 1560019
download_size: 6364687314
dataset_size: 11213477661
- config_name: ja/ja_cc/level2/CC-MAIN-2017-26.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 11213477661
num_examples: 1560019
download_size: 6364687314
dataset_size: 11213477661
- config_name: ja/ja_cc/level2/CC-MAIN-2017-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8506119235
num_examples: 1177533
download_size: 4833206455
dataset_size: 8506119235
- config_name: ja/ja_cc/level2/CC-MAIN-2017-30.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8506119235
num_examples: 1177533
download_size: 4833206455
dataset_size: 8506119235
- config_name: ja/ja_cc/level2/CC-MAIN-2017-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10503775715
num_examples: 1443182
download_size: 5961350977
dataset_size: 10503775715
- config_name: ja/ja_cc/level2/CC-MAIN-2017-34.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 10503775715
num_examples: 1443182
download_size: 5961350977
dataset_size: 10503775715
- config_name: ja/ja_cc/level2/CC-MAIN-2017-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8426280826
num_examples: 1149839
download_size: 4785282928
dataset_size: 8426280826
- config_name: ja/ja_cc/level2/CC-MAIN-2017-39.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8426280826
num_examples: 1149839
download_size: 4785282928
dataset_size: 8426280826
- config_name: ja/ja_cc/level2/CC-MAIN-2017-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 11057598690
num_examples: 1540385
download_size: 6279342988
dataset_size: 11057598690
- config_name: ja/ja_cc/level2/CC-MAIN-2017-43.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 11057598690
num_examples: 1540385
download_size: 6279342988
dataset_size: 11057598690
- config_name: ja/ja_cc/level2/CC-MAIN-2017-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8559773222
num_examples: 1182784
download_size: 4863721293
dataset_size: 8559773222
- config_name: ja/ja_cc/level2/CC-MAIN-2017-47.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8559773222
num_examples: 1182784
download_size: 4863721293
dataset_size: 8559773222
- config_name: ja/ja_cc/level2/CC-MAIN-2017-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3533732728
num_examples: 530470
download_size: 2009160901
dataset_size: 3533732728
- config_name: ja/ja_cc/level2/CC-MAIN-2017-51.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3533732728
num_examples: 530470
download_size: 2009160901
dataset_size: 3533732728
- config_name: ja/ja_cc/level2/CC-MAIN-2018-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5759990216
num_examples: 861662
download_size: 3277435502
dataset_size: 5759990216
- config_name: ja/ja_cc/level2/CC-MAIN-2018-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 12093607719
num_examples: 1561053
download_size: 6837363288
dataset_size: 12093607719
- config_name: ja/ja_cc/level2/CC-MAIN-2018-09.jsonl.gz
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 12093607719
num_examples: 1561053
download_size: 6837363288
dataset_size: 12093607719
- config_name: ja/ja_cc/level2/CC-MAIN-2018-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5648639692
num_examples: 862412
download_size: 3206734199
dataset_size: 5648639692
- config_name: ja/ja_cc/level2/CC-MAIN-2018-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6615367162
num_examples: 951727
download_size: 3746889744
dataset_size: 6615367162
- config_name: ja/ja_cc/level2/CC-MAIN-2018-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4114836105
num_examples: 598617
download_size: 2327432419
dataset_size: 4114836105
- config_name: ja/ja_cc/level2/CC-MAIN-2018-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 11010803655
num_examples: 1573605
download_size: 6239707379
dataset_size: 11010803655
- config_name: ja/ja_cc/level2/CC-MAIN-2018-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 12322227820
num_examples: 1796364
download_size: 6980237669
dataset_size: 12322227820
- config_name: ja/ja_cc/level2/CC-MAIN-2018-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7746813133
num_examples: 1190068
download_size: 4379879927
dataset_size: 7746813133
- config_name: ja/ja_cc/level2/CC-MAIN-2018-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8811469607
num_examples: 1362161
download_size: 4989881265
dataset_size: 8811469607
- config_name: ja/ja_cc/level2/CC-MAIN-2018-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9262911005
num_examples: 1416995
download_size: 5238593229
dataset_size: 9262911005
- config_name: ja/ja_cc/level2/CC-MAIN-2018-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8419850724
num_examples: 1291827
download_size: 4761381589
dataset_size: 8419850724
- config_name: ja/ja_cc/level2/CC-MAIN-2018-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9434516924
num_examples: 1460644
download_size: 5331756103
dataset_size: 9434516924
- config_name: ja/ja_cc/level2/CC-MAIN-2019-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7452647471
num_examples: 1155046
download_size: 4210232304
dataset_size: 7452647471
- config_name: ja/ja_cc/level2/CC-MAIN-2019-09
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 8671139841
num_examples: 1358323
download_size: 4890352152
dataset_size: 8671139841
- config_name: ja/ja_cc/level2/CC-MAIN-2019-13
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7596149906
num_examples: 1204692
download_size: 4284114515
dataset_size: 7596149906
- config_name: ja/ja_cc/level2/CC-MAIN-2019-18
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 7140695770
num_examples: 1099555
download_size: 4017422809
dataset_size: 7140695770
- config_name: ja/ja_cc/level2/CC-MAIN-2019-22
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6903977484
num_examples: 1063623
download_size: 3885500642
dataset_size: 6903977484
- config_name: ja/ja_cc/level2/CC-MAIN-2019-26
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6390674476
num_examples: 941523
download_size: 3593692548
dataset_size: 6390674476
- config_name: ja/ja_cc/level2/CC-MAIN-2019-30
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5825832990
num_examples: 860680
download_size: 3273290105
dataset_size: 5825832990
- config_name: ja/ja_cc/level2/CC-MAIN-2019-35
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6383336743
num_examples: 916249
download_size: 3582967448
dataset_size: 6383336743
- config_name: ja/ja_cc/level2/CC-MAIN-2019-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4995574683
num_examples: 703602
download_size: 2802793809
dataset_size: 4995574683
- config_name: ja/ja_cc/level2/CC-MAIN-2019-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5216360310
num_examples: 717345
download_size: 2923784198
dataset_size: 5216360310
- config_name: ja/ja_cc/level2/CC-MAIN-2019-47
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4642153383
num_examples: 662074
download_size: 2598715807
dataset_size: 4642153383
- config_name: ja/ja_cc/level2/CC-MAIN-2019-51
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4299087590
num_examples: 582588
download_size: 2407754233
dataset_size: 4299087590
- config_name: ja/ja_cc/level2/CC-MAIN-2020-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6289760302
num_examples: 858051
download_size: 3519041559
dataset_size: 6289760302
- config_name: ja/ja_cc/level2/CC-MAIN-2020-10
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4758860270
num_examples: 714734
download_size: 2673577236
dataset_size: 4758860270
- config_name: ja/ja_cc/level2/CC-MAIN-2020-16
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6086751009
num_examples: 843410
download_size: 3400399351
dataset_size: 6086751009
- config_name: ja/ja_cc/level2/CC-MAIN-2020-24
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4285080909
num_examples: 651033
download_size: 2400367573
dataset_size: 4285080909
- config_name: ja/ja_cc/level2/CC-MAIN-2020-29
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6014862321
num_examples: 842488
download_size: 3362494588
dataset_size: 6014862321
- config_name: ja/ja_cc/level2/CC-MAIN-2020-34
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3980141532
num_examples: 585821
download_size: 2231882651
dataset_size: 3980141532
- config_name: ja/ja_cc/level2/CC-MAIN-2020-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 6499321430
num_examples: 976875
download_size: 3628292238
dataset_size: 6499321430
- config_name: ja/ja_cc/level2/CC-MAIN-2020-45
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3593677116
num_examples: 552472
download_size: 2011172923
dataset_size: 3593677116
- config_name: ja/ja_cc/level2/CC-MAIN-2020-50
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3770236773
num_examples: 578332
download_size: 2103662861
dataset_size: 3770236773
- config_name: ja/ja_cc/level2/CC-MAIN-2021-04
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5216423992
num_examples: 821473
download_size: 2913403375
dataset_size: 5216423992
- config_name: ja/ja_cc/level2/CC-MAIN-2021-10
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3862552123
num_examples: 594778
download_size: 2157566026
dataset_size: 3862552123
- config_name: ja/ja_cc/level2/CC-MAIN-2021-17
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5123263157
num_examples: 810750
download_size: 2861469737
dataset_size: 5123263157
- config_name: ja/ja_cc/level2/CC-MAIN-2021-21
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4113794716
num_examples: 674539
download_size: 2304138745
dataset_size: 4113794716
- config_name: ja/ja_cc/level2/CC-MAIN-2021-25
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2987137315
num_examples: 491359
download_size: 1672606708
dataset_size: 2987137315
- config_name: ja/ja_cc/level2/CC-MAIN-2021-31
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 5337751251
num_examples: 894333
download_size: 2974884729
dataset_size: 5337751251
- config_name: ja/ja_cc/level2/CC-MAIN-2021-39
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3658660723
num_examples: 589814
download_size: 2049003592
dataset_size: 3658660723
- config_name: ja/ja_cc/level2/CC-MAIN-2021-43
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4779276524
num_examples: 774828
download_size: 2667952832
dataset_size: 4779276524
- config_name: ja/ja_cc/level2/CC-MAIN-2021-49
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2337719172
num_examples: 381868
download_size: 1312420137
dataset_size: 2337719172
- config_name: ja/ja_cc/level2/CC-MAIN-2022-05
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3550948087
num_examples: 571553
download_size: 1984787611
dataset_size: 3550948087
- config_name: ja/ja_cc/level2/CC-MAIN-2022-21
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3948048805
num_examples: 649329
download_size: 2206117446
dataset_size: 3948048805
- config_name: ja/ja_cc/level2/CC-MAIN-2022-27
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3352783685
num_examples: 539013
download_size: 1874769194
dataset_size: 3352783685
- config_name: ja/ja_cc/level2/CC-MAIN-2022-33
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 1988792607
num_examples: 323095
download_size: 1117283495
dataset_size: 1988792607
- config_name: ja/ja_cc/level2/CC-MAIN-2022-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3251642710
num_examples: 526546
download_size: 1814852709
dataset_size: 3251642710
- config_name: ja/ja_cc/level2/CC-MAIN-2022-49
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3115260102
num_examples: 500740
download_size: 1743464092
dataset_size: 3115260102
- config_name: ja/ja_cc/level2/CC-MAIN-2023-06
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3192668059
num_examples: 514581
download_size: 1781993559
dataset_size: 3192668059
- config_name: ja/ja_cc/level2/CC-MAIN-2023-14
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2593964775
num_examples: 406232
download_size: 1450027612
dataset_size: 2593964775
- config_name: ja/ja_cc/level2/CC-MAIN-2023-23
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 3076009484
num_examples: 479466
download_size: 1716012252
dataset_size: 3076009484
- config_name: ja/ja_cc/level2/CC-MAIN-2023-40
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 2805739428
num_examples: 431774
download_size: 1567017259
dataset_size: 2805739428
- config_name: ja/ja_warp_html/level0
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: charset
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 4847780475
num_examples: 1119789
download_size: 2711767060
dataset_size: 4847780475
- config_name: ja/ja_warp_html/level1
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: charset
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 1305891625
num_examples: 304649
download_size: 730239447
dataset_size: 1305891625
- config_name: ja/ja_warp_html/level2
features:
- name: text
dtype: string
- name: meta
struct:
- name: docId
dtype: string
- name: url
dtype: string
- name: charset
dtype: string
- name: date
dtype: timestamp[s]
splits:
- name: train
num_bytes: 1040882464
num_examples: 235298
download_size: 580651323
dataset_size: 1040882464
- config_name: ja/ja_wiki
features:
- name: text
dtype: string
- name: meta
struct:
- name: id
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 6254632459
num_examples: 1363395
- name: validation
num_bytes: 4373553
num_examples: 1134
download_size: 3520251395
dataset_size: 6259006012
- config_name: ja/kaken
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4665198601
num_examples: 1058171
- name: validation
num_bytes: 248808941
num_examples: 56208
download_size: 1858435769
dataset_size: 4914007542
- config_name: ko/ko_wiki
features:
- name: text
dtype: string
- name: meta
struct:
- name: id
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 1430850265
num_examples: 655484
- name: validation
num_bytes: 2342145
num_examples: 1000
download_size: 834573775
dataset_size: 1433192410
- config_name: zh/zh_wiki
features:
- name: text
dtype: string
- name: meta
struct:
- name: id
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 2774101324
num_examples: 1398694
- name: validation
num_bytes: 2032067
num_examples: 1000
download_size: 1822495934
dataset_size: 2776133391
configs:
- config_name: code/code_stack
data_files:
- split: train
path: code/code_stack/train-*
- split: validation
path: code/code_stack/validation-*
- config_name: en/en_dolma/c4
data_files:
- split: train
path: en/en_dolma/c4/train-*
- config_name: en/en_wiki
data_files:
- split: train
path: en/en_wiki/train-*
- split: validation
path: en/en_wiki/validation-*
- config_name: ja/ja_cc/level0
data_files:
- split: train
path: ja/ja_cc/level0/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2013-2016
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2013-2016/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2013-2016.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2013-2016.jsonl.gz/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-04
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-04/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-04.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-04.jsonl.gz/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-09
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-09/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-09.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-09.jsonl.gz/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-13
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-13/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-13.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-13.jsonl.gz/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-17
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-17/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-17.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-17.jsonl.gz/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-22
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-22/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-26
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-26/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-26.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-26.jsonl.gz/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-30
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-30/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-30.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-30.jsonl.gz/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-34
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-34/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-39
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-39/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-43
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-43/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-47
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-47/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2017-51
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2017-51/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-05
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-05/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-09
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-09/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-13
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-13/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-17
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-17/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-22
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-22/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-26
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-26/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-30
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-30/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-34
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-34/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-39
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-39/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-43
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-43/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-47
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-47/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2018-51
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2018-51/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-04
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-04/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-09
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-09/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-13
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-13/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-18
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-18/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-22
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-22/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-26
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-26/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-30
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-30/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-35
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-35/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-39
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-39/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-43
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-43/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-47
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-47/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2019-51
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2019-51/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-05
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-05/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-10
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-10/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-16
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-16/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-24
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-24/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-29
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-29/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-34
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-34/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-40
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-40/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-45
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-45/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2020-50
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2020-50/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-04
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-04/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-10
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-10/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-17
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-17/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-21
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-21/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-25
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-25/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-31
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-31/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-39
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-39/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-43
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-43/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2021-49
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2021-49/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2022-05
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2022-05/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2022-21
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2022-21/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2022-27
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2022-27/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2022-33
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2022-33/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2022-40
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2022-40/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2022-49
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2022-49/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2023-06
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2023-06/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2023-14
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2023-14/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2023-23
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2023-23/train-*
- config_name: ja/ja_cc/level0/CC-MAIN-2023-40
data_files:
- split: train
path: ja/ja_cc/level0/CC-MAIN-2023-40/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2013-2016
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2013-2016/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2013-2016.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2013-2016.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-04
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-04/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-04.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-04.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-09
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-09/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-09.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-09.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-13
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-13/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-13.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-13.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-17
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-17/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-17.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-17.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-22
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-22/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-22.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-22.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-26
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-26/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-26.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-26.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-30
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-30/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-30.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-30.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-34
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-34/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-34.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-34.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-39
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-39/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-39.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-39.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-43
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-43/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-43.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-43.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-47
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-47/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-47.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-47.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-51
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-51/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2017-51.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2017-51.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-05
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-05/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-05.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-05.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-09
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-09/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-09.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-09.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-13
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-13/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-13.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-13.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-17
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-17/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-17.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-17.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-22
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-22/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-22.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-22.jsonl.gz/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-26
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-26/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-30
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-30/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-34
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-34/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-39
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-39/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-43
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-43/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-47
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-47/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2018-51
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2018-51/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-04
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-04/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-09
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-09/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-13
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-13/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-18
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-18/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-22
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-22/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-26
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-26/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-30
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-30/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-35
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-35/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-39
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-39/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-43
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-43/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-47
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-47/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2019-51
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2019-51/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-05
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-05/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-10
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-10/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-16
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-16/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-24
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-24/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-29
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-29/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-34
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-34/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-40
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-40/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-45
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-45/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2020-50
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2020-50/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-04
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-04/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-10
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-10/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-17
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-17/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-21
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-21/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-25
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-25/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-31
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-31/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-39
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-39/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-43
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-43/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2021-49
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2021-49/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2022-05
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2022-05/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2022-21
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2022-21/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2022-27
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2022-27/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2022-33
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2022-33/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2022-40
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2022-40/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2022-49
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2022-49/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2023-06
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2023-06/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2023-14
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2023-14/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2023-23
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2023-23/train-*
- config_name: ja/ja_cc/level1/CC-MAIN-2023-40
data_files:
- split: train
path: ja/ja_cc/level1/CC-MAIN-2023-40/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2013-2016
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2013-2016/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2013-2016.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2013-2016.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-04
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-04/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-04.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-04.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-09
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-09/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-09.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-09.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-13
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-13/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-13.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-13.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-17
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-17/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-17.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-17.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-22
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-22/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-22.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-22.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-26
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-26/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-26.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-26.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-30
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-30/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-30.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-30.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-34
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-34/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-34.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-34.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-39
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-39/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-39.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-39.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-43
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-43/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-43.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-43.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-47
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-47/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-47.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-47.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-51
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-51/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2017-51.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2017-51.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-05
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-05/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-09
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-09/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-09.jsonl.gz
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-09.jsonl.gz/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-13
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-13/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-17
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-17/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-22
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-22/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-26
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-26/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-30
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-30/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-34
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-34/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-39
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-39/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-43
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-43/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-47
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-47/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2018-51
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2018-51/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-04
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-04/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-09
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-09/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-13
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-13/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-18
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-18/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-22
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-22/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-26
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-26/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-30
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-30/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-35
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-35/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-39
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-39/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-43
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-43/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-47
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-47/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2019-51
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2019-51/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-05
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-05/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-10
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-10/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-16
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-16/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-24
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-24/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-29
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-29/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-34
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-34/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-40
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-40/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-45
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-45/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2020-50
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2020-50/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-04
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-04/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-10
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-10/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-17
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-17/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-21
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-21/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-25
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-25/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-31
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-31/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-39
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-39/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-43
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-43/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2021-49
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2021-49/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2022-05
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2022-05/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2022-21
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2022-21/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2022-27
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2022-27/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2022-33
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2022-33/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2022-40
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2022-40/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2022-49
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2022-49/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2023-06
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2023-06/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2023-14
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2023-14/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2023-23
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2023-23/train-*
- config_name: ja/ja_cc/level2/CC-MAIN-2023-40
data_files:
- split: train
path: ja/ja_cc/level2/CC-MAIN-2023-40/train-*
- config_name: ja/ja_warp_html/level0
data_files:
- split: train
path: ja/ja_warp_html/level0/train-*
- config_name: ja/ja_warp_html/level1
data_files:
- split: train
path: ja/ja_warp_html/level1/train-*
- config_name: ja/ja_warp_html/level2
data_files:
- split: train
path: ja/ja_warp_html/level2/train-*
- config_name: ja/ja_wiki
data_files:
- split: train
path: ja/ja_wiki/train-*
- split: validation
path: ja/ja_wiki/validation-*
- config_name: ja/kaken
data_files:
- split: train
path: ja/kaken/train-*
- split: validation
path: ja/kaken/validation-*
- config_name: ko/ko_wiki
data_files:
- split: train
path: ko/ko_wiki/train-*
- split: validation
path: ko/ko_wiki/validation-*
- config_name: zh/zh_wiki
data_files:
- split: train
path: zh/zh_wiki/train-*
- split: validation
path: zh/zh_wiki/validation-*
---
|
fimu-docproc-research/CIVQA-TesseractOCR-LayoutLM | fimu-docproc-research | "2023-11-21T20:45:53Z" | 12,399 | 0 | [
"language:cs",
"license:mit",
"region:us",
"finance"
] | null | "2023-11-16T01:45:17Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int64
- name: bbox
dtype:
array2_d:
shape:
- 512
- 4
dtype: int64
- name: attention_mask
sequence: int64
- name: image
dtype:
array3_d:
shape:
- 3
- 224
- 224
dtype: int64
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
- name: questions
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 198175471439
num_examples: 160645
- name: validation
num_bytes: 20009392368
num_examples: 16220
download_size: 826530358
dataset_size: 218184863807
language:
- cs
tags:
- finance
pretty_name: C
license: mit
---
# CIVQA TesseractOCR LayoutLM Dataset
The Czech Invoice Visual Question Answering (CIVQA) dataset was created with Tesseract OCR and encoded for LayoutLM.
The pre-encoded dataset can be found at the following link: https://huggingface.co/datasets/fimu-docproc-research/CIVQA-TesseractOCR
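As a minimal loading sketch (assuming the default configuration and the `train`/`validation` splits listed in the metadata above), the encoded examples can be pulled directly with the `datasets` library:

```python
# Minimal sketch; assumes the default config and the split names from the metadata above.
from datasets import load_dataset

ds = load_dataset("fimu-docproc-research/CIVQA-TesseractOCR-LayoutLM", split="validation")

sample = ds[0]
print(sample["questions"])       # natural-language question about the invoice
print(sample["answers"])         # expected answer string
print(len(sample["input_ids"]))  # token ids; the schema declares bbox as a (512, 4) array
```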
All invoices used in this dataset were obtained from public sources. Across these invoices, we focus on 15 different entities that are crucial for invoice processing:
- Invoice number
- Variable symbol
- Specific symbol
- Constant symbol
- Bank code
- Account number
- ICO
- Total amount
- Invoice date
- Due date
- Name of supplier
- IBAN
- DIC
- QR code
- Supplier's address
The invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by the inclusion of your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it excluded from research use, we kindly ask you to fill in the form at the following URL: https://forms.gle/tUVJKoB22oeTncUD6
We profoundly appreciate your cooperation and understanding in this matter. |
bigcode/commitpack | bigcode | "2025-02-04T15:41:01Z" | 12,392 | 63 | [
"language:code",
"license:mit",
"arxiv:2308.07124",
"region:us"
] | null | "2023-01-17T11:53:28Z" | ---
license: mit
pretty_name: CommitPack
language:
- code
---

# Dataset Card for CommitPack
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigcode-project/octopack
- **Paper:** [OctoPack: Instruction Tuning Code Large Language Models](https://arxiv.org/abs/2308.07124)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> CommitPack is a 4TB dataset of commits scraped from GitHub repositories that are permissively licensed.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigcode-project/octopack).
- **Languages:** 350
- **OctoPack🐙🎒:**
<table>
<tr>
<th>Data</th>
<td><a href=https://huggingface.co/datasets/bigcode/commitpack>CommitPack</a></td>
<td>4TB of GitHub commits across 350 programming languages</td>
</tr>
<tr>
<th></th>
<td><a href=https://huggingface.co/datasets/bigcode/commitpackft>CommitPackFT</a></td>
<td>Filtered version of CommitPack for high-quality commit messages that resemble instructions</td>
</tr>
<tr>
<th>Model</th>
<td><a href=https://huggingface.co/bigcode/octocoder>OctoCoder</a></td>
<td>StarCoder (16B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th></th>
<td><a href=https://huggingface.co/bigcode/octogeex>OctoGeeX</a></td>
<td>CodeGeeX2 (6B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th>Evaluation</th>
<td><a href=https://huggingface.co/datasets/bigcode/humanevalpack>HumanEvalPack</a></td>
<td>Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages</td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
'commit': '0c17311f7fd511f5dae8f8e4acc2dce1a2de3cf5',
'old_file': 'main.py',
'new_file': 'main.py',
'old_contents': "import numpy as np\nimport matplotlib.pyplot as plt\n\n# generate sample data\nx_data = np.linspace(-5, 5, 20)\ny_data = np.random.normal(0.0, 1.0, x_data.size)\n\nplt.plot(x_data, y_data, 'o')\nplt.show()\n",
'new_contents': "import math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# generate sample data\nx_data = np.linspace(-math.pi, math.pi, 30)\ny_data = np.sin(x_data) + np.random.normal(0.0, 0.1, x_data.size)\n\nplt.plot(x_data, y_data, 'o')\nplt.show()\n\n",
'subject': 'Change to sin() function with noise',
'message': 'Change to sin() function with noise\n',
'lang': 'Python',
'license': 'mit',
'repos': 'MorganR/basic-gaussian-process',
'returncode': 0,
'stderr': ''
}
```
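A sketch of how such an instance can be retrieved is shown below; the per-language configuration name (`"python"`) is an assumption based on the `lang` values in the data-splits table, and streaming is used so the full ~4TB does not have to be downloaded up front:

```python
# Sketch only: the "python" config name is assumed from the `lang` values below.
# streaming=True avoids downloading the full ~4TB dataset.
from datasets import load_dataset

ds = load_dataset("bigcode/commitpack", "python", streaming=True, split="train")

sample = next(iter(ds))
print(sample["subject"])             # short description of the code change
print(sample["old_contents"][:200])  # file contents before the commit
```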
### Data Fields
The data fields are the same among all splits:
- `commit`: unique commit id
- `old_file`: name of the file before the commit
- `new_file`: name of the file after the commit
- `old_contents`: contents of the file before the commit
- `new_contents`: contents of the file after the commit
- `subject`: subject of the commit (this is used for all experiments in the paper)
- `message`: message of the commit (commonly the same as the subject)
- `lang`: programming language
- `license`: license of the repository the code stems from, one of `['mit', 'artistic-2.0', 'isc', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'unknown', 'apache-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-2.1', 'bsd-2-clause']`
- `repos`: name of the repository the code stems from (if multiple, they are comma-separated)
- `returncode`: if applicable, the error code during scraping (0 = no error)
- `stderr`: if applicable, the error that occurred during scraping (empty = no error)
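Because `subject` acts as a natural-language edit instruction and `old_contents`/`new_contents` capture the before/after file states, a record can be flattened into an instruction-style triple. The template below is only an illustrative sketch, not the exact formatting used in the paper:

```python
# Illustrative sketch; the (instruction, input, output) template is an assumption,
# not the exact prompt format used for the OctoCoder/OctoGeeX experiments.
def to_instruction_example(record: dict) -> dict:
    """Flatten one CommitPack record into an instruction-tuning triple."""
    return {
        "instruction": record["subject"],   # commit subject as the edit instruction
        "input": record["old_contents"],    # code before the commit
        "output": record["new_contents"],   # code after the commit
    }

example = to_instruction_example({
    "subject": "Change to sin() function with noise",
    "old_contents": "y = np.random.normal(0.0, 1.0, x.size)",
    "new_contents": "y = np.sin(x) + np.random.normal(0.0, 0.1, x.size)",
})
print(example["instruction"])
```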
### Data Splits
| Name | Megabytes | % of total | Samples | % of total |
| --- | --- | --- | --- | --- |
| total | 3709175.78 | 100.0% | 57700105 | 100.0% |
| json | 583293.816 | 15.7257% | 3495038 | 6.0572% |
| xml | 279208.676 | 7.5275% | 1923159 | 3.333% |
| text | 270662.596 | 7.2971% | 1389525 | 2.4082% |
| javascript | 262824.844 | 7.0858% | 5401937 | 9.3621% |
| objective-c++ | 239009.3 | 6.4437% | 32227 | 0.0559% |
| python | 234311.564 | 6.3171% | 6189601 | 10.7272% |
| c | 200876.804 | 5.4157% | 2779478 | 4.8171% |
| c++ | 186585.256 | 5.0304% | 2402294 | 4.1634% |
| markdown | 171849.952 | 4.6331% | 7645354 | 13.2502% |
| java | 127103.448 | 3.4267% | 3744377 | 6.4894% |
| html | 105305.284 | 2.839% | 2366841 | 4.102% |
| yaml | 100466.64 | 2.7086% | 2592787 | 4.4936% |
| go | 86444.624 | 2.3306% | 1183612 | 2.0513% |
| csv | 82946.192 | 2.2362% | 79268 | 0.1374% |
| php | 74961.64 | 2.021% | 2555419 | 4.4288% |
| jupyter-notebook | 66854.08 | 1.8024% | 94000 | 0.1629% |
| gettext-catalog | 62296.88 | 1.6795% | 168327 | 0.2917% |
| sql | 56802.764 | 1.5314% | 132772 | 0.2301% |
| unity3d-asset | 39535.008 | 1.0659% | 17867 | 0.031% |
| typescript | 39254.804 | 1.0583% | 572136 | 0.9916% |
| web-ontology-language | 36435.464 | 0.9823% | 7458 | 0.0129% |
| ruby | 35830.74 | 0.966% | 2928702 | 5.0757% |
| c# | 33669.652 | 0.9077% | 923157 | 1.5999% |
| nix | 33547.92 | 0.9045% | 221281 | 0.3835% |
| shell | 25109.952 | 0.677% | 1017977 | 1.7643% |
| perl | 21148.928 | 0.5702% | 374266 | 0.6486% |
| tex | 17471.108 | 0.471% | 89283 | 0.1547% |
| css | 16306.632 | 0.4396% | 548818 | 0.9512% |
| restructuredtext | 15613.888 | 0.421% | 494037 | 0.8562% |
| rust | 15011.296 | 0.4047% | 296214 | 0.5134% |
| groff | 12020.188 | 0.3241% | 32923 | 0.0571% |
| ini | 8375.164 | 0.2258% | 297100 | 0.5149% |
| scala | 8325.96 | 0.2245% | 316064 | 0.5478% |
| coffeescript | 6795.14 | 0.1832% | 292446 | 0.5068% |
| haskell | 6306.12 | 0.17% | 217325 | 0.3766% |
| swift | 5902.716 | 0.1591% | 319289 | 0.5534% |
| lua | 5763.12 | 0.1554% | 139091 | 0.2411% |
| svg | 5645.44 | 0.1522% | 27095 | 0.047% |
| gas | 5585.384 | 0.1506% | 15121 | 0.0262% |
| ocaml | 5355.4 | 0.1444% | 81360 | 0.141% |
| erlang | 5043.32 | 0.136% | 93685 | 0.1624% |
| makefile | 4238.512 | 0.1143% | 343379 | 0.5951% |
| asciidoc | 4138.588 | 0.1116% | 96671 | 0.1675% |
| emacs-lisp | 3988.652 | 0.1075% | 83228 | 0.1442% |
| scss | 3944.936 | 0.1064% | 288190 | 0.4995% |
| clojure | 3523.408 | 0.095% | 158674 | 0.275% |
| org | 3126.22 | 0.0843% | 30198 | 0.0523% |
| common-lisp | 2954.904 | 0.0797% | 74628 | 0.1293% |
| diff | 2586.048 | 0.0697% | 21021 | 0.0364% |
| groovy | 2569.14 | 0.0693% | 110057 | 0.1907% |
| html+erb | 2450.676 | 0.0661% | 225379 | 0.3906% |
| nesc | 2439.564 | 0.0658% | 473 | 0.0008% |
| dart | 2395.796 | 0.0646% | 56873 | 0.0986% |
| powershell | 2289.276 | 0.0617% | 55381 | 0.096% |
| f# | 2289.236 | 0.0617% | 66840 | 0.1158% |
| dm | 2223.144 | 0.0599% | 55584 | 0.0963% |
| kotlin | 2219.248 | 0.0598% | 124266 | 0.2154% |
| pascal | 2194.676 | 0.0592% | 42511 | 0.0737% |
| jsx | 2124.744 | 0.0573% | 139148 | 0.2412% |
| viml | 1948.208 | 0.0525% | 74062 | 0.1284% |
| actionscript | 1844.148 | 0.0497% | 28819 | 0.0499% |
| cython | 1736.588 | 0.0468% | 25927 | 0.0449% |
| turtle | 1698.948 | 0.0458% | 3882 | 0.0067% |
| less | 1616.564 | 0.0436% | 88634 | 0.1536% |
| mathematica | 1475.044 | 0.0398% | 925 | 0.0016% |
| xslt | 1441.456 | 0.0389% | 27956 | 0.0485% |
| scheme | 1249.244 | 0.0337% | 30546 | 0.0529% |
| perl6 | 1223.16 | 0.033% | 12167 | 0.0211% |
| edn | 1186.94 | 0.032% | 2289 | 0.004% |
| fortran | 1178.548 | 0.0318% | 13463 | 0.0233% |
| java-server-pages | 1173.072 | 0.0316% | 53574 | 0.0928% |
| standard-ml | 1133.476 | 0.0306% | 20097 | 0.0348% |
| cmake | 1132.068 | 0.0305% | 58446 | 0.1013% |
| json5 | 1108.2 | 0.0299% | 1827 | 0.0032% |
| vala | 1104.512 | 0.0298% | 14822 | 0.0257% |
| vue | 1093.8 | 0.0295% | 68967 | 0.1195% |
| freemarker | 1032.332 | 0.0278% | 36216 | 0.0628% |
| graphql | 1004.844 | 0.0271% | 2009 | 0.0035% |
| twig | 958.96 | 0.0259% | 39588 | 0.0686% |
| tcl | 869.832 | 0.0235% | 16407 | 0.0284% |
| pod | 859.016 | 0.0232% | 14922 | 0.0259% |
| dockerfile | 849.728 | 0.0229% | 259379 | 0.4495% |
| yacc | 845.704 | 0.0228% | 8230 | 0.0143% |
| postscript | 800.728 | 0.0216% | 903 | 0.0016% |
| racket | 796.64 | 0.0215% | 16615 | 0.0288% |
| eagle | 785.684 | 0.0212% | 2237 | 0.0039% |
| haxe | 772.896 | 0.0208% | 28447 | 0.0493% |
| julia | 752.068 | 0.0203% | 22695 | 0.0393% |
| handlebars | 740.816 | 0.02% | 49842 | 0.0864% |
| smarty | 720.944 | 0.0194% | 41065 | 0.0712% |
| visual-basic | 681.516 | 0.0184% | 10511 | 0.0182% |
| literate-haskell | 673.74 | 0.0182% | 10729 | 0.0186% |
| smalltalk | 665.892 | 0.018% | 11741 | 0.0203% |
| isabelle | 655.82 | 0.0177% | 8359 | 0.0145% |
| nimrod | 652.86 | 0.0176% | 12023 | 0.0208% |
| zig | 621.384 | 0.0168% | 4290 | 0.0074% |
| m4 | 603.584 | 0.0163% | 12465 | 0.0216% |
| max | 603.56 | 0.0163% | 2259 | 0.0039% |
| elixir | 558.116 | 0.015% | 35473 | 0.0615% |
| mako | 543.012 | 0.0146% | 8943 | 0.0155% |
| arduino | 534.176 | 0.0144% | 32350 | 0.0561% |
| jade | 531.4 | 0.0143% | 46993 | 0.0814% |
| haml | 502.012 | 0.0135% | 74792 | 0.1296% |
| elm | 481.968 | 0.013% | 18542 | 0.0321% |
| purebasic | 474.276 | 0.0128% | 36 | 0.0001% |
| coldfusion | 470.78 | 0.0127% | 9263 | 0.0161% |
| lean | 470.032 | 0.0127% | 7507 | 0.013% |
| r | 454.32 | 0.0122% | 12858 | 0.0223% |
| cuda | 437.668 | 0.0118% | 11450 | 0.0198% |
| textile | 425.116 | 0.0115% | 18491 | 0.032% |
| robotframework | 421.612 | 0.0114% | 9211 | 0.016% |
| abap | 409.62 | 0.011% | 1955 | 0.0034% |
| rdoc | 397.028 | 0.0107% | 38760 | 0.0672% |
| llvm | 382.2 | 0.0103% | 10727 | 0.0186% |
| ada | 380.7 | 0.0103% | 13258 | 0.023% |
| batchfile | 372.16 | 0.01% | 43674 | 0.0757% |
| qml | 361.452 | 0.0097% | 19360 | 0.0336% |
| jasmin | 359.82 | 0.0097% | 4782 | 0.0083% |
| assembly | 343.62 | 0.0093% | 8126 | 0.0141% |
| g-code | 334.964 | 0.009% | 3690 | 0.0064% |
| cucumber | 331.38 | 0.0089% | 26677 | 0.0462% |
| html+php | 323.348 | 0.0087% | 18381 | 0.0319% |
| kicad | 321.936 | 0.0087% | 759 | 0.0013% |
| api-blueprint | 317.852 | 0.0086% | 4765 | 0.0083% |
| eiffel | 311.48 | 0.0084% | 373 | 0.0006% |
| toml | 292.676 | 0.0079% | 63517 | 0.1101% |
| modelica | 284.616 | 0.0077% | 2611 | 0.0045% |
| bitbake | 277.576 | 0.0075% | 43239 | 0.0749% |
| lex | 275.96 | 0.0074% | 705 | 0.0012% |
| stylus | 273.056 | 0.0074% | 21967 | 0.0381% |
| protocol-buffer | 254.124 | 0.0069% | 9202 | 0.0159% |
| unknown | 252.228 | 0.0068% | 30570 | 0.053% |
| nit | 244.54 | 0.0066% | 4951 | 0.0086% |
| factor | 241.192 | 0.0065% | 15378 | 0.0267% |
| xs | 239.04 | 0.0064% | 3215 | 0.0056% |
| sass | 230.648 | 0.0062% | 23144 | 0.0401% |
| parrot-internal-representation | 230.196 | 0.0062% | 6231 | 0.0108% |
| html+django | 217.04 | 0.0059% | 10535 | 0.0183% |
| mediawiki | 214.324 | 0.0058% | 10188 | 0.0177% |
| logos | 212.296 | 0.0057% | 1733 | 0.003% |
| genshi | 209.3 | 0.0056% | 956 | 0.0017% |
| coldfusion-cfc | 208.164 | 0.0056% | 4410 | 0.0076% |
| xtend | 179.544 | 0.0048% | 7775 | 0.0135% |
| sqf | 168.656 | 0.0045% | 7778 | 0.0135% |
| vhdl | 155.948 | 0.0042% | 2185 | 0.0038% |
| antlr | 143.548 | 0.0039% | 3651 | 0.0063% |
| systemverilog | 140.192 | 0.0038% | 3944 | 0.0068% |
| hcl | 136.752 | 0.0037% | 13379 | 0.0232% |
| asp | 136.104 | 0.0037% | 4286 | 0.0074% |
| nsis | 129.124 | 0.0035% | 4048 | 0.007% |
| inform-7 | 120.188 | 0.0032% | 184 | 0.0003% |
| slim | 119.036 | 0.0032% | 18726 | 0.0325% |
| groovy-server-pages | 117.368 | 0.0032% | 6695 | 0.0116% |
| ceylon | 116.144 | 0.0031% | 7256 | 0.0126% |
| fish | 111.28 | 0.003% | 15351 | 0.0266% |
| processing | 108.58 | 0.0029% | 5912 | 0.0102% |
| component-pascal | 105.5 | 0.0028% | 43 | 0.0001% |
| lasso | 104.168 | 0.0028% | 67 | 0.0001% |
| glsl | 99.488 | 0.0027% | 9478 | 0.0164% |
| saltstack | 98.196 | 0.0026% | 12314 | 0.0213% |
| xbase | 94.424 | 0.0025% | 1670 | 0.0029% |
| autohotkey | 94.22 | 0.0025% | 1452 | 0.0025% |
| liquid | 93.792 | 0.0025% | 2651 | 0.0046% |
| purescript | 92.412 | 0.0025% | 5024 | 0.0087% |
| agda | 92.06 | 0.0025% | 4956 | 0.0086% |
| inno-setup | 91.36 | 0.0025% | 3014 | 0.0052% |
| oz | 90.476 | 0.0024% | 1551 | 0.0027% |
| chapel | 89.62 | 0.0024% | 26447 | 0.0458% |
| arc | 87.212 | 0.0024% | 758 | 0.0013% |
| opencl | 86.432 | 0.0023% | 2489 | 0.0043% |
| graphviz-dot | 85.804 | 0.0023% | 1525 | 0.0026% |
| pawn | 85.424 | 0.0023% | 580 | 0.001% |
| jsoniq | 75.152 | 0.002% | 1343 | 0.0023% |
| bluespec | 72.38 | 0.002% | 2500 | 0.0043% |
| smali | 71.38 | 0.0019% | 174 | 0.0003% |
| krl | 69.868 | 0.0019% | 1879 | 0.0033% |
| maple | 68.284 | 0.0018% | 1311 | 0.0023% |
| unrealscript | 67.668 | 0.0018% | 585 | 0.001% |
| ooc | 63.188 | 0.0017% | 3416 | 0.0059% |
| pure-data | 62.624 | 0.0017% | 603 | 0.001% |
| xquery | 61.956 | 0.0017% | 2237 | 0.0039% |
| digital-command-language | 59.644 | 0.0016% | 833 | 0.0014% |
| moonscript | 59.208 | 0.0016% | 1951 | 0.0034% |
| awk | 57.176 | 0.0015% | 2206 | 0.0038% |
| pike | 52.872 | 0.0014% | 1262 | 0.0022% |
| livescript | 51.228 | 0.0014% | 5194 | 0.009% |
| solidity | 50.856 | 0.0014% | 3689 | 0.0064% |
| monkey | 48.256 | 0.0013% | 1367 | 0.0024% |
| jsonld | 48.012 | 0.0013% | 462 | 0.0008% |
| zephir | 42.684 | 0.0012% | 1265 | 0.0022% |
| crystal | 41.924 | 0.0011% | 4217 | 0.0073% |
| rhtml | 41.02 | 0.0011% | 4551 | 0.0079% |
| stata | 40.684 | 0.0011% | 1344 | 0.0023% |
| idris | 39.896 | 0.0011% | 3025 | 0.0052% |
| raml | 39.388 | 0.0011% | 948 | 0.0016% |
| openscad | 37.732 | 0.001% | 2178 | 0.0038% |
| red | 35.26 | 0.001% | 1108 | 0.0019% |
| c2hs-haskell | 34.472 | 0.0009% | 1021 | 0.0018% |
| cycript | 33.96 | 0.0009% | 197 | 0.0003% |
| applescript | 33.512 | 0.0009% | 1304 | 0.0023% |
| mupad | 32.488 | 0.0009% | 178 | 0.0003% |
| literate-agda | 31.384 | 0.0008% | 567 | 0.001% |
| boo | 31.172 | 0.0008% | 26289 | 0.0456% |
| sourcepawn | 29.528 | 0.0008% | 717 | 0.0012% |
| qmake | 29.508 | 0.0008% | 3632 | 0.0063% |
| ragel-in-ruby-host | 28.296 | 0.0008% | 888 | 0.0015% |
| io | 27.952 | 0.0008% | 1247 | 0.0022% |
| desktop | 27.648 | 0.0007% | 5021 | 0.0087% |
| propeller-spin | 26.772 | 0.0007% | 625 | 0.0011% |
| thrift | 26.748 | 0.0007% | 1007 | 0.0017% |
| volt | 25.052 | 0.0007% | 1660 | 0.0029% |
| xproc | 24.212 | 0.0007% | 914 | 0.0016% |
| igor-pro | 23.748 | 0.0006% | 388 | 0.0007% |
| lolcode | 23.74 | 0.0006% | 24861 | 0.0431% |
| html+eex | 21.412 | 0.0006% | 2100 | 0.0036% |
| logtalk | 20.428 | 0.0006% | 1035 | 0.0018% |
| mirah | 20.104 | 0.0005% | 706 | 0.0012% |
| gnuplot | 19.676 | 0.0005% | 889 | 0.0015% |
| literate-coffeescript | 19.016 | 0.0005% | 1041 | 0.0018% |
| jflex | 18.608 | 0.0005% | 555 | 0.001% |
| emberscript | 18.392 | 0.0005% | 1024 | 0.0018% |
| cobol | 17.0 | 0.0005% | 24953 | 0.0432% |
| yang | 16.94 | 0.0005% | 597 | 0.001% |
| rebol | 16.468 | 0.0004% | 239 | 0.0004% |
| linker-script | 16.084 | 0.0004% | 1604 | 0.0028% |
| cartocss | 15.916 | 0.0004% | 555 | 0.001% |
| urweb | 13.068 | 0.0004% | 304 | 0.0005% |
| rmarkdown | 13.032 | 0.0004% | 750 | 0.0013% |
| darcs-patch | 13.008 | 0.0004% | 80 | 0.0001% |
| csound | 12.852 | 0.0003% | 229 | 0.0004% |
| squirrel | 12.844 | 0.0003% | 531 | 0.0009% |
| apl | 12.56 | 0.0003% | 586 | 0.001% |
| hlsl | 12.168 | 0.0003% | 1529 | 0.0026% |
| latte | 11.888 | 0.0003% | 1380 | 0.0024% |
| pony | 11.836 | 0.0003% | 624 | 0.0011% |
| ioke | 10.86 | 0.0003% | 373 | 0.0006% |
| hy | 10.512 | 0.0003% | 879 | 0.0015% |
| uno | 10.356 | 0.0003% | 628 | 0.0011% |
| pan | 10.336 | 0.0003% | 637 | 0.0011% |
| xojo | 10.308 | 0.0003% | 642 | 0.0011% |
| papyrus | 10.256 | 0.0003% | 130 | 0.0002% |
| stan | 10.252 | 0.0003% | 540 | 0.0009% |
| slash | 9.904 | 0.0003% | 640 | 0.0011% |
| supercollider | 9.796 | 0.0003% | 318 | 0.0006% |
| vcl | 9.456 | 0.0003% | 747 | 0.0013% |
| smt | 9.032 | 0.0002% | 117 | 0.0002% |
| glyph | 8.948 | 0.0002% | 7 | 0.0% |
| wisp | 8.736 | 0.0002% | 262 | 0.0005% |
| renpy | 8.3 | 0.0002% | 421 | 0.0007% |
| clips | 7.728 | 0.0002% | 450 | 0.0008% |
| dns-zone | 7.56 | 0.0002% | 54 | 0.0001% |
| sas | 7.536 | 0.0002% | 269 | 0.0005% |
| rouge | 7.196 | 0.0002% | 396 | 0.0007% |
| ec | 7.032 | 0.0002% | 94 | 0.0002% |
| dylan | 6.82 | 0.0002% | 280 | 0.0005% |
| tcsh | 6.524 | 0.0002% | 748 | 0.0013% |
| aspectj | 6.332 | 0.0002% | 451 | 0.0008% |
| netlogo | 6.304 | 0.0002% | 140 | 0.0002% |
| gap | 6.096 | 0.0002% | 46 | 0.0001% |
| fancy | 5.952 | 0.0002% | 675 | 0.0012% |
| coq | 5.744 | 0.0002% | 80 | 0.0001% |
| click | 5.74 | 0.0002% | 9 | 0.0% |
| capn-proto | 5.644 | 0.0002% | 330 | 0.0006% |
| flux | 5.572 | 0.0002% | 47 | 0.0001% |
| forth | 5.512 | 0.0001% | 265 | 0.0005% |
| ats | 5.424 | 0.0001% | 383 | 0.0007% |
| netlinx | 5.172 | 0.0001% | 144 | 0.0002% |
| clean | 5.068 | 0.0001% | 171 | 0.0003% |
| parrot-assembly | 4.664 | 0.0001% | 227 | 0.0004% |
| alloy | 4.644 | 0.0001% | 203 | 0.0004% |
| lfe | 4.576 | 0.0001% | 287 | 0.0005% |
| gdscript | 4.488 | 0.0001% | 460 | 0.0008% |
| augeas | 4.444 | 0.0001% | 395 | 0.0007% |
| sparql | 4.404 | 0.0001% | 1036 | 0.0018% |
| lilypond | 4.308 | 0.0001% | 265 | 0.0005% |
| scilab | 4.088 | 0.0001% | 375 | 0.0006% |
| autoit | 4.06 | 0.0001% | 279 | 0.0005% |
| myghty | 3.864 | 0.0001% | 105 | 0.0002% |
| blitzmax | 3.74 | 0.0001% | 220 | 0.0004% |
| creole | 3.416 | 0.0001% | 337 | 0.0006% |
| harbour | 3.336 | 0.0001% | 107 | 0.0002% |
| piglatin | 3.168 | 0.0001% | 513 | 0.0009% |
| opa | 3.164 | 0.0001% | 211 | 0.0004% |
| sage | 3.032 | 0.0001% | 414 | 0.0007% |
| ston | 2.848 | 0.0001% | 414 | 0.0007% |
| maxscript | 2.8 | 0.0001% | 47 | 0.0001% |
| lsl | 2.68 | 0.0001% | 74 | 0.0001% |
| gentoo-ebuild | 2.576 | 0.0001% | 601 | 0.001% |
| nu | 2.38 | 0.0001% | 170 | 0.0003% |
| bro | 2.34 | 0.0001% | 333 | 0.0006% |
| xc | 2.02 | 0.0001% | 88 | 0.0002% |
| j | 1.808 | 0.0% | 142 | 0.0002% |
| metal | 1.724 | 0.0% | 151 | 0.0003% |
| module-management-system | 1.544 | 0.0% | 91 | 0.0002% |
| webidl | 1.508 | 0.0% | 96 | 0.0002% |
| tea | 1.468 | 0.0% | 29 | 0.0001% |
| redcode | 1.272 | 0.0% | 149 | 0.0003% |
| shen | 1.2 | 0.0% | 71 | 0.0001% |
| pov-ray-sdl | 1.136 | 0.0% | 104 | 0.0002% |
| x10 | 1.008 | 0.0% | 33 | 0.0001% |
| brainfuck | 0.964 | 0.0% | 167 | 0.0003% |
| ninja | 0.952 | 0.0% | 187 | 0.0003% |
| golo | 0.896 | 0.0% | 115 | 0.0002% |
| webassembly | 0.86 | 0.0% | 83 | 0.0001% |
| self | 0.824 | 0.0% | 15 | 0.0% |
| labview | 0.808 | 0.0% | 61 | 0.0001% |
| octave | 0.804 | 0.0% | 12 | 0.0% |
| pogoscript | 0.804 | 0.0% | 74 | 0.0001% |
| d | 0.796 | 0.0% | 20 | 0.0% |
| http | 0.736 | 0.0% | 140 | 0.0002% |
| ecl | 0.664 | 0.0% | 48 | 0.0001% |
| chuck | 0.584 | 0.0% | 99 | 0.0002% |
| gosu | 0.524 | 0.0% | 60 | 0.0001% |
| parrot | 0.52 | 0.0% | 17 | 0.0% |
| opal | 0.472 | 0.0% | 69 | 0.0001% |
| objective-j | 0.456 | 0.0% | 37 | 0.0001% |
| kit | 0.412 | 0.0% | 48 | 0.0001% |
| gams | 0.376 | 0.0% | 18 | 0.0% |
| prolog | 0.276 | 0.0% | 35 | 0.0001% |
| clarion | 0.268 | 0.0% | 13 | 0.0% |
| mask | 0.252 | 0.0% | 37 | 0.0001% |
| brightscript | 0.244 | 0.0% | 28 | 0.0% |
| scaml | 0.184 | 0.0% | 31 | 0.0001% |
| matlab | 0.164 | 0.0% | 29 | 0.0001% |
| idl | 0.148 | 0.0% | 1 | 0.0% |
| ags-script | 0.124 | 0.0% | 31 | 0.0001% |
| lookml | 0.12 | 0.0% | 10 | 0.0% |
| apacheconf | 0.108 | 0.0% | 59 | 0.0001% |
| oxygene | 0.104 | 0.0% | 9 | 0.0% |
| txl | 0.096 | 0.0% | 3 | 0.0% |
| grammatical-framework | 0.088 | 0.0% | 39 | 0.0001% |
| renderscript | 0.064 | 0.0% | 54 | 0.0001% |
| mtml | 0.052 | 0.0% | 13 | 0.0% |
| unified-parallel-c | 0.052 | 0.0% | 6 | 0.0% |
| dogescript | 0.04 | 0.0% | 10 | 0.0% |
| gentoo-eclass | 0.04 | 0.0% | 6 | 0.0% |
| zimpl | 0.04 | 0.0% | 7 | 0.0% |
| irc-log | 0.036 | 0.0% | 9 | 0.0% |
| fantom | 0.028 | 0.0% | 11 | 0.0% |
| numpy | 0.028 | 0.0% | 1 | 0.0% |
| cirru | 0.024 | 0.0% | 4 | 0.0% |
| xpages | 0.024 | 0.0% | 7 | 0.0% |
| nginx | 0.02 | 0.0% | 6 | 0.0% |
| objdump | 0.02 | 0.0% | 1 | 0.0% |
| python-traceback | 0.02 | 0.0% | 10 | 0.0% |
| realbasic | 0.012 | 0.0% | 1 | 0.0% |
| befunge | 0.008 | 0.0% | 2 | 0.0% |
| bison | 0.008 | 0.0% | 1 | 0.0% |
| m | 0.008 | 0.0% | 1 | 0.0% |
| omgrofl | 0.008 | 0.0% | 1 | 0.0% |
## Additional Information
### Licensing Information
Each sample comes from a code repository with a permissive license. The license is provided by the `license` field for each sample.
### Citation Information
```bibtex
@article{muennighoff2023octopack,
title={OctoPack: Instruction Tuning Code Large Language Models},
author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre},
journal={arXiv preprint arXiv:2308.07124},
year={2023}
}
```
|
lukebarousse/data_jobs | lukebarousse | "2024-05-27T19:07:08Z" | 12,383 | 37 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-21T01:33:05Z" | ---
license: apache-2.0
---
|