---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-3.0
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: LargeScaleASR
tags:
- robust-speech-recognition
- noisy-speech-recognition
- speech-recognition
configs:
- config_name: large
features:
- name: ID
dtype: string
- name: duration
dtype: float32
- name: wav
dtype:
audio:
sample_rate: 16000
decode: False
- name: spk_id
dtype: string
- name: sex
dtype: string
- name: text
dtype: string
data_files:
- split: train
path: large/train*
- split: dev
path: dev/dev*
- split: test
path: test/test*
- config_name: small
features:
- name: ID
dtype: string
- name: duration
dtype: float32
- name: wav
dtype:
audio:
sample_rate: 16000
decode: False
- name: spk_id
dtype: string
- name: sex
dtype: string
- name: text
dtype: string
data_files:
- split: train
path: small/train*
- split: dev
path: dev/dev*
- split: test
path: test/test*
- config_name: medium
features:
- name: ID
dtype: string
- name: duration
dtype: float32
- name: wav
dtype:
audio:
sample_rate: 16000
decode: False
- name: spk_id
dtype: string
- name: sex
dtype: string
- name: text
dtype: string
data_files:
- split: train
path: medium/train*
- split: dev
path: dev/dev*
- split: test
path: test/test*
---
# LargeScaleASR: 25,000 hours of transcribed and heterogeneous English speech recognition data for research and commercial use.
It is made of five subsets:
1. **large** contains 25,000 hours of read / spontaneous and clean / noisy transcribed speech.
2. **medium** contains 2,500 hours of read / spontaneous and clean / noisy transcribed speech.
3. **small** contains 250 hours of read / spontaneous and clean / noisy transcribed speech.
4. **dev** contains 15 hours (more details in the next section).
5. **test** contains 21 hours (more details in the next section).
The large split requires 4TB of storage (including HuggingFace extraction). The shards alone are 2TB.
Example:
```python
from datasets import load_dataset

# Pick one of the training configurations: 'small', 'medium' or 'large'.
ds = load_dataset('speechbrain/LargeScaleASR', 'small', num_proc=4)  # num_proc: CPU cores to use
print(ds['train'])

# Audio is stored as raw bytes; decode it manually, e.g. with torchaudio.
from io import BytesIO
import torchaudio

wav_tensor, sample_rate = torchaudio.load(BytesIO(ds["train"][0]["wav"]["bytes"]))
```
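Because the large configuration alone is measured in terabytes, streaming may be preferable to a full download. A minimal sketch using the standard `datasets` streaming mode (configuration and column names taken from the YAML header above):
```python
from datasets import load_dataset

# Stream samples on the fly instead of materialising the shards on disk.
ds = load_dataset('speechbrain/LargeScaleASR', 'large', split='train', streaming=True)

sample = next(iter(ds))
print(sample['ID'], sample['duration'], sample['text'])
```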
## Training recipe
A full conformer ASR training recipe is available [here](https://github.com/speechbrain/speechbrain/pull/2806).
## Data description (the following information is copied directly from the SpeechBrain data preparation README)
LargeScaleASR is a mix of six existing datasets with permissive licences. The way they are mixed
is described in the following table:
| Dataset | Hours Taken (large/medium/small/dev/test) | License |
| ------------- | ------------- | ------------- |
| VoxPopuli | 550/500/50/5/7 | CC0 |
| LibriHeavy | 11,000/500/50/0/0 | CC BY 4.0 |
| LibriSpeech (dev-/test-other) | 0/0/0/5/7 | CC BY 4.0 |
| YODAS | 6,100/500/50/1.5/1.5 | CC BY 3.0 |
| People's Speech | 5,900/500/50/1.5/1.5 | CC BY 4.0 |
| CommonVoice 18.0 | 1,660/500/50/5/7 | CC0 |
*For the dev and test splits, only data from the corresponding dev and test sets of each source dataset is used (i.e. nothing is extracted from the train sets, except for YODAS). For YODAS, we extract data from the en003 split and manually verify the audio/transcription pairs to form the dev/test partitions.*
More information about each dataset:
- [**voxpopuli**](https://arxiv.org/abs/2101.00390): we follow the standard SpeechBrain data preparation.
- [**LibriHeavy**](https://arxiv.org/html/2309.08105v2): samples are randomly selected, but we follow the standard data preparation.
- [**LibriSpeech**](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf): LibriSpeech is only used for the validation and test sets of LargeScaleASR. More precisely, we extract samples from *dev-other* and *test-other*, as they are the most challenging subsets.
- [**YODAS**](https://arxiv.org/abs/2406.00899): The YODAS dataset is unfortunately unreliable: its audio is crawled from YouTube, and a large share of it (almost half) is not in the advertised language. We used a [SpeechBrain language ID model](https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa) to make sure that we only integrate samples where people speak in English (see the sketch after this list). Transcriptions have also been heavily normalised (see the next section). We arbitrarily decided to use the *en000* and *en001* subsets of YODAS. Transcriptions may still be a bit noisy, which is why this dataset is excluded from the dev and test sets of LargeScaleASR.
- [**People's Speech**](https://huggingface.co/datasets/MLCommons/peoples_speech): Only the *clean* subset of this dataset is used in LargeScaleASR, as even these transcriptions already contain errors. This is why this dataset is excluded from the dev and test sets of LargeScaleASR.
- [**CommonVoice 18.0**](https://commonvoice.mozilla.org/en): We removed a few speakers that had too many samples (above 9,000) to avoid any bias. Aside from this, we only used samples coming from the *validated* CSV to ensure high transcription quality. Text was also heavily normalised (see the next section).
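As referenced in the YODAS item above, the language filtering can be reproduced with the released SpeechBrain language ID model. A minimal sketch, assuming a 16 kHz mono input file (`sample.wav` is a hypothetical path; the exact filtering pipeline used for LargeScaleASR is not published here):
```python
import torchaudio
from speechbrain.inference.classifiers import EncoderClassifier

# Public VoxLingua107 ECAPA language ID model referenced in this card.
lang_id = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-voxlingua107-ecapa",
    savedir="pretrained_models/lang-id-voxlingua107-ecapa",
)

# Hypothetical input; the model expects 16 kHz mono audio.
signal, sr = torchaudio.load("sample.wav")
out_prob, score, index, text_lab = lang_id.classify_batch(signal)

# Labels look like "en: English"; keep only samples predicted as English.
if text_lab[0].startswith("en"):
    print("keep this sample")
```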
### Text and audio normalisation
Some of the above datasets, in particular People's Speech, YODAS and CommonVoice, come with very little text normalisation. This is an important issue, as the pronunciation of unnormalised text is either incorrect or uncertain. We normalised all sentences down to a character set containing only the standard 26 letters of the Latin alphabet plus the apostrophe. Numerical values were converted to text using the [NeMo text processing WFST tool](https://github.com/NVIDIA/NeMo-text-processing). The rest of the text was filtered to remove symbols, YouTube annotations such as "applause", and many other elements. Sentences that remained too noisy (e.g. containing too many symbols) were simply removed. The text normalisation code can be found in *speechbrain.utils.text_normalisation*.
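The number-to-text conversion mentioned above can be illustrated with the NeMo text processing package. A minimal sketch, assuming English and the default WFST grammars (the example sentence is made up; this is not the exact configuration used for LargeScaleASR):
```python
# pip install nemo-text-processing
from nemo_text_processing.text_normalization.normalize import Normalizer

# WFST-based text normaliser: expands digits, times, currencies, etc. into words.
normalizer = Normalizer(input_case="cased", lang="en")
print(normalizer.normalize("The show starts at 10:30 and tickets cost $15."))
```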
Audio is embedded as raw bytes (decodable with soundfile). We chunked long recordings into smaller audio files based on the start and stop times provided by the manifests of the source datasets (this is necessary for HuggingFace). Language ID with a [SpeechBrain language ID model](https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa) was performed on YODAS.
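Since the audio bytes are soundfile-decodable, decoding can also be done without torchaudio. A minimal sketch, assuming a row fetched as in the loading example above:
```python
from io import BytesIO

import soundfile as sf
from datasets import load_dataset

ds = load_dataset("speechbrain/LargeScaleASR", "small", split="dev")

# 'wav' holds raw file bytes because the configs set decode: False.
audio, sample_rate = sf.read(BytesIO(ds[0]["wav"]["bytes"]))
print(audio.shape, sample_rate)  # sample rate should be 16000 per the config
```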
#### Referencing SpeechBrain
```
@article{speechbrainV1,
author = {Mirco Ravanelli and Titouan Parcollet and Adel Moumen and Sylvain de Langen and Cem Subakan and Peter Plantinga and Yingzhi Wang and Pooneh Mousavi and Luca Della Libera and Artem Ploujnikov and Francesco Paissan and Davide Borra and Salah Zaiem and Zeyu Zhao and Shucong Zhang and Georgios Karakasidis and Sung-Lin Yeh and Pierre Champion and Aku Rouhe and Rudolf Braun and Florian Mai and Juan Zuluaga-Gomez and Seyed Mahed Mousavi and Andreas Nautsch and Ha Nguyen and Xuechen Liu and Sangeet Sagar and Jarod Duret and Salima Mdhaffar and Ga{\"e}lle Laperri{\`e}re and Mickael Rouvier and Renato De Mori and Yannick Est{\`e}ve},
title = {Open-Source Conversational AI with SpeechBrain 1.0},
journal = {Journal of Machine Learning Research},
year = {2024},
volume = {25},
number = {333},
pages = {1--11},
url = {http://jmlr.org/papers/v25/24-0991.html}
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain