---
annotations_creators:
  - crowdsourced
  - machine-generated
language_creators:
  - crowdsourced
  - machine-generated
language:
  - en
license:
  - cc-by-3.0
  - cc-by-4.0
multilinguality:
  - monolingual
size_categories:
  - 1T<n
task_categories:
  - automatic-speech-recognition
task_ids: []
pretty_name: LargeScaleASR
tags:
  - robust-speech-recognition
  - noisy-speech-recognition
  - speech-recognition
configs:
  - config_name: large
    features:
      - name: ID
        dtype: string
      - name: duration
        dtype: float32
      - name: wav
        dtype:
          audio:
            sample_rate: 16000
            decode: false
      - name: spk_id
        dtype: string
      - name: sex
        dtype: string
      - name: text
        dtype: string
    data_files:
      - split: train
        path: large/train*
      - split: dev
        path: dev/dev*
      - split: test
        path: test/test*
  - config_name: small
    features:
      - name: ID
        dtype: string
      - name: duration
        dtype: float32
      - name: wav
        dtype:
          audio:
            sample_rate: 16000
            decode: false
      - name: spk_id
        dtype: string
      - name: sex
        dtype: string
      - name: text
        dtype: string
    data_files:
      - split: train
        path: small/train*
      - split: dev
        path: dev/dev*
      - split: test
        path: test/test*
  - config_name: medium
    features:
      - name: ID
        dtype: string
      - name: duration
        dtype: float32
      - name: wav
        dtype:
          audio:
            sample_rate: 16000
            decode: false
      - name: spk_id
        dtype: string
      - name: sex
        dtype: string
      - name: text
        dtype: string
    data_files:
      - split: train
        path: medium/train*
      - split: dev
        path: dev/dev*
      - split: test
        path: test/test*
---

LargeScaleASR: 25,000 hours of transcribed and heterogeneous English speech recognition data for research and commercial use.

Made of the following subsets:

  1. large contains 25,000 hours of read / spontaneous and clean / noisy transcribed speech.
  2. medium contains 2,500 hours of read / spontaneous and clean / noisy transcribed speech.
  3. small contains 250 hours of read / spontaneous and clean / noisy transcribed speech.
  4. dev contains 15 hours (more details in the next section).
  5. test contains 21 hours (more details in the next section).

The large split requires 4 TB of storage (including HuggingFace extraction). The shards alone are 2 TB.

Example:

from datasets import load_dataset

# Pick one of the 'small', 'medium' or 'large' configurations.
ds = load_dataset('speechbrain/LargeScaleASR', 'small', num_proc=4)  # num_proc: number of CPU cores to use
print(ds['train'])

from io import BytesIO
import torchaudio

# "wav" contains raw bytes; torchaudio returns the waveform and its sample rate.
wav_tensor, sample_rate = torchaudio.load(BytesIO(ds["train"][0]["wav"]["bytes"]))
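
Because the large configuration alone weighs roughly 2 TB of shards, it can be convenient to stream the data instead of materialising it on disk. Below is a minimal sketch using the standard streaming=True option of datasets and decoding the raw bytes with soundfile (as mentioned in the audio normalisation section); the printed columns simply follow the features declared in the configs above.

```python
import io

import soundfile as sf
from datasets import load_dataset

# Stream the shards instead of downloading ~2 TB up front.
streamed = load_dataset("speechbrain/LargeScaleASR", "large", streaming=True)

for sample in streamed["train"]:
    # "wav" holds the undecoded audio bytes (decode: false in the config above).
    audio, sr = sf.read(io.BytesIO(sample["wav"]["bytes"]))
    print(sample["ID"], sample["duration"], sample["text"], audio.shape, sr)
    break  # inspect only the first sample
```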

Training recipe

A full conformer ASR training recipe is available here.

Data description (the following information is copied directly from the SpeechBrain data preparation README)

LargeScaleASR is a mix of five existing datasets with permissive licences. The way they are mixed is described in the following table:

| Dataset | Hours taken (large / medium / small / dev / test) | License |
| --- | --- | --- |
| VoxPopuli | 550 / 500 / 50 / 5 / 7 | CC0 |
| LibriHeavy | 11,000 / 500 / 50 / 0 / 0 | CC BY 4.0 |
| Librispeech (dev-/test-other) | 0 / 0 / 0 / 5 / 7 | CC BY 4.0 |
| YODAS | 6,100 / 500 / 50 / 1.5 / 1.5 | CC BY 3.0 |
| People's Speech | 5,900 / 500 / 50 / 1.5 / 1.5 | CC BY 4.0 |
| CommonVoice 18.0 | 1,660 / 500 / 50 / 5 / 7 | CC0 |

For the dev and test splits, only data from the corresponding dev and test sets of each dataset is used (i.e. nothing is taken from the train sets, except for YODAS). For YODAS, we extract data from the en003 split and manually verify the audio/transcription pairs to form the dev/test partitions.

More information about each dataset is given below:

  • VoxPopuli: we follow the standard SpeechBrain data preparation.
  • LibriHeavy: samples are randomly selected, but we follow the standard data preparation.
  • Librispeech: Librispeech is only used for the validation and test sets of LargeScaleASR. More precisely, we extract samples from dev-other and test-other as they are the most challenging subsets.
  • YODAS: The YODAS dataset is unfortunately unreliable. Its audio is crawled from YouTube, and a large fraction of it (almost half) is not in the expected language. We used a SpeechBrain language-ID model to make sure that we only integrate samples where people speak English. Transcriptions have also been heavily normalised (see next section). We arbitrarily decided to use the en000 and en001 subsets of YODAS for training. Because the transcriptions may still be a bit noisy, the YODAS data used in the dev and test sets (from the en003 split) was verified manually.
  • People's Speech: Only the clean subset of this dataset is used in LargeScaleASR, as its transcriptions already contain errors. This is why this dataset is excluded from the dev and test sets of LargeScaleASR.
  • CommonVoice 18.0: We removed the few speakers that had too many samples (above 9,000 samples) to avoid any speaker bias; a small sketch of this filtering is given after this list. Aside from this, we only used samples coming from the validated csv to ensure good transcription quality. Text was also heavily normalised (see next section).
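
The CommonVoice speaker filtering mentioned above could look roughly like the following. This is only a sketch: the manifest path and the client_id column name are assumptions based on the standard CommonVoice release format, and only the 9,000-sample threshold comes from the description above.

```python
import pandas as pd

# Assumed path and layout: CommonVoice ships a tab-separated "validated" manifest
# with one row per clip and a per-speaker "client_id" column.
cv = pd.read_csv("commonvoice/en/validated.tsv", sep="\t")

# Count clips per speaker and drop speakers with more than 9,000 samples.
counts = cv["client_id"].value_counts()
kept_speakers = counts[counts <= 9000].index
cv_filtered = cv[cv["client_id"].isin(kept_speakers)]

print(f"kept {len(cv_filtered)}/{len(cv)} clips from "
      f"{len(kept_speakers)}/{cv['client_id'].nunique()} speakers")
```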

Text and audio normalisation

Some of the above datasets, in particular People's Speech, YODAS and CommonVoice, come with very little normalisation. This is an important issue, as the pronunciation of unnormalised text is either incorrect or uncertain. We normalised all the sentences so that they only contain the 26 standard letters of the Latin alphabet plus the apostrophe ("'"). Numerical values were converted to text using the NeMo text processing WFST tool. The rest of the text was filtered to remove symbols, YouTube annotations such as "applause", and many other elements. Sentences that were too noisy (e.g. containing too many symbols) were simply removed. The text normalisation can be found in speechbrain.utils.text_normalisation.
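
The actual rules live in speechbrain.utils.text_normalisation and are not reproduced here. The snippet below is only a rough sketch of the kind of filtering described above, assuming the WFST tool in question is NeMo's nemo_text_processing normalizer; the regular expressions are illustrative, not the real ones.

```python
import re

# Assumption: the WFST number normaliser is NeMo's (pip install nemo_text_processing).
from nemo_text_processing.text_normalization.normalize import Normalizer

normalizer = Normalizer(input_case="cased", lang="en")

def normalise(text: str) -> str:
    # Drop bracketed annotations such as "[applause]" or "(laughs)".
    text = re.sub(r"\[[^\]]*\]|\([^)]*\)", " ", text)
    # Spell out numbers, dates, currency, etc.
    text = normalizer.normalize(text)
    # Keep only the 26 letters and the apostrophe (casing is an arbitrary choice here).
    text = re.sub(r"[^a-z' ]+", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

print(normalise("He paid $5 [applause] for 2 tickets!"))
```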

Audio is embedded as raw bytes that can be decoded with soundfile. We chunked long recordings into smaller audio files based on the start and stop times given in the manifests of the original datasets (this is necessary for HuggingFace). Language identification with a SpeechBrain language-ID model was performed on YODAS; a sketch of this step is given below.
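
The language-identification pass on YODAS can be approximated with one of SpeechBrain's pretrained language-ID models. The model choice (speechbrain/lang-id-voxlingua107-ecapa) and the file name below are assumptions made for illustration; the card only states that a SpeechBrain language-ID model was used.

```python
import torchaudio
from speechbrain.inference.classifiers import EncoderClassifier

# Assumed model: the VoxLingua107 ECAPA-TDNN language-ID model.
lang_id = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-voxlingua107-ecapa",
    savedir="pretrained_models/lang-id-voxlingua107-ecapa",
)

signal, sr = torchaudio.load("yodas_chunk.wav")  # hypothetical YODAS chunk
# classify_batch returns (posteriors, best scores, best indices, text labels).
out_prob, score, index, text_lab = lang_id.classify_batch(signal)

# Keep the sample only if it is identified as English.
print("keep" if text_lab[0].startswith("en") else "discard", text_lab[0])
```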

Referencing SpeechBrain

@article{speechbrainV1,
  author  = {Mirco Ravanelli and Titouan Parcollet and Adel Moumen and Sylvain de Langen and Cem Subakan and Peter Plantinga and Yingzhi Wang and Pooneh Mousavi and Luca Della Libera and Artem Ploujnikov and Francesco Paissan and Davide Borra and Salah Zaiem and Zeyu Zhao and Shucong Zhang and Georgios Karakasidis and Sung-Lin Yeh and Pierre Champion and Aku Rouhe and Rudolf Braun and Florian Mai and Juan Zuluaga-Gomez and Seyed Mahed Mousavi and Andreas Nautsch and Ha Nguyen and Xuechen Liu and Sangeet Sagar and Jarod Duret and Salima Mdhaffar and Ga{{\"e}}lle Laperri{{\`e}}re and Mickael Rouvier and Renato De Mori and Yannick Est{{\`e}}ve},
  title   = {Open-Source Conversational AI with SpeechBrain 1.0},
  journal = {Journal of Machine Learning Research},
  year    = {2024},
  volume  = {25},
  number  = {333},
  pages   = {1--11},
  url     = {http://jmlr.org/papers/v25/24-0991.html}
}

About SpeechBrain

SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.

Website: https://speechbrain.github.io/

GitHub: https://github.com/speechbrain/speechbrain