---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
size_categories:
- 100K<n<1M
---
The original dataset can be found at https://www.openslr.org/145/.
- I made it for personal use, so the code might not be perfect.
## How to Use
Usage is almost the same as the LibriSpeech dataset module, since I referred to its source code.
Three types of transcripts are provided (see the field-inspection sketch after the loading example below):
- `text_normalized` : the transcript from the LibriSpeech ASR corpus
- `text`, `text_raw` : the transcripts from LibriSpeech-PC
```python
from datasets import load_dataset
from pprint import pprint
from IPython.display import display, Audio
import aiohttp
import os

# set the Hugging Face cache directory for the extracted raw files and the huggingface-cli token
os.environ['HF_HOME'] = "/data/to/download"
!export HF_HOME="/data/to/download"  # shell form when running in a notebook

# download the dataset
# if librispeech_asr is already in cache_dir, the same audio files are reused
libripc = load_dataset(
    "yoom618/librispeech_pc",
    "all",  # all, clean, other
    cache_dir="/data/to/download",
    trust_remote_code=True,
    # storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=7200)}},  # add if you need to increase the timeout
)

# check dataset info
print(libripc)
display(Audio(libripc['train.clean.100'][0]['audio']['array'],
              rate=libripc['train.clean.100'][0]['audio']['sampling_rate'],
              autoplay=False))
pprint(libripc['train.clean.100'][0]['audio'])
```
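Once the dataset is loaded, you can check which transcript columns a sample actually contains. A minimal sketch, assuming the field names listed above (`text_normalized`, `text`, `text_raw`) and reusing the `libripc` object from the example:

```python
# Show the columns of one example and print its transcript variants
# (field names are taken from the list above; treat them as assumptions)
sample = libripc['train.clean.100'][0]
print(sample.keys())  # list every available column

for field in ('text_normalized', 'text', 'text_raw'):
    if field in sample:
        print(f"{field}: {sample[field]}")
```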
## The number of samples in LibriSpeech-PC

- train
  - `train.clean.100`: 26,041 (2,498 out of 28,539 were dropped)
  - `train.clean.360`: 95,404 (8,610 out of 104,014 were dropped)
  - `train.other.500`: 134,679 (14,009 out of 148,688 were dropped)
- dev
  - `dev.clean`: 2,530 (173 out of 2,703 were dropped)
  - `dev.other`: 2,728 (136 out of 2,864 were dropped)
- test
  - `test.clean`: 2,417 (203 out of 2,620 were dropped)
  - `test.other`: 2,856 (83 out of 2,939 were dropped)
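The counts above can be checked directly against the loaded splits; a small sketch using the `libripc` object from the loading example:

```python
# Print the number of samples in each split of the loaded DatasetDict
for split_name, split in libripc.items():
    print(f"{split_name}: {split.num_rows:,}")
```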
## Citation Information

```bibtex
@article{meister2023librispeechpc,
  title={LibriSpeech-PC: Benchmark for Evaluation of Punctuation and Capitalization Capabilities of end-to-end ASR Models},
  author={A. Meister and M. Novikov and N. Karpov and E. Bakhturina and V. Lavrukhin and B. Ginsburg},
  journal={arXiv preprint arXiv:2310.02943},
  year={2023},
}
```