---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
size_categories:
- 100K<n<1M
---

### The original dataset can be found at <https://www.openslr.org/145/>

- I made it for personal use, so the code might not be perfect.


### Citation Information

```
@article{meister2023librispeechpc,
  title={LibriSpeech-PC: Benchmark for Evaluation of Punctuation and Capitalization Capabilities of end-to-end ASR Models},
  author={A. Meister and M. Novikov and N. Karpov and E. Bakhturina and V. Lavrukhin and B. Ginsburg},
  journal={arXiv preprint arXiv:2310.02943},
  year={2023},
}
```


### How to Use

- Usage is almost the same as the [LibriSpeech](https://huggingface.co/datasets/openslr/librispeech_asr) dataset module, since this loader was written with reference to its [source code](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py).

- Three types of transcripts are provided (see the short snippet after the loading example below):
  - `text_normalized` : the transcript from LibriSpeech ASR
  - `text`, `text_raw` : the transcripts from LibriSpeech-PC




```python
from datasets import load_dataset
from pprint import pprint
from IPython.display import display, Audio
import aiohttp  # only needed if you uncomment the storage_options argument below
import os

# set the Hugging Face cache directory for the extracted raw files and the huggingface-cli token
os.environ['HF_HOME'] = "/data/to/download"


# download the dataset
# if librispeech_asr is already present in cache_dir, the same audio files are reused
libripc = load_dataset("yoom618/librispeech_pc",
                       "all",  # all, clean, other
                       cache_dir="/data/to/download",
                       trust_remote_code=True,
                       # storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=7200)}},  # add if you need to increase the timeout
                       )


# check dataset info
print(libripc)

# play the first training sample and inspect its audio dict
display(Audio(libripc['train.clean.100'][0]['audio']['array'],
              rate=libripc['train.clean.100'][0]['audio']['sampling_rate'],
              autoplay=False))
pprint(libripc['train.clean.100'][0]['audio'])

```
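
A minimal check of the three transcript fields described above, reusing the `libripc` object loaded in the previous snippet:

```python
# compare the three transcript variants for the same utterance
sample = libripc['train.clean.100'][0]
print(sample['text_normalized'])  # normalized transcript from LibriSpeech ASR
print(sample['text'])             # punctuated and capitalized transcript from LibriSpeech-PC
print(sample['text_raw'])         # raw transcript from LibriSpeech-PC
```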


### The number of samples in LibriSpeech-PC

- train
  - `train.clean.100` :  26,041  (  2,498 out of  28,539 were dropped )
  - `train.clean.360` :  95,404  (  8,610 out of 104,014 were dropped )
  - `train.other.500` : 134,679  ( 14,009 out of 148,688 were dropped )
- dev
  - `dev.clean`  : 2,530  ( 173 out of 2,703 were dropped )
  - `dev.other`  : 2,728  ( 136 out of 2,864 were dropped )
- test
  - `test.clean` : 2,417  ( 203 out of 2,620 were dropped )
  - `test.other` : 2,856  (  83 out of 2,939 were dropped )
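
To sanity-check these counts against a downloaded copy, the split sizes can be printed directly (a small sketch, again reusing the `libripc` object from the loading example above):

```python
# print the number of samples per split to compare with the list above
for split_name, split in libripc.items():
    print(f"{split_name}: {split.num_rows:,}")
```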