---
language:
- uk
pretty_name: UK-PODS
tags:
- podcasts
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
---
# uk-pods - a speech dataset of Ukrainian podcasts
## Preparation
1. Clone the dataset repository and extract the contents of the `clips.tar.gz` archive.
```
git clone https://huggingface.co/datasets/taras-sereda/uk-pods
cd uk-pods && tar -zxvf clips.tar.gz
```
2. To use these manifests for training or inference with NeMo [1], modify the `audio_filepath` entries to point to the absolute locations of the audio files extracted in the previous step (a Python alternative to `sed` is sketched after the commands below).
```
# data_root=<cloned_repo_dir> # e.g. /home/ubuntu/uk-pods
data_root=$(realpath .)
sed -i -e "s|\"audio_filepath\":\"|\"audio_filepath\":\"${data_root}\/|g" pods_train.json
sed -i -e "s|\"audio_filepath\":\"|\"audio_filepath\":\"${data_root}\/|g" pods_test.json
```
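If you prefer not to use `sed`, the same rewrite can be done in Python. This is a minimal sketch, assuming each manifest line is a JSON object whose `audio_filepath` is relative to the repository root; `data_root` below is resolved from the current directory:
```
import json
from pathlib import Path

data_root = Path(".").resolve()  # directory where clips.tar.gz was extracted

for manifest in ("pods_train.json", "pods_test.json"):
    entries = []
    with open(manifest) as fd:
        for line in fd:
            entry = json.loads(line)
            # Prepend the absolute data root to the relative audio path.
            entry["audio_filepath"] = str(data_root / entry["audio_filepath"])
            entries.append(json.dumps(entry, ensure_ascii=False))
    with open(manifest, "w") as fd:
        fd.write("\n".join(entries) + "\n")
```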
## Usage
1. Install the NeMo toolkit [1]:
```
pip install "nemo_toolkit[all]"
```
2. Run inference with **uk-pods-conformer** [2] on all files from the `pods_test.json` manifest (a sketch for scoring the resulting transcripts follows the snippet below):
```
import json

from nemo.collections.asr.models import EncDecCTCModelBPE

# Load the pretrained Conformer-CTC model from the Hugging Face Hub.
asr_model = EncDecCTCModelBPE.from_pretrained("taras-sereda/uk-pods-conformer")

# Collect absolute audio paths from the JSON-lines manifest.
audio_paths = []
with open('pods_test.json') as fd:
    for line in fd:
        audio_paths.append(json.loads(line)['audio_filepath'])

transcripts = asr_model.transcribe(audio_paths)
```
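To get a rough quality estimate, the predictions can be scored against the reference transcriptions. This is a hedged sketch, assuming each manifest entry carries a `text` field with the ground-truth transcription and that `transcribe` returns plain strings (newer NeMo releases may return hypothesis objects instead); it uses the third-party `jiwer` package (`pip install jiwer`) for word error rate:
```
import json

import jiwer

# Reference transcriptions from the manifest (assumes a 'text' field per line).
references = [json.loads(line)['text'] for line in open('pods_test.json')]

# 'transcripts' comes from the inference snippet above.
print(f"WER: {jiwer.wer(references, transcripts):.2%}")
```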
## Dataset statistics
```
Number of wav files: 34231
Total duration: 51.066 hours
MIN duration: 1.020 sec
MAX duration: 19.999 sec
MEAN duration: 5.370 sec
MEDIAN duration: 4.640 sec
```
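The figures above can be reproduced from the manifests. A minimal sketch, assuming each manifest line contains a `duration` field in seconds:
```
import json
import statistics

# Gather per-clip durations from both manifests.
durations = [json.loads(line)['duration']
             for manifest in ('pods_train.json', 'pods_test.json')
             for line in open(manifest)]

print(f"Number of wav files: {len(durations)}")
print(f"Total duration: {sum(durations) / 3600:.3f} hours")
print(f"MIN duration: {min(durations):.3f} sec")
print(f"MAX duration: {max(durations):.3f} sec")
print(f"MEAN duration: {statistics.mean(durations):.3f} sec")
print(f"MEDIAN duration: {statistics.median(durations):.3f} sec")
```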
## References
- [1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
- [2] [uk-pods-conformer ASR model](https://huggingface.co/taras-sereda/uk-pods-conformer)