---
language:
- uk
pretty_name: UK-PODS
tags:
- podcasts
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
---
# UK-PODS

`uk-pods` is a speech dataset of Ukrainian podcasts.
## Preparation
- Clone the dataset repository and extract the content of the `clips.tar.gz` archive:

```bash
git clone https://huggingface.co/datasets/taras-sereda/uk-pods
cd uk-pods && tar -zxvf clips.tar.gz
```
- To use these manifests for training/inference with NeMo [1], modify the `audio_filepath` entries so that they point to the absolute locations of the audio files extracted in the previous step:

```bash
# data_root=<cloned_repo_dir> # /home/ubuntu/uk-pods
data_root=$(realpath .)
sed -i -e "s|\"audio_filepath\":\"|\"audio_filepath\":\"${data_root}\/|g" pods_train.json
sed -i -e "s|\"audio_filepath\":\"|\"audio_filepath\":\"${data_root}\/|g" pods_test.json
```
## Usage
- Install the NeMo toolkit:

```bash
pip install nemo_toolkit['all']
```
- Run inference with uk-pods-conformer [2] on all files from the `pods_test.json` manifest:
```python
import json
from nemo.collections.asr.models import EncDecCTCModelBPE

asr_model = EncDecCTCModelBPE.from_pretrained("taras-sereda/uk-pods-conformer")

audio_paths = []
with open('pods_test.json') as fd:
    for line in fd:
        audio_paths.append(json.loads(line)['audio_filepath'])

transcripts = asr_model.transcribe(audio_paths)
```
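If the test manifest also stores reference transcriptions (NeMo manifests conventionally keep them under a `text` key; whether uk-pods does is an assumption here), word error rate can be computed with NeMo's `word_error_rate` helper:

```python
import json
from nemo.collections.asr.metrics.wer import word_error_rate

# Assumption: each manifest line carries a "text" field with the reference transcription.
references = []
with open('pods_test.json') as fd:
    for line in fd:
        references.append(json.loads(line)['text'])

# Depending on the NeMo version, transcribe() returns plain strings or Hypothesis objects.
hypotheses = [t.text if hasattr(t, "text") else t for t in transcripts]
print("WER:", word_error_rate(hypotheses=hypotheses, references=references))
```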
## Dataset statistics
- Number of wav files: 34231
- Total duration: 51.066 hours
- MIN duration: 1.020 sec
- MAX duration: 19.999 sec
- MEAN duration: 5.370 sec
- MEDIAN duration: 4.640 sec
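These numbers can be re-derived from the manifests as a quick consistency check, assuming each line carries a `duration` field in seconds (the usual NeMo manifest convention):

```python
import json
import statistics

durations = []
for manifest in ("pods_train.json", "pods_test.json"):
    with open(manifest) as fd:
        for line in fd:
            durations.append(json.loads(line)["duration"])

print("Number of wav files:", len(durations))
print(f"Total duration: {sum(durations) / 3600:.3f} hours")
print(f"MIN duration: {min(durations):.3f} sec")
print(f"MAX duration: {max(durations):.3f} sec")
print(f"MEAN duration: {statistics.mean(durations):.3f} sec")
print(f"MEDIAN duration: {statistics.median(durations):.3f} sec")
```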