---
task_categories:
- automatic-speech-recognition
pretty_name: MangoSpeech
configs:
- config_name: sth
data_files: "data/sth.parquet"
- config_name: opodcast
data_files: "data/opodcast.parquet"
---
# The list of all subsets in the dataset
Each subset is generated by splitting videos from a particular Ukrainian YouTube channel (see the loading sketch after this list):
- the channel "О! ПОДКАСТ" corresponds to the "opodcast" subset
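Because each subset is also declared as a config in the YAML header above, it can presumably be loaded by its config name as well. A minimal sketch (the config-name form is an alternative to the `data_files` approach shown in the next section):
```
>>> from datasets import load_dataset
>>> # load the "opodcast" subset via its config name
>>> # (assumes the Hub picks up the configs declared in the YAML header)
>>> opodcast = load_dataset("Zarakun/youtube_ua_subtitles_test", "opodcast")
```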
# Loading a particular subset
```
>>> from datasets import load_dataset
>>> data_files = {"train": "data/<your_subset>.parquet"}
>>> data = load_dataset("Zarakun/youtube_ua_subtitles_test", data_files=data_files)
>>> data
DatasetDict({
    train: Dataset({
        features: ['audio', 'rate', 'duration'],
        num_rows: 4505
    })
})
```
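Individual rows can then be inspected as usual. A minimal sketch, assuming `data` was loaded as above and that the `audio` column holds the audio payload alongside the `rate` and `duration` values listed in the output:
```
>>> sample = data["train"][0]
>>> sample["rate"], sample["duration"]  # sampling rate and clip duration
>>> audio = sample["audio"]             # audio payload; exact format depends on how the column was stored
```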