
# Kazakh Speech Corpus (KSC) Dataset Card

This dataset card describes the Kazakh Speech Corpus (KSC), a large-scale, open-source speech corpus for the Kazakh language.

## Summary

The KSC contains approximately 332 hours of transcribed audio, comprising over 153,000 utterances spoken by participants from diverse regions, age groups, and genders. It is the largest publicly available Kazakh speech corpus, designed to advance speech and language processing for Kazakh, a low-resource language in the Turkic family. The data was crowdsourced via a web-based platform and rigorously checked by native Kazakh speakers to ensure high quality. Preliminary speech recognition experiments yielded promising results: a 2.8% character error rate (CER) and an 8.7% word error rate (WER) on the test set. An ESPnet recipe is provided for reproducible speech recognition experiments.
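For readers unfamiliar with these metrics, the sketch below shows how CER and WER can be computed with the open-source `jiwer` library. Note that `jiwer` is not part of the KSC release, and the sentence pair is purely illustrative, not taken from the corpus.

```python
# Minimal sketch of the two reported metrics, using the jiwer library.
# WER counts word-level edits; CER counts character-level edits.
import jiwer

reference = "бүгін ауа райы жақсы"   # ground-truth transcript (illustrative)
hypothesis = "бүгін ауа райы жаксы"  # ASR output with one character error

print(f"WER: {jiwer.wer(reference, hypothesis):.3f}")  # 1 of 4 words wrong -> 0.250
print(f"CER: {jiwer.cer(reference, hypothesis):.3f}")  # 1 of 20 chars wrong -> 0.050
```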

## Dataset Statistics

| Category         | Train   | Valid  | Test   | Total   |
|------------------|---------|--------|--------|---------|
| Duration (hours) | 318.4   | 7.1    | 7.1    | 332.6   |
| # Utterances     | 147,236 | 3,283  | 3,334  | 153,853 |
| # Words          | 1.61M   | 35.2k  | 35.8k  | 1.68M   |
| # Unique Words   | 157,191 | 13,525 | 13,959 | 160,041 |
| # Device IDs     | 1,554   | 29     | 29     | 1,612   |
| # Speakers       | -       | 29     | 29     | -       |
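As an illustration of how counts like those above could be reproduced, here is a hypothetical Python sketch. The tab-separated manifest format and the `train.tsv`/`valid.tsv`/`test.tsv` file names are assumptions for the example, not the corpus's documented layout.

```python
# Hypothetical sketch: recompute per-split statistics from a manifest.
# Assumes one TSV manifest per split with columns:
#   utterance_id <TAB> duration_seconds <TAB> transcript
# (the actual KSC distribution format may differ).
import csv

def split_stats(manifest_path: str) -> dict:
    hours, n_utts, n_words = 0.0, 0, 0
    vocab = set()
    with open(manifest_path, encoding="utf-8") as f:
        for utt_id, duration, transcript in csv.reader(f, delimiter="\t"):
            hours += float(duration) / 3600.0
            n_utts += 1
            words = transcript.split()
            n_words += len(words)
            vocab.update(words)
    return {
        "duration_hours": round(hours, 1),
        "utterances": n_utts,
        "words": n_words,
        "unique_words": len(vocab),
    }

for split in ("train", "valid", "test"):
    print(split, split_stats(f"{split}.tsv"))  # hypothetical file names
```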

## Validation and Test Set Speaker Details

| Category      | Valid (%) | Test (%) |
|---------------|-----------|----------|
| **Gender**    |           |          |
| Female        | 51.7      | 51.7     |
| Male          | 48.3      | 48.3     |
| **Age**       |           |          |
| 18-27         | 37.9      | 34.5     |
| 28-37         | 34.5      | 31.0     |
| 38-47         | 10.4      | 13.8     |
| 48 and above  | 17.2      | 20.7     |
| **Region**    |           |          |
| East          | 13.8      | 13.8     |
| West          | 20.7      | 17.2     |
| North         | 13.8      | 20.7     |
| South         | 37.9      | 41.4     |
| Center        | 13.8      | 6.9      |
| **Device**    |           |          |
| Phone         | 62.1      | 79.3     |
| Computer      | 37.9      | 20.7     |
| **Headphone** |           |          |
| Yes           | 20.7      | 17.2     |
| No            | 79.3      | 82.8     |


## GitHub Repository

https://github.com/IS2AI/ISSAI_SAIDA_Kazakh_ASR
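If the corpus is mirrored on the Hugging Face Hub (as this card's hosting suggests), it could be loaded roughly as follows. The repository id below is a placeholder, since the card does not state it; substitute the actual dataset id from the Hub page.

```python
# Hedged sketch of loading the corpus with the Hugging Face `datasets` library.
# "<hub-org>/<ksc-dataset-id>" is a placeholder, not a confirmed repository id.
from datasets import load_dataset

ksc = load_dataset("<hub-org>/<ksc-dataset-id>", split="test")
sample = ksc[0]
print(sample.keys())  # inspect available fields (audio, transcript, etc.)
```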
