---
license: cc-by-4.0
configs:
- config_name: manifest
data_files:
- split: 22khz
path: "22khz/manifest_22khz.json"
- split: 44khz
path: "44khz/manifest_44khz.json"
- config_name: chapters
data_files:
- split: 22khz
path: "22khz/chapters_22khz.json"
- split: 44khz
path: "44khz/chapters_44khz.json"
---
# HiFiTTS-2: A Large-Scale High Bandwidth Speech Dataset
<style>
img {
display: inline-table;
vertical-align: middle;
margin: 0;
padding: 0;
}
</style>
## Dataset Description
This repository contains the metadata for HiFiTTS-2, a large-scale speech dataset derived from LibriVox audiobooks. For more details, please refer to [our paper](https://arxiv.org/abs/2506.04152).
The dataset contains metadata for approximately 36.7k hours of audio from 5k speakers that can be downloaded from LibriVox at a 48 kHz sampling rate.
The metadata contains estimated bandwidth, which can be used to infer the original sampling rate the audio was recorded at. The base dataset is filtered for a bandwidth appropriate for training speech models at 22 kHz. We also provide a precomputed subset with 31.7k hours appropriate for 44 kHz training. Users can modify the download script to use any sampling rate and bandwidth threshold which might be more appropriate for their work.
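As an illustration of how such a custom filter might look, the sketch below keeps only chapters whose estimated bandwidth supports a given target sampling rate. The `bandwidth` field matches the chapter manifest format described later in this card; the record contents and threshold value are hypothetical examples, not values from the dataset.

```python
# Sketch: filter chapter metadata by estimated bandwidth before download.
# A chapter is usable at a target sampling rate if its bandwidth reaches
# the Nyquist frequency (half the target rate).
def filter_by_bandwidth(chapters, min_bandwidth_hz):
    """Keep only chapters whose estimated bandwidth meets the threshold."""
    return [c for c in chapters if c["bandwidth"] >= min_bandwidth_hz]

# Hypothetical example records (field names follow the chapter manifest).
chapters = [
    {"url": "https://example.org/ch1.mp3", "bandwidth": 10500.0},
    {"url": "https://example.org/ch2.mp3", "bandwidth": 21000.0},
]
# Nyquist frequency for 22.05 kHz training audio is 11025 Hz.
kept = filter_by_bandwidth(chapters, 11025.0)
```

The same pattern works for any target rate; only the threshold changes.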
LibriVox audiobooks are not redistributed on Hugging Face. All audio in the dataset can be downloaded from LibriVox, following the instructions below.
### Frequently Asked Questions
- Downloading the 22 kHz version will require approximately 2.8TB of disk space. Downloading the 44 kHz version will require approximately 4.0TB of disk space.
- During download, you may see warning messages from "libmpg123", such as `[src/libmpg123/id3.c:INT123_id3_to_utf8():394] warning: Weird tag size 119 for encoding 1 - I will probably trim too early or something but I think the MP3 is broken.` These warnings can be safely ignored.
- By default, the script will download audio files into the workspace directory under *{workspace_dir}/audio_22khz*. The download will ignore HTTP errors and store information for any failed downloads in *{workspace_dir}/errors_22khz.json*. A new manifest will be created at *{workspace_dir}/manifest_filtered_22khz.json* with utterances from failed audiobooks removed. You can override the default behavior by modifying the [config.yaml file](https://github.com/NVIDIA/NeMo-speech-data-processor/blob/main/dataset_configs/english/hifitts2/config_22khz.yaml) in your local SDP repository.
- If you want to retry the download for failed audiobooks, rerun the script with the output *errors_22khz.json* file:
```bash
python /home/NeMo-speech-data-processor/main.py \
--config-path="/home/NeMo-speech-data-processor/dataset_configs/english/hifitts2" \
--config-name="config_22khz.yaml" \
workspace_dir="/home/hifitts2" \
chapter_filename="/home/hifitts2/errors_22khz.json" \
max_workers=8
```
## Dataset Format
The dataset contains an utterance-level manifest with these fields:
- **audio_filepath**: Relative path where utterance is stored
- **speaker**: LibriVox speaker ID
- **set**: Dataset partition, either "train", "test_seen", "dev_seen", "test_unseen", or "dev_unseen"
- **duration**: Duration of utterance
- **bandwidth**: Estimated bandwidth of audiobook chapter containing this utterance
- **speaker_count**: Number of speakers detected in this utterance
- **wer**: ASR word error rate of *normalized_text*
- **cer**: ASR character error rate of *normalized_text*
- **text_source**: Data source text was taken from, either "book" or "mls"
- **text**: Original data source transcription
- **normalized_text**: Transcription output by text processing pipeline
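As a quick sketch of working with these fields, the snippet below sums utterance durations per speaker. It assumes the manifest is in the NeMo-style JSON Lines format (one utterance object per line); adjust the parsing if the file is a single JSON document. The sample records and file paths are hypothetical.

```python
import json
from collections import defaultdict

def total_duration_by_speaker(lines):
    """Sum utterance durations (seconds) per LibriVox speaker ID."""
    totals = defaultdict(float)
    for line in lines:
        utt = json.loads(line)
        totals[utt["speaker"]] += utt["duration"]
    return dict(totals)

# Hypothetical manifest lines; in practice, iterate over the open file.
sample = [
    '{"audio_filepath": "audio_22khz/92/utt_0.flac", "speaker": 92, "duration": 3.2}',
    '{"audio_filepath": "audio_22khz/92/utt_1.flac", "speaker": 92, "duration": 4.1}',
]
per_speaker = total_duration_by_speaker(sample)
```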
The dataset contains an audiobook chapter-level manifest with these fields:
- **url**: Download URL for the LibriVox audiobook chapter
- **chapter_filepath**: Relative path where audiobook chapter is stored
- **duration**: Duration of chapter
- **bandwidth**: Bandwidth estimated using the first 30 seconds of the chapter
- **utterances**: List of utterance metadata with the following fields
- **utterances.audio_filepath**: Relative path where utterance is stored
- **utterances.offset**: Offset of utterance within the chapter
- **utterances.duration**: Duration of utterance
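The `offset` and `duration` fields let you slice utterances directly out of a decoded chapter waveform. A minimal helper, assuming the chapter has already been decoded to a sample array at a known sampling rate (the function name is illustrative):

```python
def utterance_sample_range(offset_sec, duration_sec, sample_rate):
    """Convert an utterance's offset/duration (in seconds) into
    start/end sample indices within the decoded chapter waveform."""
    start = int(round(offset_sec * sample_rate))
    end = start + int(round(duration_sec * sample_rate))
    return start, end

# An utterance starting 1.5 s into a 22.05 kHz chapter, lasting 2.0 s:
start, end = utterance_sample_range(1.5, 2.0, 22050)
# The utterance audio is then chapter_samples[start:end].
```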
Bandwidth is estimated from the first 30 seconds of each audiobook chapter using the approach from [`estimate_bandwidth.py`](https://github.com/NVIDIA/NeMo-speech-data-processor/blob/main/sdp/processors/nemo/estimate_bandwidth.py#L65) in the [Speech Data Processor (SDP) Toolkit](https://github.com/NVIDIA/NeMo-speech-data-processor). The bandwidth $f_{\text{max}}$ is estimated from the mean power spectrum as the highest frequency whose level is at least -50 dB relative to the spectral peak, namely,
$$f_{\text{max}} = \max\left\{f \in [0, f_{\text{Nyquist}}] \, \bigg|\, 10 \log_{10} \left(\frac{P(f)}{P_{\text{peak}}}\right) \geq -50\, \text{dB}\right\}$$
where $P(f)$ is the mean power spectral density and $P_{\text{peak}}$ its peak value.
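This estimate can be sketched as follows. Note this is a simplified reimplementation of the idea (rectangular framing, no windowing or overlap), not the exact SDP code; the FFT size and function name are illustrative.

```python
import numpy as np

def estimate_bandwidth(audio, sample_rate, n_fft=512, threshold_db=-50.0):
    """Estimate bandwidth as the highest frequency whose mean power is
    within `threshold_db` of the spectral peak (simplified sketch)."""
    # Split the signal into non-overlapping frames and average the
    # per-frame power spectra to get the mean power spectrum P(f).
    n_frames = len(audio) // n_fft
    frames = audio[: n_frames * n_fft].reshape(n_frames, n_fft)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mean_power = power.mean(axis=0)
    # Level in dB relative to the spectral peak (clamped to avoid log(0)).
    ratio = np.maximum(mean_power / mean_power.max(), 1e-12)
    level_db = 10.0 * np.log10(ratio)
    # Highest frequency bin still above the threshold.
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    above = freqs[level_db >= threshold_db]
    return float(above.max()) if above.size else 0.0

# Sanity check: a pure 4 kHz tone at 16 kHz should estimate near 4 kHz.
sr = 16000
tone = np.cos(2 * np.pi * 4000 * np.arange(sr) / sr)
bw = estimate_bandwidth(tone, sr)
```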
## Download Instructions
1. Download the manifest and chapters JSON files corresponding to your desired sampling rate (e.g. *manifest_22khz.json* and *chapters_22khz.json*) from this Hugging Face repository. Copy them into a workspace directory (in this example, */home/hifitts2*).
2. Install NeMo-speech-data-processor (SDP) using the *Installation* instructions on https://github.com/NVIDIA/NeMo-speech-data-processor.
3. Run the SDP script to download the dataset to local disk.
```bash
python /home/NeMo-speech-data-processor/main.py \
--config-path="/home/NeMo-speech-data-processor/dataset_configs/english/hifitts2" \
--config-name="config_22khz.yaml" \
workspace_dir="/home/hifitts2" \
max_workers=8
```
*max_workers* is the number of threads used to download the data. To download the 44 kHz dataset, specify *config_44khz.yaml*.
Please see the [FAQs](#frequently-asked-questions) for further help with the download, or raise an issue on the Community tab.
## Dataset Owner(s)
NVIDIA Corporation
## Dataset Creation Date
June 2025
## License/Terms of Use
GOVERNING TERMS: This dataset is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0) available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Citation
If you find this dataset useful, please cite:
```bibtex
@inproceedings{rlangman2025hifitts2,
title={HiFiTTS-2: A Large-Scale High Bandwidth Speech Dataset},
author={Ryan Langman and Xuesong Yang and Paarth Neekhara and Shehzeen Hussain and Edresson Casanova and Evelina Bakhturina and Jason Li},
booktitle={Interspeech},
year={2025},
}
```