|
--- |
|
license: mit |
|
language: |
|
- en |
|
pretty_name: Speech Brown |
|
size_categories: |
|
- 10K<n<100K |
|
task_categories: |
|
- text-to-speech |
|
|
|
--- |
|
[arXiv](https://arxiv.org/abs/2412.13071) [GitHub](https://github.com/language-modeling-lab/CLASP)
|
|
|
## Dataset Summary |
|
|
|
**Speech Brown** is a comprehensive and diverse synthetic paired speech-text dataset spanning 15 categories, covering a wide range of topics from fiction to religion. It contains over 55,000 sentence-level samples.
|
|
|
To train the [CLASP](https://huggingface.co/llm-lab/CLASP) model, we created this dataset based on the Brown Corpus. The synthetic speech was generated using the [NVIDIA Tacotron 2](https://pytorch.org/hub/nvidia_deeplearningexamples_tacotron2/) text-to-speech model. |
|
|
|
For more information about our proposed model, please refer to this [paper](https://arxiv.org/abs/2412.13071). The dataset generation pipeline, along with code and usage instructions, is available on this [GitHub page](https://github.com/language-modeling-lab/CLASP). |
|
|
|
 |
|
|
|
## Dataset Statistics |
|
1. Total size: Approximately 30 GB. |
|
2. Number of samples: 55,173 pairs of speech and text. |
|
3. Average tokens per sample: 19.00. |
|
4. Maximum tokens in a sample: 48. |
|
5. Average characters per sample: 96.72. |
|
6. Number of unique tokens: 50,667.
|
7. Categories: The 15 categories are `adventure`, `belles_lettres`, `editorial`, `fiction`, `government`, `hobbies`, `humor`, `learned`, `lore`, `mystery`, `news`, `religion`, `reviews`, `romance`, and `science_fiction`.
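The per-sample statistics above can be recomputed from the metadata texts. A minimal sketch with hypothetical sentences standing in for the dataset's `text` fields, assuming simple whitespace tokenization (the reported token counts may use a different tokenizer):

```python
# Hypothetical sentences standing in for the dataset's `text` fields
texts = [
    "The jury said it found no irregularities.",
    "He walked toward the river at dawn.",
]

# Average tokens (whitespace-split) and characters per sample
avg_tokens = sum(len(t.split()) for t in texts) / len(texts)
avg_chars = sum(len(t) for t in texts) / len(texts)
print(avg_tokens, avg_chars)
```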
|
|
|
## Dataset Structure |
|
To ensure ease of use, the dataset is partitioned into 10 parts. Each part can be used independently if it meets the requirements of your task and model. |
|
|
|
### Metadata Files |
|
1. **global_metadata**: A JSON file containing metadata for all 55,173 samples. |
|
2. **localized_metadata**: A JSON file containing metadata for all samples, categorized into the 10 dataset partitions. |
|
|
|
### Metadata Fields |
|
1. **id**: The unique identifier for the sample. |
|
2. **audio_file_path**: The file path for the audio in the dataset. |
|
3. **category**: The category of the sample's text. |
|
4. **text**: The corresponding text of the audio file. |
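For illustration, the four fields above combine into one record per sample. A minimal sketch with hypothetical ids, paths, and texts (the real values come from the metadata files), showing how records might be grouped by category:

```python
from collections import defaultdict

# Hypothetical records mirroring the four metadata fields above;
# actual ids, paths, and texts in the dataset may differ.
samples = [
    {"id": "news_0001", "audio_file_path": "dataset_part1/news_0001.wav",
     "category": "news", "text": "The jury said it found no irregularities."},
    {"id": "fiction_0001", "audio_file_path": "dataset_part2/fiction_0001.wav",
     "category": "fiction", "text": "He walked toward the river at dawn."},
]

# Group records by category for quick per-category access
by_category = defaultdict(list)
for sample in samples:
    by_category[sample["category"]].append(sample)

print(sorted(by_category))  # ['fiction', 'news']
```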
|
|
|
## Usage Instructions |
|
|
|
To use this dataset, download the parts and metadata files as follows: |
|
|
|
#### Option 1: Manual Download |
|
Visit the [dataset repository](https://huggingface.co/datasets/llm-lab/SpeechBrown/tree/main) and download all `dataset_partX.zip` files and the `global_metadata.json` file. |
|
|
|
#### Option 2: Programmatic Download |
|
Use the `huggingface_hub` library to download the files programmatically: |
|
|
|
```python
from huggingface_hub import hf_hub_download
from zipfile import ZipFile
import json

# Download and extract all 10 dataset parts.
# hf_hub_download returns the path of the cached file, so extract from there.
for i in range(1, 11):
    zip_file_path = hf_hub_download(
        repo_id="llm-lab/SpeechBrown",
        filename=f"dataset_part{i}.zip",
        repo_type="dataset",
    )
    with ZipFile(zip_file_path, 'r') as zip_ref:
        zip_ref.extractall(f'dataset_part{i}')

# Download and load the metadata for all samples
metadata_file_path = hf_hub_download(
    repo_id="llm-lab/SpeechBrown",
    filename="global_metadata.json",
    repo_type="dataset",
)
with open(metadata_file_path, 'r') as f:
    metadata = json.load(f)
print(metadata.keys())
```
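Once a part is extracted, each sample's audio can be opened at the path given by its `audio_file_path` field. A minimal sketch using the standard-library `wave` module, assuming 16-bit PCM WAV files at 22.05 kHz (the sample rate Tacotron 2 outputs); here a tiny dummy WAV is written first so the sketch is self-contained:

```python
import wave
import struct

# Hypothetical path; real files live under the extracted dataset_partX folders.
# Write a tiny dummy WAV so this sketch runs without the dataset.
path = "example.wav"
with wave.open(path, "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 16-bit PCM
    w.setframerate(22050)  # Tacotron 2 generates 22.05 kHz audio
    w.writeframes(struct.pack("<4h", 0, 100, -100, 0))

# Read it back the same way you would read a dataset sample
with wave.open(path, "rb") as w:
    sample_rate = w.getframerate()
    n_frames = w.getnframes()
print(sample_rate, n_frames)
```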
|
|
|
## Citations |
|
If you find our paper, code, data, or models useful, please cite the paper: |
|
``` |
|
@misc{abootorabi2024claspcontrastivelanguagespeechpretraining, |
|
title={CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval}, |
|
author={Mohammad Mahdi Abootorabi and Ehsaneddin Asgari}, |
|
year={2024}, |
|
eprint={2412.13071}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL}, |
|
url={https://arxiv.org/abs/2412.13071}, |
|
} |
|
``` |
|
|
|
## Contact |
|
If you have questions, please email [email protected] or [email protected]. |