---
license: mit  
language:  
- en  
pretty_name: Speech Brown  
size_categories:  
- 10K<n<100K  
task_categories:  
- text-to-speech  

---
[![arXiv](https://img.shields.io/badge/arXiv-Paper-b31b1b.svg)](https://arxiv.org/abs/2412.13071) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/language-modeling-lab/CLASP)

## Dataset Summary

**Speech Brown** is a comprehensive, diverse, synthetic paired speech-text dataset spanning 15 categories and covering a wide range of topics, from fiction to religion. It consists of over 55,000 sentence-level samples.  

To train the [CLASP](https://huggingface.co/llm-lab/CLASP) model, we created this dataset based on the Brown Corpus. The synthetic speech was generated using the [NVIDIA Tacotron 2](https://pytorch.org/hub/nvidia_deeplearningexamples_tacotron2/) text-to-speech model.  
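
For reference, Tacotron 2 is available through PyTorch Hub. The sketch below follows the Hub example; pairing it with the WaveGlow vocoder and all settings shown are illustrative assumptions, not the exact generation pipeline used to build this dataset.

```python
import torch

# Load Tacotron 2 (text -> mel spectrogram) and WaveGlow (mel -> waveform)
# from PyTorch Hub. GPU inference is assumed, as in the Hub example.
tacotron2 = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub", "nvidia_tacotron2", model_math="fp32")
tacotron2 = tacotron2.to("cuda").eval()

waveglow = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub", "nvidia_waveglow", model_math="fp32")
waveglow = waveglow.remove_weightnorm(waveglow).to("cuda").eval()

utils = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub", "nvidia_tts_utils")
sequences, lengths = utils.prepare_input_sequence(["The quick brown fox jumps over the lazy dog."])

with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)
    audio = waveglow.infer(mel)

# Tacotron 2 / WaveGlow produce 22,050 Hz audio.
waveform = audio[0].data.cpu().numpy()
```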

For more information about our proposed model, please refer to this [paper](https://arxiv.org/abs/2412.13071). The dataset generation pipeline, along with code and usage instructions, is available on this [GitHub page](https://github.com/language-modeling-lab/CLASP).  

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/5dy1Cb3-ZmGytf3QbQN9a.png)  

## Dataset Statistics
1. Total size: Approximately 30 GB.  
2. Number of samples: 55,173 pairs of speech and text.  
3. Average tokens per sample: 19.00.  
4. Maximum tokens in a sample: 48.  
5. Average characters per sample: 96.72.  
6. Number of unique tokens: 50,667.  
7. Categories (15): `adventure`, `belles_lettres`, `editorial`, `fiction`, `government`, `hobbies`, `humor`, `learned`, `lore`, `mystery`, `news`, `religion`, `reviews`, `romance`, `science_fiction`.  

## Dataset Structure
For ease of use, the dataset is partitioned into 10 parts; each part can be used independently if it meets the requirements of your task and model.  

### Metadata Files
1. **global_metadata**: A JSON file containing metadata for all 55,173 samples.  
2. **localized_metadata**: A JSON file containing metadata for all samples, categorized into the 10 dataset partitions.  

### Metadata Fields
1. **id**: The unique identifier for the sample.  
2. **audio_file_path**: The file path for the audio in the dataset.  
3. **category**: The category of the sample's text.  
4. **text**: The corresponding text of the audio file.
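
For illustration, inspecting one entry might look like the sketch below; the top-level layout of the JSON file is an assumption here, so adjust the lookup to the actual structure:

```python
import json

# Hypothetical sketch: print the fields of a single sample.
# Assumes global_metadata.json is either a dict of entries or a list of
# entries; the real layout may differ.
with open("global_metadata.json", "r") as f:
    metadata = json.load(f)

entry = next(iter(metadata.values())) if isinstance(metadata, dict) else metadata[0]
for field in ("id", "audio_file_path", "category", "text"):
    print(field, ":", entry[field])
```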

## Usage Instructions

To use this dataset, download the parts and metadata files as follows:

### Option 1: Manual Download
Visit the [dataset repository](https://huggingface.co/datasets/llm-lab/SpeechBrown/tree/main) and download all `dataset_partX.zip` files and the `global_metadata.json` file.

### Option 2: Programmatic Download
Use the `huggingface_hub` library to download the files programmatically:

```python
from huggingface_hub import hf_hub_download
from zipfile import ZipFile
import json

# Download and extract all 10 dataset parts.
# hf_hub_download returns the path of the cached file, so extract from
# that path rather than from the current working directory.
for i in range(1, 11):
    zip_path = hf_hub_download(
        repo_id="llm-lab/SpeechBrown",
        filename=f"dataset_part{i}.zip",
        repo_type="dataset",
    )
    with ZipFile(zip_path, "r") as zip_ref:
        zip_ref.extractall(f"dataset_part{i}")

# Download the global metadata file.
metadata_file_path = hf_hub_download(
    repo_id="llm-lab/SpeechBrown",
    filename="global_metadata.json",
    repo_type="dataset",
)

with open(metadata_file_path, "r") as f:
    metadata = json.load(f)
print(metadata.keys())
```
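
Once all parts are extracted, each sample's audio can be paired with its transcript through the metadata. A minimal sketch, assuming `librosa` is installed, that the extracted directory layout matches each entry's `audio_file_path`, and the same guessed JSON layout as above:

```python
import json

import librosa  # assumption: the audio is stored in a format librosa can decode
from huggingface_hub import hf_hub_download

metadata_file_path = hf_hub_download(
    repo_id="llm-lab/SpeechBrown",
    filename="global_metadata.json",
    repo_type="dataset",
)
with open(metadata_file_path, "r") as f:
    metadata = json.load(f)

# Pick one entry; the top-level JSON layout is a guess, adjust as needed.
entry = next(iter(metadata.values())) if isinstance(metadata, dict) else metadata[0]

# Load the waveform at its native sampling rate and pair it with the text.
waveform, sr = librosa.load(entry["audio_file_path"], sr=None)
print(entry["category"], "->", entry["text"])
print(f"{len(waveform) / sr:.2f} seconds of audio at {sr} Hz")
```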

## Citations
If you find our paper, code, data, or models useful, please cite the paper:  
```
@misc{abootorabi2024claspcontrastivelanguagespeechpretraining,
      title={CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval}, 
      author={Mohammad Mahdi Abootorabi and Ehsaneddin Asgari},
      year={2024},
      eprint={2412.13071},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.13071}, 
}
```

## Contact
If you have questions, please email [email protected] or [email protected].