[![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2412.13071) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/language-modeling-lab/CLASP)

## Dataset Summary

**Speech Brown** is a comprehensive, synthetic, and diverse paired speech-text dataset spanning 15 categories, covering a wide range of topics from fiction to religion. It consists of over 55,000 sentence-level samples.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/5dy1Cb3-ZmGytf3QbQN9a.png)
## Dataset Statistics
1. Total size: approximately 30 GB.
2. Number of samples: 55,173 pairs of speech and text.
3. Average tokens per sample: 19.00.
6. Number of unique tokens: 50,667.
7. Categories: the 15 categories are `adventure`, `belles_lettres`, `editorial`, `fiction`, `government`, `hobbies`, `humor`, `learned`, `lore`, `mystery`, `news`, `religion`, `reviews`, `romance`, and `science_fiction`.

## Dataset Structure
To ensure ease of use, the dataset is partitioned into 10 parts. Each part can be used independently if it meets the requirements of your task and model.

### Metadata Files:
1. **global_metadata**: A JSON file containing metadata for all 55,173 samples.
2. **localized_metadata**: A JSON file containing the same metadata, organized by the 10 dataset partitions.

### Metadata Fields:
1. **id**: The unique identifier of the sample.
2. **audio_file_path**: The path of the sample's audio file within the dataset.
3. **category**: The category of the sample's text.
4. **text**: The text corresponding to the audio file.

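Taken together, a single metadata entry pairs these four fields. A minimal sketch of what one entry might look like — the id, path, and text values here are hypothetical, not taken from the dataset:

```python
import json

# Hypothetical metadata entry: the field names follow the list above,
# but every value is invented purely for illustration.
entry = {
    "id": "sample_00001",
    "audio_file_path": "dataset_part1/sample_00001.wav",
    "category": "fiction",
    "text": "An example sentence from the fiction category.",
}

# Every entry should expose exactly the four documented fields.
assert set(entry) == {"id", "audio_file_path", "category", "text"}
print(json.dumps(entry, indent=2))
```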
## Usage Instructions

To use this dataset, download the parts and the metadata files as follows:

### Option 1: Manual Download
Visit the [dataset repository](https://huggingface.co/datasets/llm-lab/SpeechBrown/tree/main) and download all `dataset_partX.zip` files and the `global_metadata.json` file.

### Option 2: Programmatic Download
Use the `huggingface_hub` library to download the files programmatically:

```python
import json
from huggingface_hub import hf_hub_download

# Download the global metadata file (each dataset_partX.zip can be
# fetched the same way by passing its filename).
hf_hub_download(
    repo_id="llm-lab/SpeechBrown",
    filename="global_metadata.json",
    repo_type="dataset",
    local_dir=".",
)

with open('global_metadata.json', 'r') as f:
    metadata = json.load(f)
metadata.keys()
```

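Once the `dataset_partX.zip` archives are downloaded, they still need to be extracted before the `audio_file_path` entries resolve to files on disk. A small helper for that step — the function name and layout are our own sketch, not part of the dataset tooling:

```python
import zipfile

def extract_part(zip_path: str, dest: str = ".") -> list[str]:
    """Extract one dataset part archive and return its member names."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
        return zf.namelist()
```

For example, `extract_part("dataset_part1.zip")` unpacks the first partition into the current directory and lists the files it contained.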

## Citations
If you find our paper, code, data, or models useful, please cite the paper:
```
@misc{abootorabi2024claspcontrastivelanguagespeechpretraining,
}
```

## Contact
If you have questions, please email [email protected] or [email protected].