Update README.md
README.md
CHANGED
@@ -1,208 +1,246 @@

---
license:
language:
- en
- zh
tags:
---
<p style="margin-top: 0;margin-bottom: 0;">
  <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
  <a href="https://github.com/unslothai/unsloth/">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
  </a>
  <a href="https://discord.gg/unsloth">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
  </a>
  <a href="https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning">
    <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
  </a>
</div>
<h1 style="margin-top: 0rem;">✨ Run & Fine-tune TTS models with Unsloth!</h1>
</div>

- Fine-tune TTS models for free using our Google [Colab notebooks here](https://docs.unsloth.ai/get-started/unsloth-notebooks#text-to-speech-tts-notebooks)!
- Read our Blog about TTS support: [unsloth.ai/blog/tts](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning)

| Unsloth supports | Free Notebooks | Performance | Memory use |
|------------------|----------------|-------------|------------|
| **Spark-TTS** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Spark_TTS_(0_5B).ipynb) | 1.5x faster | 58% less |
| **Whisper Large V3** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Whisper.ipynb) | 1.5x faster | 50% less |
| **Qwen3 (14B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 70% less |
| **Llama 3.2 Vision (11B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 1.8x faster | 50% less |

<div align="center">
  <h1>
    Spark-TTS
  </h1>
  <p>
    Official model for <br>
    <b><em>Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens</em></b>
  </p>
  <p>
    <img src="src/logo/SparkTTS.jpg" alt="Spark-TTS Logo" style="width: 200px; height: 200px;">
  </p>
</div>

## Spark-TTS 🔥

### 👉🏻 [Spark-TTS Demos](https://sparkaudio.github.io/spark-tts/) 👈🏻

### 👉🏻 [Github Repo](https://github.com/SparkAudio/Spark-TTS) 👈🏻

### 👉🏻 [Paper](https://arxiv.org/pdf/2503.01710) 👈🏻

### Overview

Spark-TTS is an advanced text-to-speech system that uses the power of large language models (LLMs) for highly accurate and natural-sounding voice synthesis. It is designed to be efficient, flexible, and powerful for both research and production use.

### Key Features

- **Simplicity and Efficiency**: Built entirely on Qwen2.5, Spark-TTS eliminates the need for additional generation models like flow matching. Instead of relying on separate models to generate acoustic features, it directly reconstructs audio from the code predicted by the LLM. This approach streamlines the process, improving efficiency and reducing complexity.
- **High-Quality Voice Cloning**: Supports zero-shot voice cloning, which means it can replicate a speaker's voice even without specific training data for that voice. This is ideal for cross-lingual and code-switching scenarios, allowing for seamless transitions between languages and voices without requiring separate training for each one.
- **Bilingual Support**: Supports both Chinese and English, and is capable of zero-shot voice cloning for cross-lingual and code-switching scenarios, enabling the model to synthesize speech in multiple languages with high naturalness and accuracy.
- **Controllable Speech Generation**: Supports creating virtual speakers by adjusting parameters such as gender, pitch, and speaking rate.

---

<table>
  <tr>
    <td align="center"><b>Inference Overview of Voice Cloning</b><br><img src="src/figures/infer_voice_cloning.png" width="80%" /></td>
  </tr>
  <tr>
    <td align="center"><b>Inference Overview of Controlled Generation</b><br><img src="src/figures/infer_control.png" width="80%" /></td>
  </tr>
</table>

## Install

**Clone and Install**

- Clone the repo

``` sh
git clone https://github.com/SparkAudio/Spark-TTS.git
cd Spark-TTS
```

- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
- Create Conda env:

``` sh
conda create -n sparktts -y python=3.12
conda activate sparktts
pip install -r requirements.txt
# If you are in mainland China, you can set the mirror as follows:
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
```

**Model Download**

Download via python:
```python
from huggingface_hub import snapshot_download

snapshot_download("SparkAudio/Spark-TTS-0.5B", local_dir="pretrained_models/Spark-TTS-0.5B")
```

Download via git clone:
```sh
mkdir -p pretrained_models

git lfs install
git clone https://huggingface.co/SparkAudio/Spark-TTS-0.5B pretrained_models/Spark-TTS-0.5B
```

Run inference via the command line:

``` sh
python -m cli.inference \
    --text "text to synthesize." \
    --device 0 \
    --save_dir "path/to/save/audio" \
    --model_dir pretrained_models/Spark-TTS-0.5B \
    --prompt_text "transcript of the prompt audio" \
    --prompt_speech_path "path/to/prompt_audio"
```

## Citation

```
@misc{wang2025sparktts,
      title={Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens},
      author={Xinsheng Wang and Mingqi Jiang and Ziyang Ma and Ziyu Zhang and Songxiang Liu and Linqin Li and Zheng Liang and Qixi Zheng and Rui Wang and Xiaoqin Feng and Weizhen Bian and Zhen Ye and Sitong Cheng and Ruibin Yuan and Zhixian Zhao and Xinfa Zhu and Jiahao Pan and Liumeng Xue and Pengcheng Zhu and Yunlin Chen and Zhifei Li and Xie Chen and Lei Xie and Yike Guo and Wei Xue},
      year={2025},
      eprint={2503.01710},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2503.01710},
}
```

We advocate for the responsible development and use of AI and encourage the community to uphold safety and ethical principles in AI research and applications. If you have any concerns regarding ethics or misuse, please contact us.

---
license: apache-2.0
tags:
- spark-tts
- text-to-speech
- nonverbal
- emotional
- audio
- speech-synthesis
- huggingface
language:
- en
model-index:
- name: SparkNV-Voice
  results: []
---

# 🔊 SparkNV-Voice

**SparkNV-Voice** is a fine-tuned version of the [Spark-TTS](https://huggingface.co/SparkAudio/Spark-TTS-0.5B) model, trained on the [NonverbalTTS](https://huggingface.co/datasets/deepvk/NonverbalTTS) dataset. It enables expressive speech synthesis with **nonverbal cues** (such as laughter, sighs, and sneezing) and rich emotional tone.

Built for applications that require **natural, human-like vocalization**, the model generates **semantic tokens** with **global prosody control** and reconstructs audio through BiCodec detokenization.
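
Under the hood, the model is prompted with Spark-TTS's TTS task template and asked to continue the prompt with BiCodec audio tokens. The sketch below only restates the token layout used by the inference code further down (the actual generated IDs vary from run to run):

```python
# Prompt layout consumed by the model (same construction as in
# generate_speech_from_text in the Inference Code section below):
prompt = (
    "<|task_tts|>"
    "<|start_content|>"
    "Hello there."
    "<|end_content|>"
    "<|start_global_token|>"
)

# The model continues with BiCodec audio tokens such as
#   <|bicodec_global_123|> ... <|bicodec_semantic_456|> ...
# Their integer IDs are extracted with regexes and decoded to a waveform by
# BiCodecTokenizer.detokenize(global_ids, semantic_ids).
```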

---

## 🧾 Model Details

- **Base**: [`SparkAudio/Spark-TTS-0.5B`](https://huggingface.co/SparkAudio/Spark-TTS-0.5B)
- **Dataset**: [`deepvk/NonverbalTTS`](https://huggingface.co/datasets/deepvk/NonverbalTTS)
- **Architecture**: Causal language model + BiCodec for audio token generation
- **Language**: English
- **Voice**: Single-speaker (no multi-speaker conditioning)

---

## 🛠 Installation

To run this model, install the required dependencies and clone the Spark-TTS repository (it provides the `BiCodecTokenizer` used for audio detokenization):

```bash
pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl triton cut_cross_entropy unsloth_zoo
pip install sentencepiece protobuf "datasets>=3.4.1,<4.0.0" "huggingface_hub>=0.34.0" hf_transfer
pip install --no-deps unsloth
git clone https://github.com/SparkAudio/Spark-TTS
pip install omegaconf einx
```
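
A quick environment sanity check (a minimal sketch; it assumes a CUDA-capable machine and that you run it from the directory containing the cloned `Spark-TTS/` folder):

```python
# Verify that Unsloth and the Spark-TTS BiCodec tokenizer code are importable.
import sys

sys.path.append("Spark-TTS")  # path to the cloned repository

import unsloth  # noqa: F401
from sparktts.models.audio_tokenizer import BiCodecTokenizer  # noqa: F401

print("Dependencies and Spark-TTS code are importable.")
```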

---

## 🚀 Inference Code

```python
import torch
import re
import numpy as np
from typing import Dict, Any
import torchaudio.transforms as T
from unsloth import FastModel
import sys
sys.path.append('Spark-TTS')
from sparktts.models.audio_tokenizer import BiCodecTokenizer
from huggingface_hub import snapshot_download

# Download model and code
snapshot_download("yasserrmd/SparkNV-Voice", local_dir = "SparkNV-Voice")

max_seq_length = 2048  # Choose any for long context!
model, tokenizer = FastModel.from_pretrained(
    model_name = "SparkNV-Voice",
    max_seq_length = max_seq_length,
    dtype = torch.float32,   # Spark seems to only work on float32 for now
    full_finetuning = True,  # We support full finetuning now!
    load_in_4bit = False,
    # token = "hf_...",      # use one if using gated models like meta-llama/Llama-2-7b-hf
)

FastModel.for_inference(model)  # Enable native 2x faster inference

audio_tokenizer = BiCodecTokenizer("SparkNV-Voice", "cuda")
audio_tokenizer.model.to("cuda")

input_text = "Hey there, my name is Yasser, and I'm a 🌬️ speech generation model that can sound like a person."
chosen_voice = None  # None for single-speaker

@torch.inference_mode()
def generate_speech_from_text(
    text: str,
    temperature: float = 0.8,          # Generation temperature
    top_k: int = 50,                   # Generation top_k
    top_p: float = 1,                  # Generation top_p
    max_new_audio_tokens: int = 2048,  # Max tokens for the audio part
    device: torch.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
) -> np.ndarray:
    """
    Generates speech audio from text using default voice control parameters.

    Args:
        text (str): The text input to be converted to speech.
        temperature (float): Sampling temperature for generation.
        top_k (int): Top-k sampling parameter.
        top_p (float): Top-p (nucleus) sampling parameter.
        max_new_audio_tokens (int): Max number of new tokens to generate (limits audio length).
        device (torch.device): Device to run inference on.

    Returns:
        np.ndarray: Generated waveform as a NumPy array.
    """
    torch.compiler.reset()

    prompt = "".join([
        "<|task_tts|>",
        "<|start_content|>",
        text,
        "<|end_content|>",
        "<|start_global_token|>"
    ])

    model_inputs = tokenizer([prompt], return_tensors="pt").to(device)

    print("Generating token sequence...")
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=max_new_audio_tokens,  # Limit generation length
        do_sample=True,
        temperature=temperature,
        top_k=top_k,
        top_p=top_p,
        eos_token_id=tokenizer.eos_token_id,  # Stop token
        pad_token_id=tokenizer.pad_token_id   # Use the model's pad token id
    )
    print("Token sequence generated.")

    generated_ids_trimmed = generated_ids[:, model_inputs.input_ids.shape[1]:]

    predicts_text = tokenizer.batch_decode(generated_ids_trimmed, skip_special_tokens=False)[0]
    # print(f"\nGenerated Text (for parsing):\n{predicts_text}\n")  # Debugging

    # Extract semantic token IDs using regex
    semantic_matches = re.findall(r"<\|bicodec_semantic_(\d+)\|>", predicts_text)
    if not semantic_matches:
        print("Warning: No semantic tokens found in the generated output.")
        # Handle appropriately - perhaps return silence or raise an error
        return np.array([], dtype=np.float32)

    pred_semantic_ids = torch.tensor([int(token) for token in semantic_matches]).long().unsqueeze(0)  # Add batch dim

    # Extract global token IDs using regex (assuming controllable mode also generates these)
    global_matches = re.findall(r"<\|bicodec_global_(\d+)\|>", predicts_text)
    if not global_matches:
        print("Warning: No global tokens found in the generated output (controllable mode). Might use defaults or fail.")
        pred_global_ids = torch.zeros((1, 1), dtype=torch.long)
    else:
        pred_global_ids = torch.tensor([int(token) for token in global_matches]).long().unsqueeze(0)  # Add batch dim

    pred_global_ids = pred_global_ids.unsqueeze(0)  # Shape becomes (1, 1, N_global)

    print(f"Found {pred_semantic_ids.shape[1]} semantic tokens.")
    print(f"Found {pred_global_ids.shape[2]} global tokens.")

    # Detokenize using BiCodecTokenizer
    print("Detokenizing audio tokens...")
    # Ensure audio_tokenizer and its internal model are on the correct device
    audio_tokenizer.device = device
    audio_tokenizer.model.to(device)
    # Squeeze the extra dimension from the global tokens, as in the Spark-TTS example
    wav_np = audio_tokenizer.detokenize(
        pred_global_ids.to(device).squeeze(0),  # Shape (1, N_global)
        pred_semantic_ids.to(device)            # Shape (1, N_semantic)
    )
    print("Detokenization complete.")

    return wav_np

if __name__ == "__main__":
    print(f"Generating speech for: '{input_text}'")
    text = f"{chosen_voice}: " + input_text if chosen_voice else input_text
    generated_waveform = generate_speech_from_text(text)

    if generated_waveform.size > 0:
        import soundfile as sf
        output_filename = "generated_speech_controllable.wav"
        sample_rate = audio_tokenizer.config.get("sample_rate", 16000)
        sf.write(output_filename, generated_waveform, sample_rate)
        print(f"Audio saved to {output_filename}")

        # Optional: Play in a notebook
        from IPython.display import Audio, display
        display(Audio(generated_waveform, rate=sample_rate))
    else:
        print("Audio generation failed (no tokens found?).")
```

---

## 🎛️ Supported Nonverbal Cues

The model is fine-tuned on sequences containing:

* `<|laughing|>`
* `<|sighing|>`
* `<|groaning|>`
* `<|grunting|>`
* `<|sniffing|>`
* `<|sneezing|>`
* `<|breathing|>`
* `<|coughing|>`
* `<|snoring|>`
* `<|throat_clearing|>`

You can combine these with your prompt to guide tone and emotion, or rely on semantic token generation.
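
For example, cues can be embedded inline in the text passed to the `generate_speech_from_text` helper from the Inference Code section (an illustrative sketch; how convincingly a given cue is rendered depends on sampling and on how often it appears in the fine-tuning data):

```python
import soundfile as sf

# Embed nonverbal cues directly in the text to be synthesized.
expressive_text = (
    "I can't believe you actually did that! <|laughing|> "
    "Okay, okay... <|breathing|> let me catch my breath."
)

waveform = generate_speech_from_text(expressive_text, temperature=0.8, top_k=50)

if waveform.size > 0:
    sample_rate = audio_tokenizer.config.get("sample_rate", 16000)
    sf.write("expressive_speech.wav", waveform, sample_rate)
```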

---

## 🧠 Dataset Highlights: `NonverbalTTS`

* 17+ hours of annotated emotional & nonverbal English speech
* Automatic + human-validated labels
* Sources: VoxCeleb, Expresso
* Paper: [arXiv:2507.13155](https://arxiv.org/abs/2507.13155)
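
To browse the corpus yourself, it can be pulled with the `datasets` library installed earlier (a minimal sketch that assumes the default configuration of `deepvk/NonverbalTTS`; see the dataset card for the exact split and column names):

```python
from datasets import load_dataset

# Download the fine-tuning corpus and inspect its structure.
nonverbal_tts = load_dataset("deepvk/NonverbalTTS")

print(nonverbal_tts)                   # available splits and row counts
first_split = next(iter(nonverbal_tts))
print(nonverbal_tts[first_split][0])   # one annotated example
```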

---

## 📜 License

Apache 2.0: free for commercial and academic use.

---

## 🤝 Credits

* Base model: [`SparkAudio/Spark-TTS-0.5B`](https://huggingface.co/SparkAudio/Spark-TTS-0.5B)
* Dataset: [`deepvk/NonverbalTTS`](https://huggingface.co/datasets/deepvk/NonverbalTTS)
* Author: [`@yasserrmd`](https://huggingface.co/yasserrmd)

---

## 💬 Feedback & Contributions

Open a discussion or issue on this repo. Contributions are welcome!