Commit e7836b2 (verified)
Parent(s):
Duplicate from microsoft/VibeVoice-1.5B
Co-authored-by: FW <[email protected]>
- .gitattributes +37 -0
- README.md +87 -0
- config.json +115 -0
- figures/Fig1.png +3 -0
- model-00001-of-00003.safetensors +3 -0
- model-00002-of-00003.safetensors +3 -0
- model-00003-of-00003.safetensors +3 -0
- model.safetensors.index.json +0 -0
- preprocessor_config.json +13 -0
.gitattributes
ADDED
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
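These rules route large binary artifacts (model shards, images, archives) through Git LFS rather than storing them directly in the repository. As a quick illustration (not part of the commit), the slash-free patterns behave like shell globs against a path; a minimal check with Python's standard-library `fnmatch` shows which of this commit's files they capture:

```python
from fnmatch import fnmatch

# A subset of the LFS patterns from the .gitattributes above.
lfs_patterns = ["*.safetensors", "*.png", "*tfevents*"]

for path in ["model-00001-of-00003.safetensors", "figures/Fig1.png", "config.json"]:
    is_lfs = any(fnmatch(path, p) for p in lfs_patterns)
    print(f"{path}: {'Git LFS pointer' if is_lfs else 'stored directly'}")
```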
README.md
ADDED
@@ -0,0 +1,87 @@
---
license: mit
language:
- en
- zh
pipeline_tag: text-to-speech
tags:
- Podcast
---

## VibeVoice: A Frontier Open-Source Text-to-Speech Model

VibeVoice is a novel framework for generating expressive, long-form, multi-speaker conversational audio, such as podcasts, from text. It addresses significant challenges in traditional text-to-speech (TTS) systems, particularly scalability, speaker consistency, and natural turn-taking.

A core innovation of VibeVoice is its use of continuous speech tokenizers (acoustic and semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers preserve audio fidelity while significantly boosting computational efficiency on long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.

The model can synthesize speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1-2 speaker limits of many prior models.

➡️ **Technical Report:** [VibeVoice Technical Report](https://arxiv.org/abs/2508.19205)

➡️ **Project Page:** [microsoft/VibeVoice](https://microsoft.github.io/VibeVoice)

➡️ **Code:** [microsoft/VibeVoice-Code](https://github.com/microsoft/VibeVoice)

<p align="left">
  <img src="figures/Fig1.png" alt="VibeVoice Overview" height="250px">
</p>

## Training Details

VibeVoice couples a Transformer-based Large Language Model (LLM) with specialized acoustic and semantic tokenizers and a diffusion-based decoding head.

- LLM: [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) for this release.
- Tokenizers:
  - Acoustic Tokenizer: Based on a σ-VAE variant (proposed in [LatentLM](https://arxiv.org/pdf/2412.08635)), with a mirror-symmetric encoder-decoder structure featuring 7 stages of modified Transformer blocks. Achieves 3200x downsampling from 24 kHz input (see the sketch after this list). Encoder and decoder are ~340M parameters each.
  - Semantic Tokenizer: The encoder mirrors the Acoustic Tokenizer's architecture (without the VAE components) and is trained with an ASR proxy task.
- Diffusion Head: A lightweight module (4 layers, ~123M parameters) conditioned on LLM hidden states. It predicts acoustic VAE features via a Denoising Diffusion Probabilistic Models (DDPM) process, using Classifier-Free Guidance (CFG) and DPM-Solver (and variants) at inference.
- Context Length: Trained with a curriculum increasing up to 65,536 tokens.
- Training Stages:
  - Tokenizer Pre-training: The acoustic and semantic tokenizers are pre-trained separately.
  - VibeVoice Training: The pre-trained tokenizers are frozen; only the LLM and diffusion-head parameters are trained, with a curriculum over input sequence length (4K -> 16K -> 32K -> 64K). The text tokenizer is not explicitly specified, but the LLM (Qwen2.5) typically uses its own; audio is "tokenized" via the acoustic and semantic tokenizers.
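The 7.5 Hz frame rate and the 3200x compression ratio are consistent with the encoder strides listed in this repo's config.json; a quick arithmetic check (my own illustration, not code from the release):

```python
import math

# Encoder downsampling ratios from config.json ("encoder_ratios").
encoder_ratios = [8, 5, 5, 4, 2, 2]
compress_ratio = math.prod(encoder_ratios)   # 8*5*5*4*2*2 = 3200

sampling_rate_hz = 24_000                    # 24 kHz input audio
frame_rate_hz = sampling_rate_hz / compress_ratio

print(compress_ratio)   # 3200, matching "speech_tok_compress_ratio"
print(frame_rate_hz)    # 7.5, the ultra-low tokenizer frame rate in Hz
```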
## Models

| Model | Context Length | Generation Length | Weight |
|-------|----------------|-------------------|--------|
| VibeVoice-0.5B-Streaming | - | - | On the way |
| VibeVoice-1.5B | 64K | ~90 min | You are here. |
| VibeVoice-7B-Preview | 32K | ~45 min | [HF link](https://huggingface.co/WestZhang/VibeVoice-Large-pt) |

## Installation and Usage

Please refer to the [GitHub README](https://github.com/microsoft/VibeVoice?tab=readme-ov-file#installation).
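For orientation only, a minimal usage sketch follows. It assumes the release loads via the `transformers`-style `from_pretrained` pattern using the `VibeVoiceProcessor` and `VibeVoiceForConditionalGeneration` classes named in this repo's preprocessor_config.json and config.json; the import path, call signatures, and generation arguments are guesses, so treat the GitHub README as authoritative.

```python
# Hypothetical sketch -- the real APIs live in the microsoft/VibeVoice repo.
import torch

# Assumed import path; the class names come from this repo's config files.
from vibevoice import VibeVoiceProcessor, VibeVoiceForConditionalGeneration

model_id = "microsoft/VibeVoice-1.5B"
processor = VibeVoiceProcessor.from_pretrained(model_id)
model = VibeVoiceForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16  # matches "torch_dtype" in config.json
)

script = "Speaker 1: Welcome to the show.\nSpeaker 2: Glad to be here."
inputs = processor(text=script, return_tensors="pt")  # assumed signature
audio = model.generate(**inputs)                      # assumed signature

# Output audio is 24 kHz per preprocessor_config.json ("sampling_rate": 24000).
```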
## Responsible Usage

### Direct intended uses
The VibeVoice model is limited to research use exploring highly realistic audio dialogue generation, as detailed in the [tech report](https://github.com/microsoft/VibeVoice/blob/main/report/TechnicalReport.pdf).

### Out-of-scope uses
Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the MIT License. Use to generate any text transcript. Furthermore, this release is not intended or licensed for any of the following scenarios:

- Voice impersonation without explicit, recorded consent – cloning a real individual's voice for satire, advertising, ransom, social-engineering, or authentication bypass.
- Disinformation or impersonation – creating audio presented as genuine recordings of real people or events.
- Real-time or low-latency voice conversion – telephone or video-conference "live deep-fake" applications.
- Unsupported languages – the model is trained only on English and Chinese data; outputs in other languages are unsupported and may be unintelligible or offensive.
- Generation of background ambience, Foley, or music – VibeVoice is speech-only and will not produce coherent non-speech audio.

## Risks and limitations

While efforts have been made to optimize the model through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5-1.5B in this release).

- Potential for deepfakes and disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
- English and Chinese only: Transcripts in languages other than English or Chinese may result in unexpected audio outputs.
- Non-speech audio: The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.
- Overlapping speech: The current model does not explicitly model or generate overlapping speech segments in conversations.

## Recommendations

We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.

To mitigate the risks of misuse, we have:

- Embedded an audible disclaimer (e.g., "This segment was generated by AI") automatically into every synthesized audio file.
- Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance. Please see the contact information at the end of this model card.
- Logged inference requests (hashed) for abuse-pattern detection, with aggregated statistics published quarterly.

Users are responsible for sourcing their datasets legally and ethically. This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice. Users are reminded to be mindful of data privacy concerns.

## Contact

This project was conducted by members of Microsoft Research. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at [email protected].

If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
config.json
ADDED
@@ -0,0 +1,115 @@
{
  "acoustic_vae_dim": 64,
  "acoustic_tokenizer_config": {
    "causal": true,
    "channels": 1,
    "conv_bias": true,
    "conv_norm": "none",
    "corpus_normalize": 0.0,
    "decoder_depths": null,
    "decoder_n_filters": 32,
    "decoder_ratios": [8, 5, 5, 4, 2, 2],
    "disable_last_norm": true,
    "encoder_depths": "3-3-3-3-3-3-8",
    "encoder_n_filters": 32,
    "encoder_ratios": [8, 5, 5, 4, 2, 2],
    "fix_std": 0.5,
    "layer_scale_init_value": 1e-06,
    "layernorm": "RMSNorm",
    "layernorm_elementwise_affine": true,
    "layernorm_eps": 1e-05,
    "mixer_layer": "depthwise_conv",
    "model_type": "vibevoice_acoustic_tokenizer",
    "pad_mode": "constant",
    "std_dist_type": "gaussian",
    "vae_dim": 64,
    "weight_init_value": 0.01
  },
  "architectures": ["VibeVoiceForConditionalGeneration"],
  "decoder_config": {
    "attention_dropout": 0.0,
    "hidden_act": "silu",
    "hidden_size": 1536,
    "initializer_range": 0.02,
    "intermediate_size": 8960,
    "max_position_embeddings": 65536,
    "max_window_layers": 28,
    "model_type": "qwen2",
    "num_attention_heads": 12,
    "num_hidden_layers": 28,
    "num_key_value_heads": 2,
    "rms_norm_eps": 1e-06,
    "rope_scaling": null,
    "rope_theta": 1000000.0,
    "sliding_window": null,
    "tie_word_embeddings": true,
    "torch_dtype": "bfloat16",
    "use_cache": true,
    "use_sliding_window": false,
    "vocab_size": 151936
  },
  "diffusion_head_config": {
    "ddpm_batch_mul": 4,
    "ddpm_beta_schedule": "cosine",
    "ddpm_num_inference_steps": 20,
    "ddpm_num_steps": 1000,
    "diffusion_type": "ddpm",
    "head_ffn_ratio": 3.0,
    "head_layers": 4,
    "hidden_size": 1536,
    "latent_size": 64,
    "model_type": "vibevoice_diffusion_head",
    "prediction_type": "v_prediction",
    "rms_norm_eps": 1e-05,
    "speech_vae_dim": 64
  },
  "model_type": "vibevoice",
  "semantic_tokenizer_config": {
    "causal": true,
    "channels": 1,
    "conv_bias": true,
    "conv_norm": "none",
    "corpus_normalize": 0.0,
    "disable_last_norm": true,
    "encoder_depths": "3-3-3-3-3-3-8",
    "encoder_n_filters": 32,
    "encoder_ratios": [8, 5, 5, 4, 2, 2],
    "fix_std": 0,
    "layer_scale_init_value": 1e-06,
    "layernorm": "RMSNorm",
    "layernorm_elementwise_affine": true,
    "layernorm_eps": 1e-05,
    "mixer_layer": "depthwise_conv",
    "model_type": "vibevoice_semantic_tokenizer",
    "pad_mode": "constant",
    "std_dist_type": "none",
    "vae_dim": 128,
    "weight_init_value": 0.01
  },
  "semantic_vae_dim": 128,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.51.3"
}
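The `diffusion_head_config` above maps onto a standard DDPM noise schedule. As a hedged illustration (my own, using the Hugging Face `diffusers` scheduler rather than VibeVoice's internal diffusion head, and assuming `squaredcos_cap_v2` is the right analogue of the "cosine" schedule named here):

```python
# Illustration only: shows how diffusion_head_config's values correspond
# to a generic DDPM scheduler, not the release's actual implementation.
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(
    num_train_timesteps=1000,            # "ddpm_num_steps"
    beta_schedule="squaredcos_cap_v2",   # assumed analogue of "cosine"
    prediction_type="v_prediction",      # "prediction_type"
)

# At inference, only 20 of the 1000 training steps are used
# ("ddpm_num_inference_steps": 20).
scheduler.set_timesteps(num_inference_steps=20)
print(scheduler.timesteps)  # 20 timesteps spaced across the 1000-step schedule
```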
figures/Fig1.png
ADDED
Binary image (stored via Git LFS).
model-00001-of-00003.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5f0a61ddeaeb028e3af540ba4dee7933ad30f9f30b6e1320dd9c875a2daa033
size 1975317828
model-00002-of-00003.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81c3891f7b2493eb48a9eb6f5be0df48d4f1a4bfd952d84e21683ca6d0bf7969
size 1983051688
model-00003-of-00003.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb6e7e5e86b4a41fffbe1f3aaf445d0d50b5e21ed47574101b777f77d75fa196
size 1449832938
model.safetensors.index.json
ADDED
The diff for this file is too large to render.
preprocessor_config.json
ADDED
@@ -0,0 +1,13 @@
{
  "processor_class": "VibeVoiceProcessor",
  "speech_tok_compress_ratio": 3200,
  "db_normalize": true,
  "audio_processor": {
    "feature_extractor_type": "VibeVoiceTokenizerProcessor",
    "sampling_rate": 24000,
    "normalize_audio": true,
    "target_dB_FS": -25,
    "eps": 1e-06
  },
  "language_model_pretrained_name": "Qwen/Qwen2.5-1.5B"
}
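The `db_normalize` and `target_dB_FS: -25` fields imply loudness normalization of input audio before tokenization. Below is a minimal sketch of RMS-based dB FS normalization; this is my own reading of those fields, not the release's actual preprocessing code.

```python
import numpy as np

def normalize_to_db_fs(audio: np.ndarray, target_db_fs: float = -25.0,
                       eps: float = 1e-6) -> np.ndarray:
    """Scale a [-1, 1] float waveform so its RMS level sits at target_db_fs."""
    rms = np.sqrt(np.mean(audio ** 2))
    gain = 10.0 ** (target_db_fs / 20.0) / (rms + eps)  # eps matches the config
    return audio * gain

# Example: one second of quiet 24 kHz noise, brought up to -25 dB FS.
audio = 0.01 * np.random.randn(24_000).astype(np.float32)
out = normalize_to_db_fs(audio)
print(20 * np.log10(np.sqrt(np.mean(out ** 2))))  # ~ -25.0
```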