<p>
</p>
</p>
<a href="https://arxiv.org/abs/2505.17426" style="color:red">Paper</a> |
<a href="https://huggingface.co/IDEA-Emdoor/DistilCodec-v1.0" style="color:#FFD700">HuggingFace Model</a> |
<a href="https://github.com/IDEA-Emdoor-Lab/DistilCodec" style="color:gray">Code</a>
<p>
<img src="./idea_logo.png" alt="Institution 1" style="width: 200px; height: 60px;">
</p>
<p>
<img src="./yidao_logo.png" alt="Institution 2" style="width: 200px; height: 60px;">
<img src="./yijiayiban.png" alt="Institution 3" style="width: 200px; height: 60px;">
</p>
# 🔥 News

- *2025.05.26*: We release the DistilCodec-v1.0 checkpoint on [HuggingFace](https://huggingface.co/IDEA-Emdoor/DistilCodec-v1.0).
- *2025.05.26*: The paper is available on [arXiv](https://arxiv.org/abs/2505.17426).
- *2025.05.23*: We submitted the paper to arXiv.

## Introduction of DistilCodec

The Joint Laboratory of the International Digital Economy Academy (IDEA) and Emdoor, in collaboration with Emdoor Information Technology Co., Ltd. and Shenzhen Yijiayiban Information Technology Co., Ltd., has launched DistilCodec, a single-codebook Neural Audio Codec (NAC) with 32768 codes trained on universal audio. The foundational network architecture of DistilCodec adopts an Encoder-VQ-Decoder framework.
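The VQ stage of an Encoder-VQ-Decoder codec maps each encoder output frame to the nearest entry of the codebook. The following NumPy sketch is purely illustrative and is not DistilCodec's actual API: only the 32768-entry single codebook comes from the description above, while the 32-dimensional latent size and the random codebook are assumptions for demonstration.

```python
import numpy as np

# Illustrative single-codebook VQ step. The codebook size (32768) follows the
# README; the 32-dim latent and random initialization are made-up placeholders.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((32768, 32)).astype(np.float32)

def quantize(latents: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Map each latent frame to its nearest codebook entry (L2 distance)."""
    # (frames, 1, dim) - (1, codes, dim) broadcasts to (frames, codes, dim)
    dists = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
    ids = dists.argmin(axis=1)       # one token id in [0, 32768) per frame
    return ids, codebook[ids]        # ids feed an LM; vectors feed the decoder

frames = rng.standard_normal((4, 32)).astype(np.float32)
token_ids, quantized = quantize(frames)
```

A single codebook means each frame becomes exactly one integer token, which is what makes the discretized audio directly usable as an LLM vocabulary.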
## Available DistilCodec models

| Model Version | Huggingface | Corpus | Token/s | Domain |
|---------------|-------------|--------|---------|--------|
| DistilCodec-v1.0 | [HuggingFace](https://huggingface.co/IDEA-Emdoor/DistilCodec-v1.0) | Universal Audio | 93 | Universal Audio |
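As a sanity check on the table: a single codebook with 32768 entries carries log2(32768) = 15 bits per token, so 93 tokens per second corresponds to roughly 1.4 kbps:

```python
import math

codebook_size = 32768   # single-codebook size from the introduction
tokens_per_sec = 93     # Token/s column above

bits_per_token = math.log2(codebook_size)
bitrate_bps = tokens_per_sec * bits_per_token
print(f"{bits_per_token:.0f} bits/token -> {bitrate_bps:.0f} bps")
# 15 bits/token -> 1395 bps
```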
## Citation

If you find our work useful in your research, please cite our work:

```
@misc{wang2025unittsendtoendttsdecoupling,
      title={UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information},
      author={Rui Wang and Qianguo Sun and Tianrong Chen and Zhiyun Zeng and Junlong Wu and Jiaxing Zhang},
      year={2025},
      eprint={2505.17426},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2505.17426},
}
```
## Disclaimer

DistilCodec provides the capability of universal audio discretization only for academic research purposes. We encourage the community to uphold safety and ethical principles in AI research and applications.

Important Notes:

- Compliance with the model's open-source license is mandatory.
- Unauthorized voice replication applications are strictly prohibited.
- Developers bear no responsibility for any misuse of this model.
## License

<a href="https://arxiv.org/abs/2505.17426">UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information</a> © 2025 by <a href="https://creativecommons.org">Rui Wang, Qianguo Sun, Tianrong Chen, Zhiyun Zeng, Junlong Wu, Jiaxing Zhang</a> is licensed under <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND 4.0</a> <img src="https://mirrors.creativecommons.org/presskit/icons/cc.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/by.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/nc.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/nd.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;">