Update README.md
README.md CHANGED
@@ -105,4 +105,16 @@ The Transformer models in this repository are licensed under the MIT License. Th
 
 ## Acknowledgements
 - The VAE component is from `FLUX.1 [schnell]`, licensed under Apache 2.0.
-- The text encoders are from `google/t5-v1_1-xxl` (licensed under Apache 2.0) and `meta-llama/Meta-Llama-3.1-8B-Instruct` (licensed under the Llama 3.1 Community License Agreement).
+- The text encoders are from `google/t5-v1_1-xxl` (licensed under Apache 2.0) and `meta-llama/Meta-Llama-3.1-8B-Instruct` (licensed under the Llama 3.1 Community License Agreement).
+
+
+## Citation
+
+```bibtex
+@article{hidreami1technicalreport,
+  title={HiDream-I1: A High-Efficient Image Generative Foundation Model with Sparse Diffusion Transformer},
+  author={Cai, Qi and Chen, Jingwen and Chen, Yang and Li, Yehao and Long, Fuchen and Pan, Yingwei and Qiu, Zhaofan and Zhang, Yiheng and Gao, Fengbin and Xu, Peihan and others},
+  journal={arXiv preprint arXiv:2505.22705},
+  year={2025}
+}
+```