Update README.md
README.md CHANGED
@@ -46,9 +46,9 @@ language:
 <br>
 <a href="https://x-dyna.github.io/xdyna.github.io/">Project Page</a>
 ·
-<a href="https://github.com/
+<a href="https://github.com/bytedance/X-Dyna">Code</a>
 ·
-<a href="
+<a href="">Paper</a>
 </p>
 
 
@@ -90,7 +90,7 @@ a) IP-Adapter encodes the reference image as an image CLIP embedding and injects
 
 
 ## 🧱 Download Pretrained Models
-Due to restrictions we are not able to release the model pretrained with in-house data. Instead, we re-train our model on public datasets, e.g. [
+Due to restrictions, we are not able to release the model pretrained with in-house data. Instead, we re-train our model on public datasets, e.g. [HumanVid](https://github.com/zhenzhiwang/HumanVid), and other human video data for research use, e.g. [Pexels](https://www.pexels.com/). We follow the implementation details in our paper and release the pretrained weights and other necessary network modules in this huggingface repository. The Stable Diffusion 1.5 UNet can be found [here](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5); place it under pretrained_weights/unet_initialization/SD. After downloading, please put all of them under the pretrained_weights folder. Your file structure should look like this:
 
 ```bash
 X-Dyna
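The added paragraph describes a concrete step: fetching the Stable Diffusion 1.5 UNet and placing it under pretrained_weights/unet_initialization/SD. As a minimal sketch (not part of the commit), assuming the standard huggingface_hub CLI is installed and that the UNet subfolder is what belongs in that directory, the download could look like this; the id of the project's own weights repository is not shown in this diff, so only the UNet step is sketched:

```bash
# Minimal sketch, not from the commit: pull the Stable Diffusion 1.5 UNet from
# the repository linked in the README into the folder the README names.
# Assumes `pip install "huggingface_hub[cli]"` has been run; the exact
# subfolder layout the project expects may differ, so check the released
# file tree before running this.
huggingface-cli download stable-diffusion-v1-5/stable-diffusion-v1-5 \
  --include "unet/*" \
  --local-dir pretrained_weights/unet_initialization/SD
```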