aimin committed (verified)
Commit fa645e8 · 1 Parent(s): 505158b

Upload README.md with huggingface_hub

Files changed (1): README.md (+117 -3)

README.md CHANGED
@@ -1,3 +1,117 @@
- ---
- license: apache-2.0
- ---
# ViViD
ViViD: Video Virtual Try-on using Diffusion Models

[![arXiv](https://img.shields.io/badge/arXiv-2405.11794-b31b1b.svg)](https://arxiv.org/abs/2405.11794)
[![Project Page](https://img.shields.io/badge/Project-Website-green)](https://alibaba-yuanjing-aigclab.github.io/ViViD)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-yellow)](https://huggingface.co/alibaba-yuanjing-aigclab/ViViD)

## Installation

```
git clone https://github.com/alibaba-yuanjing-aigclab/ViViD
cd ViViD
```

### Environment
```
conda create -n vivid python=3.10
conda activate vivid
pip install -r requirements.txt
```

### Weights
You can place the weights anywhere you like, for example ```./ckpts```. If you put them somewhere else, just update the paths in ```./configs/prompts/*.yaml```.

#### Stable Diffusion Image Variations
```
cd ckpts

git lfs install
git clone https://huggingface.co/lambdalabs/sd-image-variations-diffusers
```

#### SD-VAE-ft-mse
```
git lfs install
git clone https://huggingface.co/stabilityai/sd-vae-ft-mse
```

#### Motion Module
Download the [mm_sd_v15_v2](https://huggingface.co/guoyww/animatediff/blob/main/mm_sd_v15_v2.ckpt) checkpoint.

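If you prefer to script this step, the checkpoint can also be fetched with huggingface_hub (a minimal sketch; saving into ```./ckpts``` is just an example location):

```python
# Hypothetical download helper using huggingface_hub (not part of the repo).
from huggingface_hub import hf_hub_download

# Fetch the AnimateDiff motion module checkpoint into ./ckpts.
ckpt_path = hf_hub_download(
    repo_id="guoyww/animatediff",
    filename="mm_sd_v15_v2.ckpt",
    local_dir="./ckpts",
)
print(f"Motion module saved to {ckpt_path}")
```
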
#### ViViD
```
git lfs install
git clone https://huggingface.co/alibaba-yuanjing-aigclab/ViViD
```
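
After these downloads, one possible layout (assuming everything was placed under ```./ckpts``` as in the commands above; if yours differs, remember to update ```./configs/prompts/*.yaml```) looks like:

```text
./ckpts/
|-- sd-image-variations-diffusers/
|-- sd-vae-ft-mse/
|-- mm_sd_v15_v2.ckpt
|-- ViViD/
```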

## Inference
We provide two demos in ```./configs/prompts/```. Run the following commands to give them a try 😼.

```
python vivid.py --config ./configs/prompts/upper1.yaml

python vivid.py --config ./configs/prompts/lower1.yaml
```
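
To run every demo config in one go, a small wrapper like the one below works (a hypothetical convenience script, not part of the repo; it simply repeats the command above for each YAML file found):

```python
# run_all_demos.py - hypothetical helper, not shipped with ViViD.
# Runs vivid.py once for every config in ./configs/prompts/.
import subprocess
import sys
from pathlib import Path

for config in sorted(Path("./configs/prompts").glob("*.yaml")):
    print(f"Running demo for {config} ...")
    # Same invocation as the commands above, parameterised over the config file.
    result = subprocess.run([sys.executable, "vivid.py", "--config", str(config)])
    if result.returncode != 0:
        print(f"{config} exited with code {result.returncode}", file=sys.stderr)
```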

## Data
As illustrated by the examples in ```./data```, the following inputs should be provided:
```text
./data/
|-- agnostic
|   |-- video1.mp4
|   |-- video2.mp4
|   ...
|-- agnostic_mask
|   |-- video1.mp4
|   |-- video2.mp4
|   ...
|-- cloth
|   |-- cloth1.jpg
|   |-- cloth2.jpg
|   ...
|-- cloth_mask
|   |-- cloth1.jpg
|   |-- cloth2.jpg
|   ...
|-- densepose
|   |-- video1.mp4
|   |-- video2.mp4
|   ...
|-- videos
|   |-- video1.mp4
|   |-- video2.mp4
|   ...
```
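
Before running on your own data, a quick consistency check can catch missing files early. The snippet below is a hypothetical helper (not shipped with the repo) that only mirrors the layout shown above:

```python
# check_data.py - hypothetical helper, not part of the ViViD repo.
# Verifies that every video in ./data/videos has the matching auxiliary inputs
# shown in the layout above, and that every cloth image has a mask.
from pathlib import Path

DATA = Path("./data")

def check_layout() -> list[str]:
    problems = []
    for video in sorted((DATA / "videos").glob("*.mp4")):
        for aux in ("agnostic", "agnostic_mask", "densepose"):
            if not (DATA / aux / video.name).exists():
                problems.append(f"missing {aux}/{video.name}")
    for cloth in sorted((DATA / "cloth").glob("*.jpg")):
        if not (DATA / "cloth_mask" / cloth.name).exists():
            problems.append(f"missing cloth_mask/{cloth.name}")
    return problems

if __name__ == "__main__":
    issues = check_layout()
    print("\n".join(issues) if issues else "Data layout looks complete.")
```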

### Agnostic and agnostic_mask video
This part is a bit more involved; you can obtain the agnostic videos and their masks in any of the following three ways:
1. Follow [OOTDiffusion](https://github.com/levihsu/OOTDiffusion) to extract them frame by frame (recommended).
2. Use [SAM](https://github.com/facebookresearch/segment-anything) + Gaussian blur (see ```./tools/sam_agnostic.py``` for an example; a rough sketch of the idea follows below).
3. Use a mask editor tool.

Note that the shape and size of the agnostic area may affect the try-on results.
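
For option 2, the rough flow is: segment the garment region in a frame with SAM, blur and threshold that mask so it generously covers the clothing, then gray out the masked area to form the agnostic frame. The single-frame sketch below only illustrates this idea; it is not ```./tools/sam_agnostic.py```, and the checkpoint path, box prompt, and blur size are assumptions:

```python
# Hypothetical single-frame sketch of the SAM + Gaussian blur idea (not ./tools/sam_agnostic.py).
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM checkpoint (downloaded separately from the segment-anything repo).
sam = sam_model_registry["vit_h"](checkpoint="./ckpts/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

frame = cv2.imread("frame_0001.png")                        # one frame of the person video
predictor.set_image(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

# Rough box around the garment (x1, y1, x2, y2); in practice this could come
# from a detector or a manually chosen region of interest.
box = np.array([120, 150, 420, 620])
masks, _, _ = predictor.predict(box=box, multimask_output=False)
garment_mask = (masks[0] * 255).astype(np.uint8)

# Blur the mask so the agnostic area generously covers the garment, then re-binarize.
soft = cv2.GaussianBlur(garment_mask, (51, 51), 0)
agnostic_mask = np.where(soft > 0, 255, 0).astype(np.uint8)

# Gray out the masked region to build the agnostic frame.
agnostic = frame.copy()
agnostic[agnostic_mask > 0] = 128

cv2.imwrite("agnostic_0001.png", agnostic)
cv2.imwrite("agnostic_mask_0001.png", agnostic_mask)
```

For a full video, the same steps would run on every frame extracted from ```videos/*.mp4```, with the per-frame outputs re-encoded into the ```agnostic``` and ```agnostic_mask``` clips.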

### Densepose video
See [vid2densepose](https://github.com/Flode-Labs/vid2densepose) (thanks to the authors).

### Cloth mask
Any segmentation tool, such as [SAM](https://github.com/facebookresearch/segment-anything), works for obtaining the cloth mask.

## BibTeX
```text
@misc{fang2024vivid,
      title={ViViD: Video Virtual Try-on using Diffusion Models},
      author={Zixun Fang and Wei Zhai and Aimin Su and Hongliang Song and Kai Zhu and Mao Wang and Yu Chen and Zhiheng Liu and Yang Cao and Zheng-Jun Zha},
      year={2024},
      eprint={2405.11794},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Contact Us
**Zixun Fang**: [[email protected]](mailto:[email protected])
**Yu Chen**: [[email protected]](mailto:[email protected])