Upload 4 files
- README.md +190 -12
- customize.py +1209 -0
- requirements.txt +7 -6
- reward_guidance.py +576 -0
README.md
CHANGED
@@ -1,12 +1,190 @@
# Direct Consistency Optimization for Compositional Text-to-Image Personalization

This is an official implementation of the paper "Direct Consistency Optimization for Compositional Text-to-Image Personalization".

- [paper](https://arxiv.org/abs/2402.12004)
- [project page](https://dco-t2i.github.io/)

Our code is based on [diffusers](https://github.com/huggingface/diffusers); we fine-tune [SDXL](https://huggingface.co/docs/diffusers/using-diffusers/sdxl) using LoRA from the [peft](https://github.com/huggingface/peft) library.

## Installation

We recommend installing the latest version of diffusers from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then go to the repository and install the requirements via:
```bash
cd dco/
pip install -r requirements.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or, for a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or, if your environment doesn't support an interactive shell (e.g., a notebook):

```python
from accelerate.utils import write_basic_config
write_basic_config()
```

When running `accelerate config`, setting torch compile mode to True can give dramatic speedups.
Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.

## Subject Personalization

### Data preparation

We encourage using **comprehensive captions** for text-to-image personalization, which provide descriptive visual details on the attributes, backgrounds, etc. Also, we do not use a rare token identifier (e.g., 'sks'), which may inherit unfavorable semantics. We additionally train textual embeddings to enhance subject fidelity. See the paper for details.

In `dataset/dreambooth/config.json`, we provide an example of the comprehensive captions that we used:
```json
"comprehensive": {
    "images": [
        "dataset/dreambooth/dog/00.jpg",
        "dataset/dreambooth/dog/01.jpg",
        "dataset/dreambooth/dog/02.jpg",
        "dataset/dreambooth/dog/03.jpg",
        "dataset/dreambooth/dog/04.jpg"
    ],
    "prompts": [
        "a closed-up photo of a <dog> in front of trees, macro style",
        "a low-angle photo of a <dog> sitting on a ledge in front of blossom trees, macro style",
        "a photo of a <dog> sitting on a ledge in front of red wall and tree, macro style",
        "a photo of side-view of a <dog> sitting on a ledge in front of red wall and tree, macro style",
        "a photo of a <dog> sitting on a street, in front of lush trees, macro style"
    ],
    "base_prompts": [
        "a closed-up photo of a dog in front of trees, macro style",
        "a low-angle photo of a dog sitting on a ledge in front of blossom trees, macro style",
        "a photo of a dog sitting on a ledge in front of red wall and tree, macro style",
        "a photo of side-view of a dog sitting on a ledge in front of red wall and tree, macro style",
        "a photo of a dog sitting on a street, in front of lush trees, macro style"
    ],
    "inserting_tokens": ["<dog>"],
    "initializer_tokens": ["dog"]
}
```
`images` is a list of paths to the training images, `prompts` is a list of training prompts containing the new tokens (*e.g.,* `<dog>`), and `base_prompts` is a list of the same training prompts without the new tokens. `inserting_tokens` is the list of tokens to learn, and `initializer_tokens` is the list of tokens used to initialize them. If you do not want an initializer token, then put an empty string (*i.e.,* `""`) in `initializer_tokens`. Note that the norm of each token embedding is rescaled after every iteration to match its original norm, as sketched below.
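
This rescaling is implemented in `TokenEmbeddingsHandler.retract_embeddings` in `customize.py`. A minimal sketch of the idea (tensor names are illustrative):

```python
import torch
import torch.nn.functional as F

# After each optimizer step: keep the direction the optimizer found for each
# new token embedding, but reset its norm to the norm it had at initialization.
@torch.no_grad()
def renorm_new_tokens(embedding_weight, train_ids, target_norms):
    rows = embedding_weight[train_ids]  # (num_new_tokens, dim)
    embedding_weight[train_ids] = F.normalize(rows, dim=-1) * target_norms.view(-1, 1)
```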

### Training scripts

To train the model, run the following command:
```bash
accelerate launch customize.py \
  --config_dir="dataset/dreambooth/dog/config.json" \
  --config_name="comprehensive" \
  --output_dir="./output" \
  --learning_rate=5e-5 \
  --text_encoder_lr=5e-6 \
  --dcoloss_beta=1000 \
  --rank=32 \
  --max_train_steps=2000 \
  --checkpointing_steps=1000 \
  --seed="0" \
  --train_text_encoder_ti
```
Note that `--dcoloss_beta` is the hyperparameter for the DCO loss (1000-2000 works fine in our experiments); a sketch of the objective is given below. `--train_text_encoder_ti` enables learning the textual embeddings.
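
For intuition, the DCO objective compares the diffusion loss of the fine-tuned (LoRA) model against that of the frozen pretrained model on the same noised latents, with `--dcoloss_beta` scaling the gap. A minimal sketch (variable names are illustrative; see the training loop in `customize.py` for the actual computation):

```python
import torch.nn.functional as F

# DCO rewards the fine-tuned model for fitting the training data better than
# the frozen reference model; beta controls the sharpness of this preference.
def dco_loss(loss_model, loss_refer, beta=1000.0):
    # loss_model, loss_refer: per-sample denoising MSE under the same noise and timestep
    return (-F.logsigmoid(-beta * (loss_model - loss_refer))).mean()
```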

### Inference

To run inference with reward guidance, import `RGPipe` from `reward_guidance.py`, then load the LoRA weights and textual embeddings:
```python
import torch
from safetensors.torch import load_file
from reward_guidance import RGPipe

pipe = RGPipe.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
lora_dir = "OUTPUT_DIR"  # saved LoRA directory
pipe.load_lora_weights(lora_dir)

inserting_tokens = ["<dog>"]  # new tokens to load
state_dict = load_file(lora_dir + "/learned_embeds.safetensors")
pipe.load_textual_inversion(state_dict["clip_l"], token=inserting_tokens, text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)
pipe.load_textual_inversion(state_dict["clip_g"], token=inserting_tokens, text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)

prompt = "A <dog> playing saxophone in sticker style"  # prompt including new tokens
base_prompt = "A dog playing saxophone in sticker style"  # prompt without new tokens

seed = 42
generator = torch.Generator("cuda").manual_seed(seed)

rg_scale = 3.0  # reward guidance scale; 0.0 for original CFG sampling
if rg_scale > 0.0:
    image = pipe.my_gen(
        prompt=base_prompt,
        prompt_ti=prompt,
        generator=generator,
        cross_attention_kwargs={"scale": 1.0},
        guidance_scale=7.5,
        guidance_scale_lora=rg_scale,
    ).images[0]
else:
    image = pipe(
        prompt=prompt,
        generator=generator,
        cross_attention_kwargs={"scale": 1.0},
        guidance_scale=7.5,
    ).images[0]
image
```
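
Conceptually, `guidance_scale_lora` adds a second guidance term that extrapolates from the pretrained model's prediction toward the fine-tuned model's prediction, on top of standard classifier-free guidance. A rough sketch of the combined noise estimate (our reading of the paper, not the exact `RGPipe` internals):

```python
def reward_guided_noise(eps_pre_uncond, eps_pre_cond, eps_lora_cond, w_cfg=7.5, w_rg=3.0):
    # Standard CFG on the pretrained model, plus a term steering toward the
    # fine-tuned (LoRA) model's conditional prediction.
    return (
        eps_pre_uncond
        + w_cfg * (eps_pre_cond - eps_pre_uncond)
        + w_rg * (eps_lora_cond - eps_pre_cond)
    )
```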

## Style Personalization

### Data Preparation

We use the same format as before, but we do not train textual embeddings for style personalization. The example config is given by:
```json
"style": {
    "images": ["dataset/styledrop/style.jpg"],
    "prompts": ["A person working on a laptop in flat cartoon illustration style"]
}
```

### Training scripts
```bash
accelerate launch customize.py \
  --config_dir="dataset/styledrop/config.json" \
  --config_name="style_1" \
  --output_dir="./output_style" \
  --learning_rate=5e-5 \
  --dcoloss_beta=1000 \
  --rank=64 \
  --max_train_steps=1000 \
  --seed="0" \
  --offset_noise=0.1
```
Note that we use `--offset_noise=0.1` to learn the solid color of the style image, as sketched below.
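
Offset noise shifts the per-channel mean of the sampled noise, which helps the model reproduce flat, solid-color regions. In the training loop of `customize.py`, it amounts to:

```python
import torch

# Add a per-(sample, channel) constant offset to the Gaussian noise so the
# model also learns low-frequency (solid-color) content.
def offset_noise(latents, offset=0.1):
    noise = torch.randn_like(latents)
    return noise + offset * torch.randn(
        latents.shape[0], latents.shape[1], 1, 1, device=latents.device
    )
```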

Inference is the same as above.

## My Subject in My Style

DCO fine-tuned models can be easily merged without any post-processing. Simply add the following code during inference (with `inserting_tokens` and the LoRA directories defined as before):
```python
pipe.load_lora_weights(subject_lora_dir, adapter_name="subject")
# if the subject LoRA was trained with --train_text_encoder_ti, also load its embeddings:
state_dict = load_file(subject_lora_dir + "/learned_embeds.safetensors")
pipe.load_textual_inversion(state_dict["clip_l"], token=inserting_tokens, text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)
pipe.load_textual_inversion(state_dict["clip_g"], token=inserting_tokens, text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)

pipe.load_lora_weights(style_lora_dir, adapter_name="style")
pipe.set_adapters(["subject", "style"], adapter_weights=[1.0, 1.0])
```
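
The `adapter_weights` passed to `set_adapters` set the relative strength of the subject and style LoRAs and can be adjusted to trade off subject fidelity against style strength.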

## BibTeX
```bibtex
@article{lee2024direct,
    title={Direct Consistency Optimization for Compositional Text-to-Image Personalization},
    author={Lee, Kyungmin and Kwak, Sangkyung and Sohn, Kihyuk and Shin, Jinwoo},
    journal={arXiv preprint arXiv:2402.12004},
    year={2024}
}
```
customize.py
ADDED
@@ -0,0 +1,1209 @@

import argparse
import itertools
import logging
import math
import os
import shutil
import warnings
from pathlib import Path
from typing import List, Optional
import json
import numpy as np
import torch
import torch.nn.functional as F

import torch.utils.checkpoint
import transformers
from accelerate import Accelerator
from accelerate.logging import get_logger
from accelerate.utils import DistributedDataParallelKwargs, ProjectConfiguration, set_seed
from peft import LoraConfig
from peft.utils import get_peft_model_state_dict
from PIL import Image
from PIL.ImageOps import exif_transpose
from safetensors.torch import save_file
from torch.utils.data import Dataset
from torchvision import transforms
from tqdm.auto import tqdm
from transformers import AutoTokenizer, PretrainedConfig

import diffusers
from diffusers import (
    AutoencoderKL,
    DDPMScheduler,
    DPMSolverMultistepScheduler,
    StableDiffusionXLPipeline,
    UNet2DConditionModel,
)
from diffusers.loaders import LoraLoaderMixin
from diffusers.optimization import get_scheduler
from diffusers.training_utils import compute_snr
from diffusers.utils import (
    convert_state_dict_to_diffusers,
    is_wandb_available,
)

logger = get_logger(__name__)

def import_model_class_from_model_name_or_path(
    pretrained_model_name_or_path: str, revision: str, subfolder: str = "text_encoder"
):
    text_encoder_config = PretrainedConfig.from_pretrained(
        pretrained_model_name_or_path, subfolder=subfolder, revision=revision
    )
    model_class = text_encoder_config.architectures[0]

    if model_class == "CLIPTextModel":
        from transformers import CLIPTextModel

        return CLIPTextModel
    elif model_class == "CLIPTextModelWithProjection":
        from transformers import CLIPTextModelWithProjection

        return CLIPTextModelWithProjection
    else:
        raise ValueError(f"{model_class} is not supported.")

def parse_args(input_args=None):
    parser = argparse.ArgumentParser(description="Simple example of a training script.")
    # pretrained model config
    parser.add_argument("--pretrained_model_name_or_path", type=str, default="stabilityai/stable-diffusion-xl-base-1.0")
    parser.add_argument("--pretrained_vae_model_name_or_path", type=str, default="madebyollin/sdxl-vae-fp16-fix")
    parser.add_argument("--revision", type=str, default=None)
    parser.add_argument("--variant", type=str, default=None)
    # data config
    parser.add_argument("--config_dir", type=str, default="")
    parser.add_argument("--config_name", type=str, default="")
    # validation config
    parser.add_argument("--validation_prompt", type=str, default=None, help="A prompt that is used during validation to verify that the model is learning.")
    parser.add_argument("--num_validation_images", type=int, default=0, help="Number of images that should be generated during validation with `validation_prompt`.")
    parser.add_argument("--validation_epochs", type=int, default=50000)
    # use prior preservation
    parser.add_argument("--with_prior_preservation", default=False, action="store_true", help="Flag to add prior preservation loss.")
    parser.add_argument("--num_class_images", type=int, default=0, help="Number of class images for prior preservation; TrainDataset reads this when --with_prior_preservation is set.")
    parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
    # save config
    parser.add_argument("--output_dir", type=str, default="outdir", help="The output directory where the model predictions and checkpoints will be written.")
    parser.add_argument("--checkpointing_steps", type=int, default=500, help="Save a checkpoint of the training state every X updates.")
    parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
    # dataloader config
    parser.add_argument("--resolution", type=int, default=1024, help="The resolution for input images; all images in the train/validation dataset will be resized to this.")
    parser.add_argument("--crops_coords_top_left_h", type=int, default=0, help="Coordinate (height) to be included in the crop coordinate embeddings needed by the SDXL UNet.")
    parser.add_argument("--crops_coords_top_left_w", type=int, default=0, help="Coordinate (width) to be included in the crop coordinate embeddings needed by the SDXL UNet.")
    parser.add_argument("--center_crop", default=False, action="store_true", help="Whether to center crop the input images to the resolution. If not set, the images will be randomly cropped. The images will be resized to the resolution first before cropping.")
    parser.add_argument("--train_batch_size", type=int, default=1, help="Batch size (per device) for the training dataloader.")
    parser.add_argument("--dataloader_num_workers", type=int, default=0, help="Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process.")
    parser.add_argument("--num_train_epochs", type=int, default=1)
    parser.add_argument("--max_train_steps", type=int, default=1000, help="Total number of training steps to perform. If provided, overrides num_train_epochs.")
    parser.add_argument("--checkpoints_total_limit", type=int, default=None, help="Max number of checkpoints to store.")
    parser.add_argument("--resume_from_checkpoint", type=str, default=None, help="Whether training should be resumed from a previous checkpoint.")
    # train config
    parser.add_argument("--dcoloss_beta", type=float, default=1000, help="Beta value for the DCO loss; use -1 if not using the DCO loss.")
    parser.add_argument("--train_text_encoder_ti", action="store_true", help="Whether to use textual inversion.")
    parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder. If set, the text encoder should be float32 precision.")
    # optimizer config
    parser.add_argument("--gradient_accumulation_steps", type=int, default=1, help="Number of update steps to accumulate before performing a backward/update pass.")
    parser.add_argument("--gradient_checkpointing", action="store_true", help="Whether or not to use gradient checkpointing to save memory at the expense of a slower backward pass.")
    parser.add_argument("--learning_rate", type=float, default=5e-5, help="Initial learning rate (after the potential warmup period) to use.")
    parser.add_argument("--text_encoder_lr", type=float, default=5e-6, help="Text encoder learning rate to use.")
    parser.add_argument("--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.")
    parser.add_argument("--lr_scheduler", type=str, default="constant", help='The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"].')
    parser.add_argument("--snr_gamma", type=float, default=None, help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. More details here: https://arxiv.org/abs/2303.09556.")
    parser.add_argument("--lr_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler.")
    parser.add_argument("--lr_num_cycles", type=int, default=1, help="Number of hard resets of the lr in the cosine_with_restarts scheduler.")
    parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.")
    parser.add_argument("--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes. Ignored if optimizer is not set to AdamW.")
    parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam and Prodigy optimizers.")
    parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam and Prodigy optimizers.")
    parser.add_argument("--adam_weight_decay", type=float, default=1e-04, help="Weight decay to use for unet params.")
    parser.add_argument("--adam_weight_decay_text_encoder", type=float, default=None, help="Weight decay to use for the text_encoder.")
    parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam and Prodigy optimizers.")
    parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
    # logging config
    parser.add_argument("--logging_dir", type=str, default="logs", help="[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***.")
    parser.add_argument("--allow_tf32", action="store_true", help="Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices")
    parser.add_argument("--report_to", type=str, default="tensorboard", help='The integration to report the results and logs to. Supported platforms are `"tensorboard"` (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.')
    parser.add_argument("--mixed_precision", type=str, default="fp16", choices=["no", "fp16", "bf16"], help="Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10 and an Nvidia Ampere GPU. Defaults to the value of the accelerate config of the current system or the flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config.")
    parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
    parser.add_argument("--rank", type=int, default=32, help="The dimension of the LoRA update matrices.")
    parser.add_argument("--offset_noise", type=float, default=0.0)

    if input_args is not None:
        args = parser.parse_args(input_args)
    else:
        args = parser.parse_args()

    env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
    if env_local_rank != -1 and env_local_rank != args.local_rank:
        args.local_rank = env_local_rank

    return args
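
# Note: TokenEmbeddingsHandler (below) manages the textual-inversion tokens: it
# registers the new tokens with both SDXL tokenizers, initializes their embeddings
# from the given initializer tokens (or from scaled Gaussian noise for ""), and,
# via `retract_embeddings`, restores all non-trained embedding rows after each
# optimizer step while renorming the trained rows to their initialization norms.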
# Taken from https://github.com/replicate/cog-sdxl/blob/main/dataset_and_utils.py
class TokenEmbeddingsHandler:
    def __init__(self, text_encoders, tokenizers):
        self.text_encoders = text_encoders
        self.tokenizers = tokenizers

        self.train_ids = None
        self.inserting_tokens = None
        self.embeddings_settings = {}

    def initialize_new_tokens(self, inserting_tokens, initializer_tokens):
        idx = 0
        for tokenizer, text_encoder in zip(self.tokenizers, self.text_encoders):
            assert isinstance(inserting_tokens, list), "inserting_tokens should be a list of strings."
            assert all(
                isinstance(tok, str) for tok in inserting_tokens
            ), "All elements in inserting_tokens should be strings."

            self.inserting_tokens = inserting_tokens
            special_tokens_dict = {"additional_special_tokens": self.inserting_tokens}
            tokenizer.add_special_tokens(special_tokens_dict)
            text_encoder.resize_token_embeddings(len(tokenizer))

            self.train_ids = tokenizer.convert_tokens_to_ids(self.inserting_tokens)
            std_token_embedding = text_encoder.text_model.embeddings.token_embedding.weight.data.std()
            self.embeddings_settings[f"std_token_embedding_{idx}"] = std_token_embedding
            print(f"{idx} text encoder's std_token_embedding: {std_token_embedding}")

            embeddings = []
            embeddings_norm = []
            for initializer_token in initializer_tokens:
                if initializer_token == "":
                    emb = torch.randn(1, text_encoder.text_model.config.hidden_size).to(device=self.device).to(dtype=self.dtype) * std_token_embedding
                    embeddings.append(emb)
                    embeddings_norm.append(std_token_embedding)
                else:
                    initializer_token_id = tokenizer.encode(initializer_token, add_special_tokens=False)
                    emb = text_encoder.text_model.embeddings.token_embedding.weight.data[initializer_token_id]
                    embeddings.append(emb)
                    embeddings_norm.append(emb.norm().item())

            embeddings = torch.cat(embeddings, dim=0)
            text_encoder.text_model.embeddings.token_embedding.weight.data[self.train_ids] = embeddings
            embeddings_norm = torch.tensor(embeddings_norm).unsqueeze(1)
            self.embeddings_settings[f"token_embedding_norm_{idx}"] = embeddings_norm

            self.embeddings_settings[
                f"original_embeddings_{idx}"
            ] = text_encoder.text_model.embeddings.token_embedding.weight.data.clone()

            inu = torch.ones((len(tokenizer),), dtype=torch.bool)
            inu[self.train_ids] = False

            self.embeddings_settings[f"index_no_updates_{idx}"] = inu
            idx += 1

    def save_embeddings(self, file_path: str):
        assert self.train_ids is not None, "Initialize new tokens before saving embeddings."
        tensors = {}
        # text_encoder_0 - CLIP ViT-L/14, text_encoder_1 - CLIP ViT-G/14
        idx_to_text_encoder_name = {0: "clip_l", 1: "clip_g"}
        for idx, text_encoder in enumerate(self.text_encoders):
            assert text_encoder.text_model.embeddings.token_embedding.weight.data.shape[0] == len(
                self.tokenizers[0]
            ), "Tokenizers should be the same."
            new_token_embeddings = text_encoder.text_model.embeddings.token_embedding.weight.data[self.train_ids]
            tensors[idx_to_text_encoder_name[idx]] = new_token_embeddings
        save_file(tensors, file_path)

    @property
    def dtype(self):
        return self.text_encoders[0].dtype

    @property
    def device(self):
        return self.text_encoders[0].device

    @torch.no_grad()
    def retract_embeddings(self):
        for idx, text_encoder in enumerate(self.text_encoders):
            index_no_updates = self.embeddings_settings[f"index_no_updates_{idx}"]
            text_encoder.text_model.embeddings.token_embedding.weight.data[index_no_updates] = (
                self.embeddings_settings[f"original_embeddings_{idx}"][index_no_updates]
                .to(device=text_encoder.device)
                .to(dtype=text_encoder.dtype)
            )

            index_updates = ~index_no_updates
            new_embeddings = text_encoder.text_model.embeddings.token_embedding.weight.data[index_updates]
            new_embeddings = F.normalize(new_embeddings, dim=-1) * self.embeddings_settings[f"token_embedding_norm_{idx}"].view(-1, 1).to(device=text_encoder.device)
            text_encoder.text_model.embeddings.token_embedding.weight.data[index_updates] = new_embeddings.to(device=text_encoder.device).to(dtype=text_encoder.dtype)
class TrainDataset(Dataset):
|
239 |
+
def __init__(self, args):
|
240 |
+
self.size = args.resolution
|
241 |
+
self.center_crop = args.center_crop
|
242 |
+
self.config_dir = args.config_dir
|
243 |
+
self.config_name = args.config_name
|
244 |
+
self.train_with_dco_loss = (args.dcoloss_beta > 0.)
|
245 |
+
self.train_text_encoder_ti = args.train_text_encoder_ti
|
246 |
+
self.with_prior_preservation = args.with_prior_preservation
|
247 |
+
|
248 |
+
with open(self.config_dir, 'r') as data_config:
|
249 |
+
data_cfg = json.load(data_config)[self.config_name]
|
250 |
+
|
251 |
+
self.instance_images = [Image.open(path) for path in data_cfg["images"]]
|
252 |
+
self.instance_prompts = [prompt for prompt in data_cfg["prompts"]]
|
253 |
+
|
254 |
+
if self.train_text_encoder_ti and self.train_with_dco_loss:
|
255 |
+
self.base_prompts = [prompt for prompt in data_cfg["base_prompts"]]
|
256 |
+
|
257 |
+
self.num_instance_images = len(self.instance_images)
|
258 |
+
self._length = self.num_instance_images
|
259 |
+
|
260 |
+
if self.with_prior_preservation:
|
261 |
+
self.num_class_images = args.num_class_images
|
262 |
+
class_dir = data_cfg["class_images_dir"]
|
263 |
+
self.class_images = [Image.open(class_dir+f"/{i}.png") for i in range(self.num_class_images)]
|
264 |
+
self.class_prompts = [prompt for prompt in data_cfg["class_prompts"]]
|
265 |
+
self._length = max(self.num_class_images, self.num_instance_images)
|
266 |
+
|
267 |
+
self.image_transforms = transforms.Compose(
|
268 |
+
[
|
269 |
+
transforms.Resize(self.size, interpolation=transforms.InterpolationMode.BILINEAR),
|
270 |
+
transforms.CenterCrop(self.size) if self.center_crop else transforms.RandomCrop(self.size),
|
271 |
+
transforms.ToTensor(),
|
272 |
+
transforms.Normalize([0.5], [0.5]),
|
273 |
+
]
|
274 |
+
)
|
275 |
+
|
276 |
+
def __len__(self):
|
277 |
+
return self._length
|
278 |
+
|
279 |
+
def __getitem__(self, index):
|
280 |
+
example = {}
|
281 |
+
instance_image = self.instance_images[index % self.num_instance_images]
|
282 |
+
instance_image = exif_transpose(instance_image)
|
283 |
+
|
284 |
+
if not instance_image.mode == "RGB":
|
285 |
+
instance_image = instance_image.convert("RGB")
|
286 |
+
example["instance_images"] = self.image_transforms(instance_image)
|
287 |
+
|
288 |
+
prompt = self.instance_prompts[index % self.num_instance_images]
|
289 |
+
example["instance_prompt"] = prompt
|
290 |
+
if self.train_text_encoder_ti and self.train_with_dco_loss:
|
291 |
+
base_prompt = self.base_prompts[index % self.num_instance_images]
|
292 |
+
example["base_prompt"] = base_prompt
|
293 |
+
|
294 |
+
if self.with_prior_preservation:
|
295 |
+
class_image = self.class_images[index % self.num_class_images]
|
296 |
+
class_image = exif_transpose(class_image)
|
297 |
+
if not class_image.mode == "RGB":
|
298 |
+
class_image = class_image.convert("RGB")
|
299 |
+
example["class_images"] = self.image_transforms(class_image)
|
300 |
+
example["class_prompt"] = self.class_prompt
|
301 |
+
|
302 |
+
return example
|
303 |
+
|
304 |
+
|
305 |
+

def collate_fn(examples, args):
    pixel_values = [example["instance_images"] for example in examples]
    prompts = [example["instance_prompt"] for example in examples]

    if args.train_text_encoder_ti and (args.dcoloss_beta > 0.):
        base_prompts = [example["base_prompt"] for example in examples]

    if args.with_prior_preservation:
        pixel_values += [example["class_images"] for example in examples]
        prompts += [example["class_prompt"] for example in examples]
        if args.train_text_encoder_ti and (args.dcoloss_beta > 0.0):
            base_prompts += [example["class_prompt"] for example in examples]

    pixel_values = torch.stack(pixel_values)
    pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()

    batch = {"pixel_values": pixel_values, "prompts": prompts}
    if args.train_text_encoder_ti and (args.dcoloss_beta > 0.0):
        batch.update({"base_prompts": base_prompts})
    return batch


def tokenize_prompt(tokenizer, prompt):
    text_inputs = tokenizer(
        prompt,
        padding="max_length",
        max_length=tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    text_input_ids = text_inputs.input_ids
    return text_input_ids

# Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt
def encode_prompt(text_encoders, tokenizers, prompt, text_input_ids_list=None):
    prompt_embeds_list = []

    for i, text_encoder in enumerate(text_encoders):
        if tokenizers is not None:
            tokenizer = tokenizers[i]
            text_input_ids = tokenize_prompt(tokenizer, prompt)
        else:
            assert text_input_ids_list is not None
            text_input_ids = text_input_ids_list[i]

        prompt_embeds = text_encoder(
            text_input_ids.to(text_encoder.device),
            output_hidden_states=True,
        )

        # We are only ALWAYS interested in the pooled output of the final text encoder
        pooled_prompt_embeds = prompt_embeds[0]
        prompt_embeds = prompt_embeds.hidden_states[-2]
        bs_embed, seq_len, _ = prompt_embeds.shape
        prompt_embeds = prompt_embeds.view(bs_embed, seq_len, -1)
        prompt_embeds_list.append(prompt_embeds)

    prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
    pooled_prompt_embeds = pooled_prompt_embeds.view(bs_embed, -1)
    return prompt_embeds, pooled_prompt_embeds
def main(args):
|
368 |
+
logging_dir = Path(args.output_dir, args.logging_dir)
|
369 |
+
|
370 |
+
accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
|
371 |
+
kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
|
372 |
+
accelerator = Accelerator(
|
373 |
+
gradient_accumulation_steps=args.gradient_accumulation_steps,
|
374 |
+
mixed_precision=args.mixed_precision,
|
375 |
+
log_with=args.report_to,
|
376 |
+
project_config=accelerator_project_config,
|
377 |
+
kwargs_handlers=[kwargs],
|
378 |
+
)
|
379 |
+
|
380 |
+
if args.report_to == "wandb":
|
381 |
+
if not is_wandb_available():
|
382 |
+
raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
|
383 |
+
import wandb
|
384 |
+
|
385 |
+
# Make one log on every process with the configuration for debugging.
|
386 |
+
logging.basicConfig(
|
387 |
+
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
|
388 |
+
datefmt="%m/%d/%Y %H:%M:%S",
|
389 |
+
level=logging.INFO,
|
390 |
+
)
|
391 |
+
logger.info(accelerator.state, main_process_only=False)
|
392 |
+
if accelerator.is_local_main_process:
|
393 |
+
transformers.utils.logging.set_verbosity_warning()
|
394 |
+
diffusers.utils.logging.set_verbosity_info()
|
395 |
+
else:
|
396 |
+
transformers.utils.logging.set_verbosity_error()
|
397 |
+
diffusers.utils.logging.set_verbosity_error()
|
398 |
+
|
399 |
+
# If passed along, set the training seed now.
|
400 |
+
if args.seed is not None:
|
401 |
+
set_seed(args.seed)
|
402 |
+
|
403 |
+
# Handle the repository creation
|
404 |
+
if accelerator.is_main_process:
|
405 |
+
if args.output_dir is not None:
|
406 |
+
os.makedirs(args.output_dir, exist_ok=True)
|
407 |
+
|
408 |
+
# Load the tokenizers
|
409 |
+
tokenizer_one = AutoTokenizer.from_pretrained(
|
410 |
+
args.pretrained_model_name_or_path,
|
411 |
+
subfolder="tokenizer",
|
412 |
+
revision=args.revision,
|
413 |
+
variant=args.variant,
|
414 |
+
use_fast=False,
|
415 |
+
)
|
416 |
+
tokenizer_two = AutoTokenizer.from_pretrained(
|
417 |
+
args.pretrained_model_name_or_path,
|
418 |
+
subfolder="tokenizer_2",
|
419 |
+
revision=args.revision,
|
420 |
+
variant=args.variant,
|
421 |
+
use_fast=False,
|
422 |
+
)
|
423 |
+
|
424 |
+
# import correct text encoder classes
|
425 |
+
text_encoder_cls_one = import_model_class_from_model_name_or_path(
|
426 |
+
args.pretrained_model_name_or_path, args.revision
|
427 |
+
)
|
428 |
+
text_encoder_cls_two = import_model_class_from_model_name_or_path(
|
429 |
+
args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2"
|
430 |
+
)
|
431 |
+
|
432 |
+
# Load scheduler and models
|
433 |
+
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
|
434 |
+
text_encoder_one = text_encoder_cls_one.from_pretrained(
|
435 |
+
args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, variant=args.variant
|
436 |
+
)
|
437 |
+
text_encoder_two = text_encoder_cls_two.from_pretrained(
|
438 |
+
args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision, variant=args.variant
|
439 |
+
)
|
440 |
+
vae_path = (
|
441 |
+
args.pretrained_model_name_or_path
|
442 |
+
if args.pretrained_vae_model_name_or_path is None
|
443 |
+
else args.pretrained_vae_model_name_or_path
|
444 |
+
)
|
445 |
+
vae = AutoencoderKL.from_pretrained(
|
446 |
+
vae_path,
|
447 |
+
subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None,
|
448 |
+
revision=args.revision,
|
449 |
+
variant=args.variant,
|
450 |
+
)
|
451 |
+
vae_scaling_factor = vae.config.scaling_factor
|
452 |
+
unet = UNet2DConditionModel.from_pretrained(
|
453 |
+
args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, variant=args.variant
|
454 |
+
)
|
455 |
+
|
456 |
+
if args.train_text_encoder_ti:
|
457 |
+
with open(args.config_dir, 'r') as data_config:
|
458 |
+
data_cfg = json.load(data_config)[args.config_name]
|
459 |
+
inserting_tokens = data_cfg["inserting_tokens"]
|
460 |
+
initializer_tokens = data_cfg["initializer_tokens"]
|
461 |
+
|
462 |
+
logger.info(f"List of token identifiers: {inserting_tokens}")
|
463 |
+
# initialize the new tokens for textual inversion
|
464 |
+
embedding_handler = TokenEmbeddingsHandler(
|
465 |
+
[text_encoder_one, text_encoder_two], [tokenizer_one, tokenizer_two]
|
466 |
+
)
|
467 |
+
embedding_handler.initialize_new_tokens(
|
468 |
+
inserting_tokens=inserting_tokens,
|
469 |
+
initializer_tokens=initializer_tokens
|
470 |
+
)
|
471 |
+
|
472 |
+
# We only train the additional adapter LoRA layers
|
473 |
+
vae.requires_grad_(False)
|
474 |
+
text_encoder_one.requires_grad_(False)
|
475 |
+
text_encoder_two.requires_grad_(False)
|
476 |
+
unet.requires_grad_(False)
|
477 |
+
|
478 |
+
# For mixed precision training we cast all non-trainable weights (vae, non-lora text_encoder and non-lora unet) to half-precision
|
479 |
+
# as these weights are only used for inference, keeping weights in full precision is not required.
|
480 |
+
weight_dtype = torch.float32
|
481 |
+
if accelerator.mixed_precision == "fp16":
|
482 |
+
weight_dtype = torch.float16
|
483 |
+
elif accelerator.mixed_precision == "bf16":
|
484 |
+
weight_dtype = torch.bfloat16
|
485 |
+
|
486 |
+
# Move unet, vae and text_encoder to device and cast to weight_dtype
|
487 |
+
unet.to(accelerator.device, dtype=weight_dtype)
|
488 |
+
|
489 |
+
# The VAE is always in float32 to avoid NaN losses.
|
490 |
+
vae.to(accelerator.device, dtype=torch.float32)
|
491 |
+
|
492 |
+
text_encoder_one.to(accelerator.device, dtype=weight_dtype)
|
493 |
+
text_encoder_two.to(accelerator.device, dtype=weight_dtype)
|
494 |
+
|
495 |
+
if args.gradient_checkpointing:
|
496 |
+
unet.enable_gradient_checkpointing()
|
497 |
+
if args.train_text_encoder:
|
498 |
+
text_encoder_one.gradient_checkpointing_enable()
|
499 |
+
text_encoder_two.gradient_checkpointing_enable()
|
500 |
+
|
501 |
+
# now we will add new LoRA weights to the attention layers
|
502 |
+
unet_lora_config = LoraConfig(
|
503 |
+
r=args.rank,
|
504 |
+
lora_alpha=args.rank,
|
505 |
+
init_lora_weights="gaussian",
|
506 |
+
target_modules=["to_k", "to_q", "to_v", "to_out.0"],
|
507 |
+
)
|
508 |
+
unet.add_adapter(unet_lora_config)
|
509 |
+
|
510 |
+
# The text encoder comes from 🤗 transformers, so we cannot directly modify it.
|
511 |
+
# So, instead, we monkey-patch the forward calls of its attention-blocks.
|
512 |
+
if args.train_text_encoder:
|
513 |
+
text_lora_config = LoraConfig(
|
514 |
+
r=args.rank,
|
515 |
+
lora_alpha=args.rank,
|
516 |
+
init_lora_weights="gaussian",
|
517 |
+
target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
|
518 |
+
)
|
519 |
+
text_encoder_one.add_adapter(text_lora_config)
|
520 |
+
text_encoder_two.add_adapter(text_lora_config)
|
521 |
+
# if we use textual inversion, we freeze all parameters except for the token embeddings
|
522 |
+
elif args.train_text_encoder_ti:
|
523 |
+
text_lora_parameters_one = []
|
524 |
+
for name, param in text_encoder_one.named_parameters():
|
525 |
+
if "token_embedding" in name:
|
526 |
+
# ensure that dtype is float32, even if rest of the model that isn't trained is loaded in fp16
|
527 |
+
param = param.to(dtype=torch.float32)
|
528 |
+
param.requires_grad = True
|
529 |
+
text_lora_parameters_one.append(param)
|
530 |
+
else:
|
531 |
+
param.requires_grad = False
|
532 |
+
text_lora_parameters_two = []
|
533 |
+
for name, param in text_encoder_two.named_parameters():
|
534 |
+
if "token_embedding" in name:
|
535 |
+
# ensure that dtype is float32, even if rest of the model that isn't trained is loaded in fp16
|
536 |
+
param = param.to(dtype=torch.float32)
|
537 |
+
param.requires_grad = True
|
538 |
+
text_lora_parameters_two.append(param)
|
539 |
+
else:
|
540 |
+
param.requires_grad = False
|
541 |
+
|
542 |
+
# Make sure the trainable params are in float32.
|
543 |
+
if args.mixed_precision == "fp16":
|
544 |
+
models = [unet]
|
545 |
+
if args.train_text_encoder:
|
546 |
+
models.extend([text_encoder_one, text_encoder_two])
|
547 |
+
for model in models:
|
548 |
+
for param in model.parameters():
|
549 |
+
# only upcast trainable parameters (LoRA) into fp32
|
550 |
+
if param.requires_grad:
|
551 |
+
param.data = param.to(torch.float32)
|
552 |
+
|
553 |
+
# create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
|
554 |
+
def save_model_hook(models, weights, output_dir):
|
555 |
+
if accelerator.is_main_process:
|
556 |
+
# there are only two options here. Either are just the unet attn processor layers
|
557 |
+
# or there are the unet and text encoder atten layers
|
558 |
+
unet_lora_layers_to_save = None
|
559 |
+
text_encoder_one_lora_layers_to_save = None
|
560 |
+
text_encoder_two_lora_layers_to_save = None
|
561 |
+
|
562 |
+
for model in models:
|
563 |
+
if isinstance(model, type(accelerator.unwrap_model(unet))):
|
564 |
+
unet_lora_layers_to_save = convert_state_dict_to_diffusers(get_peft_model_state_dict(model))
|
565 |
+
elif isinstance(model, type(accelerator.unwrap_model(text_encoder_one))):
|
566 |
+
if args.train_text_encoder:
|
567 |
+
text_encoder_one_lora_layers_to_save = convert_state_dict_to_diffusers(
|
568 |
+
get_peft_model_state_dict(model)
|
569 |
+
)
|
570 |
+
elif isinstance(model, type(accelerator.unwrap_model(text_encoder_two))):
|
571 |
+
if args.train_text_encoder:
|
572 |
+
text_encoder_two_lora_layers_to_save = convert_state_dict_to_diffusers(
|
573 |
+
get_peft_model_state_dict(model)
|
574 |
+
)
|
575 |
+
else:
|
576 |
+
raise ValueError(f"unexpected save model: {model.__class__}")
|
577 |
+
|
578 |
+
# make sure to pop weight so that corresponding model is not saved again
|
579 |
+
weights.pop()
|
580 |
+
|
581 |
+
StableDiffusionXLPipeline.save_lora_weights(
|
582 |
+
output_dir,
|
583 |
+
unet_lora_layers=unet_lora_layers_to_save,
|
584 |
+
text_encoder_lora_layers=text_encoder_one_lora_layers_to_save,
|
585 |
+
text_encoder_2_lora_layers=text_encoder_two_lora_layers_to_save,
|
586 |
+
)
|
587 |
+
if args.train_text_encoder_ti:
|
588 |
+
embedding_handler.save_embeddings(f"{output_dir}/learned_embeds.safetensors")
|
589 |
+
|
590 |
+
def load_model_hook(models, input_dir):
|
591 |
+
unet_ = None
|
592 |
+
text_encoder_one_ = None
|
593 |
+
text_encoder_two_ = None
|
594 |
+
|
595 |
+
while len(models) > 0:
|
596 |
+
model = models.pop()
|
597 |
+
|
598 |
+
if isinstance(model, type(accelerator.unwrap_model(unet))):
|
599 |
+
unet_ = model
|
600 |
+
elif isinstance(model, type(accelerator.unwrap_model(text_encoder_one))):
|
601 |
+
text_encoder_one_ = model
|
602 |
+
elif isinstance(model, type(accelerator.unwrap_model(text_encoder_two))):
|
603 |
+
text_encoder_two_ = model
|
604 |
+
else:
|
605 |
+
raise ValueError(f"unexpected save model: {model.__class__}")
|
606 |
+
|
607 |
+
lora_state_dict, network_alphas = LoraLoaderMixin.lora_state_dict(input_dir)
|
608 |
+
LoraLoaderMixin.load_lora_into_unet(lora_state_dict, network_alphas=network_alphas, unet=unet_)
|
609 |
+
|
610 |
+
text_encoder_state_dict = {k: v for k, v in lora_state_dict.items() if "text_encoder." in k}
|
611 |
+
LoraLoaderMixin.load_lora_into_text_encoder(
|
612 |
+
text_encoder_state_dict, network_alphas=network_alphas, text_encoder=text_encoder_one_
|
613 |
+
)
|
614 |
+
|
615 |
+
text_encoder_2_state_dict = {k: v for k, v in lora_state_dict.items() if "text_encoder_2." in k}
|
616 |
+
LoraLoaderMixin.load_lora_into_text_encoder(
|
617 |
+
text_encoder_2_state_dict, network_alphas=network_alphas, text_encoder=text_encoder_two_
|
618 |
+
)
|
619 |
+
|
620 |
+
accelerator.register_save_state_pre_hook(save_model_hook)
|
621 |
+
accelerator.register_load_state_pre_hook(load_model_hook)
|
622 |
+
|
623 |
+
# Enable TF32 for faster training on Ampere GPUs,
|
624 |
+
# cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
|
625 |
+
if args.allow_tf32:
|
626 |
+
torch.backends.cuda.matmul.allow_tf32 = True
|
627 |
+
|
628 |
+
if args.scale_lr:
|
629 |
+
args.learning_rate = (
|
630 |
+
args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
|
631 |
+
)
|
632 |
+
|
633 |
+
unet_lora_parameters = list(filter(lambda p: p.requires_grad, unet.parameters()))
|
634 |
+
|
635 |
+
if args.train_text_encoder:
|
636 |
+
text_lora_parameters_one = list(filter(lambda p: p.requires_grad, text_encoder_one.parameters()))
|
637 |
+
text_lora_parameters_two = list(filter(lambda p: p.requires_grad, text_encoder_two.parameters()))
|
638 |
+
|
639 |
+
# If neither --train_text_encoder nor --train_text_encoder_ti, text_encoders remain frozen during training
|
640 |
+
freeze_text_encoder = not (args.train_text_encoder or args.train_text_encoder_ti)
|
641 |
+
|
642 |
+
# Optimization parameters
|
643 |
+
unet_lora_parameters_with_lr = {"params": unet_lora_parameters, "lr": args.learning_rate}
|
644 |
+
|
645 |
+
if not freeze_text_encoder:
|
646 |
+
# different learning rate for text encoder and unet
|
647 |
+
text_lora_parameters_one_with_lr = {
|
648 |
+
"params": text_lora_parameters_one,
|
649 |
+
"weight_decay": args.adam_weight_decay_text_encoder
|
650 |
+
if args.adam_weight_decay_text_encoder
|
651 |
+
else args.adam_weight_decay,
|
652 |
+
"lr": args.text_encoder_lr if args.text_encoder_lr else args.learning_rate,
|
653 |
+
}
|
654 |
+
text_lora_parameters_two_with_lr = {
|
655 |
+
"params": text_lora_parameters_two,
|
656 |
+
"weight_decay": args.adam_weight_decay_text_encoder
|
657 |
+
if args.adam_weight_decay_text_encoder
|
658 |
+
else args.adam_weight_decay,
|
659 |
+
"lr": args.text_encoder_lr if args.text_encoder_lr else args.learning_rate,
|
660 |
+
}
|
661 |
+
params_to_optimize = [
|
662 |
+
unet_lora_parameters_with_lr,
|
663 |
+
text_lora_parameters_one_with_lr,
|
664 |
+
text_lora_parameters_two_with_lr,
|
665 |
+
]
|
666 |
+
else:
|
667 |
+
params_to_optimize = [unet_lora_parameters_with_lr]
|
668 |
+
|
669 |
+
# Optimizer creation
|
670 |
+
if args.use_8bit_adam:
|
671 |
+
try:
|
672 |
+
import bitsandbytes as bnb
|
673 |
+
except ImportError:
|
674 |
+
raise ImportError(
|
675 |
+
"To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
|
676 |
+
)
|
677 |
+
optimizer_class = bnb.optim.AdamW8bit
|
678 |
+
else:
|
679 |
+
optimizer_class = torch.optim.AdamW
|
680 |
+
|
681 |
+
optimizer = optimizer_class(
|
682 |
+
params_to_optimize,
|
683 |
+
betas=(args.adam_beta1, args.adam_beta2),
|
684 |
+
weight_decay=args.adam_weight_decay,
|
685 |
+
eps=args.adam_epsilon,
|
686 |
+
)
|
687 |
+
|
688 |
+
# Dataset and DataLoaders creation:
|
689 |
+
train_dataset = TrainDataset(args)
|
690 |
+
train_dataloader = torch.utils.data.DataLoader(
|
691 |
+
train_dataset,
|
692 |
+
batch_size=args.train_batch_size,
|
693 |
+
shuffle=True,
|
694 |
+
collate_fn=lambda examples: collate_fn(examples, args),
|
695 |
+
num_workers=args.dataloader_num_workers,
|
696 |
+
)
|
697 |
+
|
698 |
+
# Computes additional embeddings/ids required by the SDXL UNet.
|
699 |
+
def compute_time_ids():
|
700 |
+
# Adapted from pipeline.StableDiffusionXLPipeline._get_add_time_ids
|
701 |
+
original_size = (args.resolution, args.resolution)
|
702 |
+
target_size = (args.resolution, args.resolution)
|
703 |
+
crops_coords_top_left = (args.crops_coords_top_left_h, args.crops_coords_top_left_w)
|
704 |
+
add_time_ids = list(original_size + crops_coords_top_left + target_size)
|
705 |
+
add_time_ids = torch.tensor([add_time_ids])
|
706 |
+
add_time_ids = add_time_ids.to(accelerator.device, dtype=weight_dtype)
|
707 |
+
return add_time_ids
|
708 |
+
|
709 |
+
tokenizers = [tokenizer_one, tokenizer_two]
|
710 |
+
text_encoders = [text_encoder_one, text_encoder_two]
|
711 |
+
|
712 |
+
def compute_text_embeddings(prompt, text_encoders, tokenizers):
|
713 |
+
with torch.no_grad():
|
714 |
+
prompt_embeds, pooled_prompt_embeds = encode_prompt(text_encoders, tokenizers, prompt)
|
715 |
+
prompt_embeds = prompt_embeds.to(accelerator.device)
|
716 |
+
pooled_prompt_embeds = pooled_prompt_embeds.to(accelerator.device)
|
717 |
+
return prompt_embeds, pooled_prompt_embeds
|
718 |
+
|
719 |
+
# Handle instance prompt.
|
720 |
+
instance_time_ids = compute_time_ids()
|
721 |
+
add_time_ids = instance_time_ids
|
722 |
+
if args.with_prior_preservation:
|
723 |
+
class_time_ids = compute_time_ids()
|
724 |
+
add_time_ids = torch.cat([add_time_ids, class_time_ids], dim=0)
|
725 |
+
|
726 |
+
# Scheduler and math around the number of training steps.
|
727 |
+
    overrode_max_train_steps = False
    num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
    if args.max_train_steps is None:
        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
        overrode_max_train_steps = True

    lr_scheduler = get_scheduler(
        args.lr_scheduler,
        optimizer=optimizer,
        num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
        num_training_steps=args.max_train_steps * accelerator.num_processes,
        num_cycles=args.lr_num_cycles,
        power=args.lr_power,
    )

    # Prepare everything with our `accelerator`.
    if not freeze_text_encoder:
        unet, text_encoder_one, text_encoder_two, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
            unet, text_encoder_one, text_encoder_two, optimizer, train_dataloader, lr_scheduler
        )
    else:
        unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
            unet, optimizer, train_dataloader, lr_scheduler
        )

    # We need to recalculate our total training steps as the size of the training dataloader may have changed.
    num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
    if overrode_max_train_steps:
        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
    # Afterwards we recalculate our number of training epochs
    args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)

    # We need to initialize the trackers we use, and also store our configuration.
    # The trackers initialize automatically on the main process.
    if accelerator.is_main_process:
        accelerator.init_trackers("fine-tune sdxl", config=vars(args))

    # Train!
    total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps

    logger.info("***** Running training *****")
    logger.info(f"  Num examples = {len(train_dataset)}")
    logger.info(f"  Num batches each epoch = {len(train_dataloader)}")
    logger.info(f"  Num Epochs = {args.num_train_epochs}")
    logger.info(f"  Instantaneous batch size per device = {args.train_batch_size}")
    logger.info(f"  Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
    logger.info(f"  Gradient Accumulation steps = {args.gradient_accumulation_steps}")
    logger.info(f"  Total optimization steps = {args.max_train_steps}")
    global_step = 0
    first_epoch = 0

    # Potentially load in the weights and states from a previous save
    if args.resume_from_checkpoint:
        if args.resume_from_checkpoint != "latest":
            path = os.path.basename(args.resume_from_checkpoint)
        else:
            # Get the most recent checkpoint
            dirs = os.listdir(args.output_dir)
            dirs = [d for d in dirs if d.startswith("checkpoint")]
            dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
            path = dirs[-1] if len(dirs) > 0 else None

        if path is None:
            accelerator.print(
                f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
            )
            args.resume_from_checkpoint = None
            initial_global_step = 0
        else:
            accelerator.print(f"Resuming from checkpoint {path}")
            accelerator.load_state(os.path.join(args.output_dir, path))
            global_step = int(path.split("-")[1])

            initial_global_step = global_step
            first_epoch = global_step // num_update_steps_per_epoch

    else:
        initial_global_step = 0

    progress_bar = tqdm(
        range(0, args.max_train_steps),
        initial=initial_global_step,
        desc="Steps",
        # Only show the progress bar once on each machine.
        disable=not accelerator.is_local_main_process,
    )

    for epoch in range(first_epoch, args.num_train_epochs):
        # if performing any kind of optimization of text_encoder params
        if args.train_text_encoder or args.train_text_encoder_ti:
            text_encoder_one.train()
            text_encoder_two.train()
            # set top parameter requires_grad = True so that gradient checkpointing works
            if args.train_text_encoder:
                text_encoder_one.text_model.embeddings.requires_grad_(True)
                text_encoder_two.text_model.embeddings.requires_grad_(True)

        unet.train()
        for step, batch in enumerate(train_dataloader):
            with accelerator.accumulate(unet):
                prompts = batch["prompts"]
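                # For textual-inversion training with the DCO loss, also embed the "base"
                # prompt (the caption without the learned tokens); it conditions the
                # frozen reference model further below.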
                if args.train_text_encoder_ti and (args.dcoloss_beta > 0.0):
                    base_prompts = batch["base_prompts"]
                    base_prompt_embeds, base_add_embeds = compute_text_embeddings(
                        base_prompts, text_encoders, tokenizers
                    )
                # encode batch prompts when custom prompts are provided for each image -
                # if train_dataset.custom_instance_prompts:
                if freeze_text_encoder:
                    prompt_embeds, unet_add_text_embeds = compute_text_embeddings(
                        prompts, text_encoders, tokenizers
                    )
                else:
                    tokens_one = tokenize_prompt(tokenizer_one, prompts)
                    tokens_two = tokenize_prompt(tokenizer_two, prompts)

                pixel_values = batch["pixel_values"].to(dtype=vae.dtype)
                model_input = vae.encode(pixel_values).latent_dist.sample()

                model_input = model_input * vae_scaling_factor
                if args.pretrained_vae_model_name_or_path is None:
                    model_input = model_input.to(weight_dtype)

                # Sample noise that we'll add to the latents
                noise = torch.randn_like(model_input)
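                # args.offset_noise adds a per-channel constant (broadcast over H and W)
                # on top of the Gaussian noise; 0.0 recovers plain noise.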
                noise = noise + args.offset_noise * torch.randn(
                    model_input.shape[0], model_input.shape[1], 1, 1, device=model_input.device
                )
                bsz = model_input.shape[0]
                # Sample a random timestep for each image
                timesteps = torch.randint(
                    0, noise_scheduler.config.num_train_timesteps, (bsz,), device=model_input.device
                )
                timesteps = timesteps.long()

                # Add noise to the model input according to the noise magnitude at each timestep
                noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps)

                # Calculate the elements to repeat depending on the use of prior-preservation and custom captions.
                elems_to_repeat_text_embeds = 1
                elems_to_repeat_time_ids = bsz // 2 if args.with_prior_preservation else bsz

                # Predict the noise residual
                if freeze_text_encoder:
                    unet_added_conditions = {
                        "time_ids": add_time_ids.repeat(elems_to_repeat_time_ids, 1),
                        "text_embeds": unet_add_text_embeds.repeat(elems_to_repeat_text_embeds, 1),
                    }
                    prompt_embeds_input = prompt_embeds.repeat(elems_to_repeat_text_embeds, 1, 1)
                    model_pred = unet(
                        noisy_model_input,
                        timesteps,
                        prompt_embeds_input,
                        added_cond_kwargs=unet_added_conditions,
                    ).sample
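                    # DCO reference prediction: run the same UNet with the LoRA adapters
                    # scaled to 0.0 (i.e. the frozen pretrained model) on the same noisy input.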
                    if args.dcoloss_beta > 0.0:
                        with torch.no_grad():
                            cross_attention_kwargs = {"scale": 0.0}
                            refer_pred = unet(
                                noisy_model_input,
                                timesteps,
                                prompt_embeds_input,
                                added_cond_kwargs=unet_added_conditions,
                                cross_attention_kwargs=cross_attention_kwargs,
                            ).sample
                else:
                    unet_added_conditions = {"time_ids": add_time_ids.repeat(elems_to_repeat_time_ids, 1)}
                    prompt_embeds, pooled_prompt_embeds = encode_prompt(
                        text_encoders=[text_encoder_one, text_encoder_two],
                        tokenizers=None,
                        prompt=None,
                        text_input_ids_list=[tokens_one, tokens_two],
                    )
                    unet_added_conditions.update(
                        {"text_embeds": pooled_prompt_embeds.repeat(elems_to_repeat_text_embeds, 1)}
                    )
                    prompt_embeds_input = prompt_embeds.repeat(elems_to_repeat_text_embeds, 1, 1)
                    model_pred = unet(
                        noisy_model_input, timesteps, prompt_embeds_input, added_cond_kwargs=unet_added_conditions
                    ).sample
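                    # When the text encoders are trained (e.g. textual inversion), the
                    # reference model is conditioned on the base prompt without the
                    # learned tokens, again with the LoRA adapters disabled (scale 0.0).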
                    if args.dcoloss_beta > 0.0:
                        base_prompts = batch["base_prompts"]
                        with torch.no_grad():
                            base_prompt_embeds, base_add_embeds = compute_text_embeddings(
                                base_prompts, text_encoders, tokenizers
                            )
                            cross_attention_kwargs = {"scale": 0.0}
                            base_added_conditions = {"time_ids": add_time_ids, "text_embeds": base_add_embeds}
                            refer_pred = unet(
                                noisy_model_input,
                                timesteps,
                                base_prompt_embeds,
                                added_cond_kwargs=base_added_conditions,
                                cross_attention_kwargs=cross_attention_kwargs,
                            ).sample

                # Get the target for loss depending on the prediction type
                if noise_scheduler.config.prediction_type == "epsilon":
                    target = noise
                elif noise_scheduler.config.prediction_type == "v_prediction":
                    target = noise_scheduler.get_velocity(model_input, noise, timesteps)
                else:
                    raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")

                if args.with_prior_preservation:
                    # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
                    model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
                    target, target_prior = torch.chunk(target, 2, dim=0)
                    prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")

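                # Direct Consistency Optimization loss: with beta > 0, minimize
                # -log(sigmoid(-beta * (loss_model - loss_refer))), which fits the data
                # while penalizing deviation from the frozen reference prediction;
                # beta = 0 falls back to the plain diffusion loss.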
                if args.snr_gamma is None:
                    if args.dcoloss_beta > 0.0:
                        loss_model = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
                        loss_refer = F.mse_loss(refer_pred.float(), target.float(), reduction="mean")
                        diff = loss_model - loss_refer
                        inside_term = -1 * args.dcoloss_beta * diff
                        loss = -1 * torch.nn.LogSigmoid()(inside_term)
                    else:
                        loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
                else:
                    # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556.
                    # Since we predict the noise instead of x_0, the original formulation is slightly changed.
                    # This is discussed in Section 4.2 of the same paper.
                    if args.with_prior_preservation:
                        # if we're using prior preservation, we calc snr for instance loss only -
                        # and hence only need timesteps corresponding to instance images
                        snr_timesteps, _ = torch.chunk(timesteps, 2, dim=0)
                    else:
                        snr_timesteps = timesteps

                    snr = compute_snr(noise_scheduler, snr_timesteps)
                    base_weight = (
                        torch.stack([snr, args.snr_gamma * torch.ones_like(snr_timesteps)], dim=1).min(dim=1)[0] / snr
                    )

                    if noise_scheduler.config.prediction_type == "v_prediction":
                        # Velocity objective needs to be floored to an SNR weight of one.
                        mse_loss_weights = base_weight + 1
                    else:
                        # Epsilon and sample both use the same loss weights.
                        mse_loss_weights = base_weight

                    if args.dcoloss_beta > 0.0:
                        loss_model = F.mse_loss(model_pred.float(), target.float(), reduction="none")
                        loss_model = loss_model.mean(dim=list(range(1, len(loss_model.shape)))) * mse_loss_weights
                        loss_model = loss_model.mean()
                        loss_refer = F.mse_loss(refer_pred.float(), target.float(), reduction="none")
                        loss_refer = loss_refer.mean(dim=list(range(1, len(loss_refer.shape)))) * mse_loss_weights
                        loss_refer = loss_refer.mean()
                        diff = loss_model - loss_refer
                        inside_term = -1 * args.dcoloss_beta * diff
                        loss = -1 * torch.nn.LogSigmoid()(inside_term)
                    else:
                        loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
                        loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
                        loss = loss.mean()

                if args.with_prior_preservation:
                    # Add the prior loss to the instance loss.
                    loss = loss + args.prior_loss_weight * prior_loss

                accelerator.backward(loss)
                if accelerator.sync_gradients:
                    params_to_clip = (
                        itertools.chain(unet_lora_parameters, text_lora_parameters_one, text_lora_parameters_two)
                        if (args.train_text_encoder or args.train_text_encoder_ti)
                        else unet_lora_parameters
                    )
                    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
                optimizer.step()
                lr_scheduler.step()
                optimizer.zero_grad()

                # every step, we reset the embeddings to the original embeddings.
                if args.train_text_encoder_ti:
                    embedding_handler.retract_embeddings()

            # Checks if the accelerator has performed an optimization step behind the scenes
            if accelerator.sync_gradients:
                progress_bar.update(1)
                global_step += 1

                if accelerator.is_main_process:
                    if global_step % args.checkpointing_steps == 0:
                        # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
                        if args.checkpoints_total_limit is not None:
                            checkpoints = os.listdir(args.output_dir)
                            checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
                            checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))

                            # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
                            if len(checkpoints) >= args.checkpoints_total_limit:
                                num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
                                removing_checkpoints = checkpoints[0:num_to_remove]

                                logger.info(
                                    f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
                                )
                                logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")

                                for removing_checkpoint in removing_checkpoints:
                                    removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
                                    shutil.rmtree(removing_checkpoint)

                        save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
                        accelerator.save_state(save_path)
                        logger.info(f"Saved state to {save_path}")

            logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
            progress_bar.set_postfix(**logs)
            accelerator.log(logs, step=global_step)

            if global_step >= args.max_train_steps:
                break

        if accelerator.is_main_process:
            if args.validation_prompt is not None and epoch % args.validation_epochs == 0:
                logger.info(
                    f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
                    f" {args.validation_prompt}."
                )
                # create pipeline
                if freeze_text_encoder:
                    text_encoder_one = text_encoder_cls_one.from_pretrained(
                        args.pretrained_model_name_or_path,
                        subfolder="text_encoder",
                        revision=args.revision,
                        variant=args.variant,
                    )
                    text_encoder_two = text_encoder_cls_two.from_pretrained(
                        args.pretrained_model_name_or_path,
                        subfolder="text_encoder_2",
                        revision=args.revision,
                        variant=args.variant,
                    )
                pipeline = StableDiffusionXLPipeline.from_pretrained(
                    args.pretrained_model_name_or_path,
                    vae=vae,
                    text_encoder=accelerator.unwrap_model(text_encoder_one),
                    text_encoder_2=accelerator.unwrap_model(text_encoder_two),
                    unet=accelerator.unwrap_model(unet),
                    revision=args.revision,
                    variant=args.variant,
                    torch_dtype=weight_dtype,
                )

                # We train on the simplified learning objective. If we were previously predicting a variance, we need the scheduler to ignore it
                scheduler_args = {}

                if "variance_type" in pipeline.scheduler.config:
                    variance_type = pipeline.scheduler.config.variance_type

                    if variance_type in ["learned", "learned_range"]:
                        variance_type = "fixed_small"

                    scheduler_args["variance_type"] = variance_type

                pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
                    pipeline.scheduler.config, **scheduler_args
                )

                pipeline = pipeline.to(accelerator.device)
                pipeline.set_progress_bar_config(disable=True)

                # run inference
                generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None
                pipeline_args = {"prompt": args.validation_prompt}

                with torch.cuda.amp.autocast():
                    images = [
                        pipeline(**pipeline_args, generator=generator).images[0]
                        for _ in range(args.num_validation_images)
                    ]

                for tracker in accelerator.trackers:
                    if tracker.name == "tensorboard":
                        np_images = np.stack([np.asarray(img) for img in images])
                        tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC")
                    if tracker.name == "wandb":
                        tracker.log(
                            {
                                "validation": [
                                    wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
                                    for i, image in enumerate(images)
                                ]
                            }
                        )

                del pipeline
                torch.cuda.empty_cache()

    # Save the lora layers
    accelerator.wait_for_everyone()
    if accelerator.is_main_process:
        unet = accelerator.unwrap_model(unet)
        unet = unet.to(torch.float32)
        unet_lora_layers = convert_state_dict_to_diffusers(get_peft_model_state_dict(unet))

        if args.train_text_encoder:
            text_encoder_one = accelerator.unwrap_model(text_encoder_one)
            text_encoder_lora_layers = convert_state_dict_to_diffusers(
                get_peft_model_state_dict(text_encoder_one.to(torch.float32))
            )
            text_encoder_two = accelerator.unwrap_model(text_encoder_two)
            text_encoder_2_lora_layers = convert_state_dict_to_diffusers(
                get_peft_model_state_dict(text_encoder_two.to(torch.float32))
            )
        else:
            text_encoder_lora_layers = None
            text_encoder_2_lora_layers = None

        StableDiffusionXLPipeline.save_lora_weights(
            save_directory=args.output_dir,
            unet_lora_layers=unet_lora_layers,
            text_encoder_lora_layers=text_encoder_lora_layers,
            text_encoder_2_lora_layers=text_encoder_2_lora_layers,
        )
        images = []
        if args.validation_prompt and args.num_validation_images > 0:
            # Final inference
            # Load previous pipeline
            vae = AutoencoderKL.from_pretrained(
                vae_path,
                subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None,
                revision=args.revision,
                variant=args.variant,
                torch_dtype=weight_dtype,
            )
            pipeline = StableDiffusionXLPipeline.from_pretrained(
                args.pretrained_model_name_or_path,
                vae=vae,
                revision=args.revision,
                variant=args.variant,
                torch_dtype=weight_dtype,
            )

            # We train on the simplified learning objective. If we were previously predicting a variance, we need the scheduler to ignore it
            scheduler_args = {}

            if "variance_type" in pipeline.scheduler.config:
                variance_type = pipeline.scheduler.config.variance_type

                if variance_type in ["learned", "learned_range"]:
                    variance_type = "fixed_small"

                scheduler_args["variance_type"] = variance_type

            pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, **scheduler_args)

            # load attention processors
            pipeline.load_lora_weights(args.output_dir)

            # run inference
            pipeline = pipeline.to(accelerator.device)
            generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None
            images = [
                pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0]
                for _ in range(args.num_validation_images)
            ]

            for tracker in accelerator.trackers:
                if tracker.name == "tensorboard":
                    np_images = np.stack([np.asarray(img) for img in images])
                    tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC")
                if tracker.name == "wandb":
                    tracker.log(
                        {
                            "test": [
                                wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
                                for i, image in enumerate(images)
                            ]
                        }
                    )

        if args.train_text_encoder_ti:
            embedding_handler.save_embeddings(
                f"{args.output_dir}/learned_embeds.safetensors",
            )

    accelerator.end_training()


if __name__ == "__main__":
    args = parse_args()
    main(args)
requirements.txt
CHANGED
@@ -1,6 +1,7 @@
accelerate>=0.16.0
torchvision
transformers>=4.25.1
ftfy
tensorboard
Jinja2
peft==0.7.0
reward_guidance.py
ADDED
@@ -0,0 +1,576 @@
import argparse
import inspect
import json
import os
from typing import Any, Callable, Dict, List, Optional, Tuple, Union

import torch
from diffusers import DiffusionPipeline, StableDiffusionXLPipeline, UNet2DConditionModel
from diffusers.image_processor import PipelineImageInput
from diffusers.models.embeddings import ImageProjection
from diffusers.pipelines.stable_diffusion_xl.pipeline_output import StableDiffusionXLPipelineOutput
from diffusers.utils import (
    deprecate,
    is_torch_xla_available,
)
from safetensors.torch import load_file

if is_torch_xla_available():
    import torch_xla.core.xla_model as xm

    XLA_AVAILABLE = True
else:
    XLA_AVAILABLE = False

# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
    """
    Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and
    Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4
    """
    std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
    std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
    # rescale the results from guidance (fixes overexposure)
    noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
    # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
    noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
    return noise_cfg


# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps
def retrieve_timesteps(
    scheduler,
    num_inference_steps: Optional[int] = None,
    device: Optional[Union[str, torch.device]] = None,
    timesteps: Optional[List[int]] = None,
    **kwargs,
):
    """
    Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles
    custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`.

    Args:
        scheduler (`SchedulerMixin`):
            The scheduler to get timesteps from.
        num_inference_steps (`int`):
            The number of diffusion steps used when generating samples with a pre-trained model. If used,
            `timesteps` must be `None`.
        device (`str` or `torch.device`, *optional*):
            The device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
        timesteps (`List[int]`, *optional*):
            Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
            timestep spacing strategy of the scheduler is used. If `timesteps` is passed, `num_inference_steps`
            must be `None`.

    Returns:
        `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the
        second element is the number of inference steps.
    """
    if timesteps is not None:
        accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys())
        if not accepts_timesteps:
            raise ValueError(
                f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom"
                f" timestep schedules. Please check whether you are using the correct scheduler."
            )
        scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs)
        timesteps = scheduler.timesteps
        num_inference_steps = len(timesteps)
    else:
        scheduler.set_timesteps(num_inference_steps, device=device, **kwargs)
        timesteps = scheduler.timesteps
    return timesteps, num_inference_steps

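# RGPipe extends the SDXL pipeline with a `my_gen` sampler that applies reward
# guidance: each denoising step combines classifier-free guidance from the
# pretrained model with an extra guidance term toward the DCO fine-tuned (LoRA)
# model, which is conditioned on its own prompt (`prompt_ti`) containing the
# learned tokens.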
class RGPipe(StableDiffusionXLPipeline):
    @torch.no_grad()
    def my_gen(
        self,
        prompt: Union[str, List[str]] = None,
        prompt_2: Optional[Union[str, List[str]]] = None,
        prompt_ti: Union[str, List[str]] = None,
        prompt_2_ti: Optional[Union[str, List[str]]] = None,
        height: Optional[int] = None,
        width: Optional[int] = None,
        num_inference_steps: int = 50,
        timesteps: List[int] = None,
        denoising_end: Optional[float] = None,
        guidance_scale: float = 5.0,
        guidance_scale_lora: float = 5.0,
        negative_prompt: Optional[Union[str, List[str]]] = None,
        negative_prompt_2: Optional[Union[str, List[str]]] = None,
        num_images_per_prompt: Optional[int] = 1,
        eta: float = 0.0,
        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
        latents: Optional[torch.FloatTensor] = None,
        prompt_embeds: Optional[torch.FloatTensor] = None,
        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
        pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
        negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
        ip_adapter_image: Optional[PipelineImageInput] = None,
        output_type: Optional[str] = "pil",
        return_dict: bool = True,
        cross_attention_kwargs: Optional[Dict[str, Any]] = None,
        guidance_rescale: float = 0.0,
        original_size: Optional[Tuple[int, int]] = None,
        crops_coords_top_left: Tuple[int, int] = (0, 0),
        target_size: Optional[Tuple[int, int]] = None,
        negative_original_size: Optional[Tuple[int, int]] = None,
        negative_crops_coords_top_left: Tuple[int, int] = (0, 0),
        negative_target_size: Optional[Tuple[int, int]] = None,
        clip_skip: Optional[int] = None,
        callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
        callback_on_step_end_tensor_inputs: List[str] = ["latents"],
        **kwargs,
    ):
        r"""
        Function invoked when calling the pipeline for generation.

        Args:
            prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
                instead.
            prompt_2 (`str` or `List[str]`, *optional*):
                The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
                used in both text-encoders
            prompt_ti (`str` or `List[str]`, *optional*):
                The prompt or prompts, including any learned textual-inversion tokens, used to condition the
                fine-tuned (LoRA) model for the reward-guidance term.
            prompt_2_ti (`str` or `List[str]`, *optional*):
                The `prompt_ti` counterpart to be sent to `tokenizer_2` and `text_encoder_2`.
            height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
                The height in pixels of the generated image. This is set to 1024 by default for the best results.
                Anything below 512 pixels won't work well for
                [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
                and checkpoints that are not specifically fine-tuned on low resolutions.
            width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
                The width in pixels of the generated image. This is set to 1024 by default for the best results.
                Anything below 512 pixels won't work well for
                [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
                and checkpoints that are not specifically fine-tuned on low resolutions.
            num_inference_steps (`int`, *optional*, defaults to 50):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            timesteps (`List[int]`, *optional*):
                Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
                in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
                passed will be used. Must be in descending order.
            denoising_end (`float`, *optional*):
                When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
                completed before it is intentionally prematurely terminated. As a result, the returned sample will
                still retain a substantial amount of noise as determined by the discrete timesteps selected by the
                scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
                "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
                Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
            guidance_scale (`float`, *optional*, defaults to 5.0):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
                usually at the expense of lower image quality.
            guidance_scale_lora (`float`, *optional*, defaults to 5.0):
                Scale of the additional reward-guidance term that moves the prediction toward the fine-tuned (LoRA)
                model conditioned on `prompt_ti`.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. If not defined, one has to pass
                `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
                less than `1`).
            negative_prompt_2 (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
                `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            eta (`float`, *optional*, defaults to 0.0):
                Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
                [`schedulers.DDIMScheduler`], will be ignored for others.
            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
                One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
                to make generation deterministic.
            latents (`torch.FloatTensor`, *optional*):
                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
                tensor will be generated by sampling using the supplied random `generator`.
            prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
                provided, text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
                argument.
            pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
                If not provided, pooled text embeddings will be generated from `prompt` input argument.
            negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
                input argument.
            ip_adapter_image: (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between
                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead
                of a plain tuple.
            cross_attention_kwargs (`dict`, *optional*):
                A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
                `self.processor` in
                [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
            guidance_rescale (`float`, *optional*, defaults to 0.0):
                Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
                Flawed](https://arxiv.org/pdf/2305.08891.pdf) `guidance_scale` is defined as `φ` in equation 16. of
                [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
                Guidance rescale factor should fix overexposure when using zero terminal SNR.
            original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
                If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
                `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
                explained in section 2.2 of
                [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
            crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
                `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
                `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
                `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
                [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
            target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
                For most cases, `target_size` should be set to the desired height and width of the generated image. If
                not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
                section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
            negative_original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
                To negatively condition the generation process based on a specific image resolution. Part of SDXL's
                micro-conditioning as explained in section 2.2 of
                [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
                information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
            negative_crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
                To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
                micro-conditioning as explained in section 2.2 of
                [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
                information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
            negative_target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
                To negatively condition the generation process based on a target image resolution. It should be the
                same as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section
                2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
                information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
            callback_on_step_end (`Callable`, *optional*):
                A function that is called at the end of each denoising step during inference. The function is called
                with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
                callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
                `callback_on_step_end_tensor_inputs`.
            callback_on_step_end_tensor_inputs (`List`, *optional*):
                The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
                will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
                `._callback_tensor_inputs` attribute of your pipeline class.

        Examples:

        Returns:
            [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] or `tuple`:
            [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] if `return_dict` is True, otherwise a
            `tuple`. When returning a tuple, the first element is a list with the generated images.
        """

        callback = kwargs.pop("callback", None)
        callback_steps = kwargs.pop("callback_steps", None)

        if callback is not None:
            deprecate(
                "callback",
                "1.0.0",
                "Passing `callback` as an input argument to `__call__` is deprecated, consider use `callback_on_step_end`",
            )
        if callback_steps is not None:
            deprecate(
                "callback_steps",
                "1.0.0",
                "Passing `callback_steps` as an input argument to `__call__` is deprecated, consider use `callback_on_step_end`",
            )

        # 0. Default height and width to unet
        height = height or self.default_sample_size * self.vae_scale_factor
        width = width or self.default_sample_size * self.vae_scale_factor

        original_size = original_size or (height, width)
        target_size = target_size or (height, width)

        # 1. Check inputs. Raise error if not correct
        self.check_inputs(
            prompt,
            prompt_2,
            height,
            width,
            callback_steps,
            negative_prompt,
            negative_prompt_2,
            prompt_embeds,
            negative_prompt_embeds,
            pooled_prompt_embeds,
            negative_pooled_prompt_embeds,
            callback_on_step_end_tensor_inputs,
        )

        self._guidance_scale = guidance_scale
        self._guidance_scale_lora = guidance_scale_lora
        self._guidance_rescale = guidance_rescale
        self._clip_skip = clip_skip
        self._cross_attention_kwargs = cross_attention_kwargs
        self._denoising_end = denoising_end

        # 2. Define call parameters
        if prompt is not None and isinstance(prompt, str):
            batch_size = 1
        elif prompt is not None and isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        device = self._execution_device

        # 3. Encode input prompt
        lora_scale = (
            self.cross_attention_kwargs.get("scale", None) if self.cross_attention_kwargs is not None else None
        )

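        # The prompts are encoded twice: once with the learned tokens (`prompt_ti`,
        # conditioning the fine-tuned LoRA model) and once with the plain prompt
        # (conditioning the pretrained model for classifier-free guidance).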
        (
            prompt_embeds_ti,
            negative_prompt_embeds_ti,
            pooled_prompt_embeds_ti,
            negative_pooled_prompt_embeds_ti,
        ) = self.encode_prompt(
            prompt=prompt_ti,
            prompt_2=prompt_2_ti,
            device=device,
            num_images_per_prompt=num_images_per_prompt,
            do_classifier_free_guidance=self.do_classifier_free_guidance,
            negative_prompt=negative_prompt,
            negative_prompt_2=negative_prompt_2,
            prompt_embeds=prompt_embeds,
            negative_prompt_embeds=negative_prompt_embeds,
            pooled_prompt_embeds=pooled_prompt_embeds,
            negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
            lora_scale=lora_scale,
            clip_skip=self.clip_skip,
        )

        (
            prompt_embeds,
            negative_prompt_embeds,
            pooled_prompt_embeds,
            negative_pooled_prompt_embeds,
        ) = self.encode_prompt(
            prompt=prompt,
            prompt_2=prompt_2,
            device=device,
            num_images_per_prompt=num_images_per_prompt,
            do_classifier_free_guidance=self.do_classifier_free_guidance,
            negative_prompt=negative_prompt,
            negative_prompt_2=negative_prompt_2,
            prompt_embeds=prompt_embeds,
            negative_prompt_embeds=negative_prompt_embeds,
            pooled_prompt_embeds=pooled_prompt_embeds,
            negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
            lora_scale=lora_scale,
            clip_skip=self.clip_skip,
        )

        # 4. Prepare timesteps
        timesteps, num_inference_steps = retrieve_timesteps(self.scheduler, num_inference_steps, device, timesteps)

        # 5. Prepare latent variables
        num_channels_latents = self.unet.config.in_channels
        latents = self.prepare_latents(
            batch_size * num_images_per_prompt,
            num_channels_latents,
            height,
            width,
            prompt_embeds.dtype,
            device,
            generator,
            latents,
        )

        # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
        extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)

        # 7. Prepare added time ids & embeddings
        add_text_embeds = pooled_prompt_embeds
        add_text_embeds_ti = pooled_prompt_embeds_ti
        if self.text_encoder_2 is None:
            text_encoder_projection_dim = int(pooled_prompt_embeds.shape[-1])
        else:
            text_encoder_projection_dim = self.text_encoder_2.config.projection_dim

        add_time_ids = self._get_add_time_ids(
            original_size,
            crops_coords_top_left,
            target_size,
            dtype=prompt_embeds.dtype,
            text_encoder_projection_dim=text_encoder_projection_dim,
        )
        if negative_original_size is not None and negative_target_size is not None:
            negative_add_time_ids = self._get_add_time_ids(
                negative_original_size,
                negative_crops_coords_top_left,
                negative_target_size,
                dtype=prompt_embeds.dtype,
                text_encoder_projection_dim=text_encoder_projection_dim,
            )
        else:
            negative_add_time_ids = add_time_ids

        lora_prompt_embeds = prompt_embeds_ti
        lora_add_text_embeds = add_text_embeds_ti
        lora_add_time_ids = add_time_ids
        if self.do_classifier_free_guidance:
            prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
            add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
            add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0)

        lora_prompt_embeds = lora_prompt_embeds.to(device)
        lora_add_text_embeds = lora_add_text_embeds.to(device)
        lora_add_time_ids = lora_add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)

        prompt_embeds = prompt_embeds.to(device)
        add_text_embeds = add_text_embeds.to(device)
        add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)

        if ip_adapter_image is not None:
            output_hidden_state = False if isinstance(self.unet.encoder_hid_proj, ImageProjection) else True
            image_embeds, negative_image_embeds = self.encode_image(
                ip_adapter_image, device, num_images_per_prompt, output_hidden_state
            )
            if self.do_classifier_free_guidance:
                image_embeds = torch.cat([negative_image_embeds, image_embeds])
            image_embeds = image_embeds.to(device)

        # 8. Denoising loop
        num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)

        # 8.1 Apply denoising_end
        if (
            self.denoising_end is not None
            and isinstance(self.denoising_end, float)
            and self.denoising_end > 0
            and self.denoising_end < 1
        ):
            discrete_timestep_cutoff = int(
                round(
                    self.scheduler.config.num_train_timesteps
                    - (self.denoising_end * self.scheduler.config.num_train_timesteps)
                )
            )
            num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps)))
            timesteps = timesteps[:num_inference_steps]

        # 9. Optionally get Guidance Scale Embedding
        timestep_cond = None
        if self.unet.config.time_cond_proj_dim is not None:
            guidance_scale_tensor = torch.tensor(self.guidance_scale - 1).repeat(batch_size * num_images_per_prompt)
            timestep_cond = self.get_guidance_scale_embedding(
                guidance_scale_tensor, embedding_dim=self.unet.config.time_cond_proj_dim
            ).to(device=device, dtype=latents.dtype)

        self._num_timesteps = len(timesteps)
        with self.progress_bar(total=num_inference_steps) as progress_bar:
            for i, t in enumerate(timesteps):
                # expand the latents if we are doing classifier free guidance
                latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents
                latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)

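                # Reference vs. fine-tuned predictions: `noise_pred` below runs the UNet
                # with the LoRA adapters disabled ({"scale": 0.0}), i.e. the pretrained
                # model, while `noise_pred_lora_text` runs the live LoRA adapters. Only
                # the conditional half of the CFG batch is fed to the LoRA pass; both
                # halves of `latent_model_input` are identical copies of `latents`, so
                # index 0 is reshaped back to batch size 1. Note that the latent size is
                # hardcoded to 128x128 (i.e. 1024x1024 images).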
                lora_latent_model_input = latent_model_input[0].view(1, 4, 128, 128)

                # predict the noise residual
                added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
                lora_added_cond_kwargs = {"text_embeds": lora_add_text_embeds, "time_ids": lora_add_time_ids}
                if ip_adapter_image is not None:
                    added_cond_kwargs["image_embeds"] = image_embeds

                cross_attention_kwargs_pre = {"scale": 0.0}
                noise_pred = self.unet(
                    latent_model_input,
                    t,
                    encoder_hidden_states=prompt_embeds,
                    timestep_cond=timestep_cond,
                    cross_attention_kwargs=cross_attention_kwargs_pre,
                    added_cond_kwargs=added_cond_kwargs,
                    return_dict=False,
                )[0]

                noise_pred_lora_text = self.unet(
                    lora_latent_model_input,
                    t,
                    encoder_hidden_states=lora_prompt_embeds,
                    timestep_cond=timestep_cond,
                    cross_attention_kwargs=self.cross_attention_kwargs,
                    added_cond_kwargs=lora_added_cond_kwargs,
                    return_dict=False,
                )[0]

                # perform guidance
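                # Reward guidance: start from standard classifier-free guidance with the
                # pretrained model, then add a second guidance term that moves the sample
                # toward the fine-tuned model's prediction on the learned-token prompt:
                #   eps = eps_uncond + w * (eps_text - eps_uncond)
                #             + w_lora * (eps_lora_text - eps_text)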
                if self.do_classifier_free_guidance:
                    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
                    # noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond)
                    noise_pred = self.guidance_scale * (noise_pred_text - noise_pred_uncond) + noise_pred_uncond
                    noise_pred += guidance_scale_lora * (noise_pred_lora_text - noise_pred_text)

                if self.do_classifier_free_guidance and self.guidance_rescale > 0.0:
                    # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
                    noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=self.guidance_rescale)

                # compute the previous noisy sample x_t -> x_t-1
                latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]

                if callback_on_step_end is not None:
                    callback_kwargs = {}
                    for k in callback_on_step_end_tensor_inputs:
                        callback_kwargs[k] = locals()[k]
                    callback_outputs = callback_on_step_end(self, i, t, callback_kwargs)

                    latents = callback_outputs.pop("latents", latents)
                    prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds)
                    negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds)
                    add_text_embeds = callback_outputs.pop("add_text_embeds", add_text_embeds)
                    negative_pooled_prompt_embeds = callback_outputs.pop(
                        "negative_pooled_prompt_embeds", negative_pooled_prompt_embeds
                    )
                    add_time_ids = callback_outputs.pop("add_time_ids", add_time_ids)
                    negative_add_time_ids = callback_outputs.pop("negative_add_time_ids", negative_add_time_ids)

                # call the callback, if provided
                if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
                    progress_bar.update()
                    if callback is not None and i % callback_steps == 0:
                        step_idx = i // getattr(self.scheduler, "order", 1)
                        callback(step_idx, t, latents)

                if XLA_AVAILABLE:
                    xm.mark_step()

        if not output_type == "latent":
            # make sure the VAE is in float32 mode, as it overflows in float16
            needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast

            if needs_upcasting:
                self.upcast_vae()
                latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)

            image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]

            # cast back to fp16 if needed
            if needs_upcasting:
                self.vae.to(dtype=torch.float16)
        else:
            image = latents

        if not output_type == "latent":
            # apply watermark if available
            if self.watermark is not None:
                image = self.watermark.apply_watermark(image)

            image = self.image_processor.postprocess(image, output_type=output_type)

        # Offload all models
        self.maybe_free_model_hooks()

        if not return_dict:
            return (image,)

        return StableDiffusionXLPipelineOutput(images=image)
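
# Example usage (a minimal sketch; the model id, LoRA path, token names, and
# prompts below are placeholders, not files shipped with this repository):
#
#   pipe = RGPipe.from_pretrained(
#       "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
#   ).to("cuda")
#   pipe.load_lora_weights("path/to/dco_lora")  # LoRA weights fine-tuned with DCO
#   image = pipe.my_gen(
#       prompt="a dog in the snow",              # plain prompt for the pretrained model
#       prompt_ti="a <s0><s1> dog in the snow",  # prompt with learned tokens for the LoRA model
#       guidance_scale=7.5,
#       guidance_scale_lora=3.0,
#   ).images[0]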