### 🔧 Requirements and Installation

Install the requirements:

```bash
## create a virtual environment with python >= 3.10 and <= 3.12, e.g.
python -m venv uso_env
source uso_env/bin/activate
## or
conda create -n uso_env python=3.10 -y
conda activate uso_env
## then install the requirements
pip install -r requirements.txt  # legacy installation command
```
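If you are unsure whether your interpreter falls in the stated range, a one-liner check can help. This helper is purely illustrative and not part of the USO repository.

```python
import sys

# The instructions above require Python >= 3.10 and <= 3.12.
def version_ok(major: int, minor: int) -> bool:
    return (3, 10) <= (major, minor) <= (3, 12)

print(version_ok(sys.version_info.major, sys.version_info.minor))
```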

Then download the checkpoints in one of the following ways:

- **If you already have some of the checkpoints**

  ```bash
  # 1. Download the USO official checkpoints
  pip install huggingface_hub
  huggingface-cli download bytedance-research/USO --local-dir <YOU_SAVE_DIR> --local-dir-use-symlinks False

  # 2. Then set the environment variables for the FLUX.1 base model
  export AE="YOUR_AE_PATH"
  export FLUX_DEV="YOUR_FLUX_DEV_PATH"
  export T5="YOUR_T5_PATH"
  export CLIP="YOUR_CLIP_PATH"
  # or export HF_HOME="YOUR_HF_HOME"

  # 3. Then set the environment variables for USO
  export LORA="<YOU_SAVE_DIR>/uso_flux_v1.0/dit_lora.safetensors"
  export PROJECTION_MODEL="<YOU_SAVE_DIR>/uso_flux_v1.0/projector.safetensors"
  ```
- **Directly run the inference scripts.** The checkpoints will be downloaded automatically by the `hf_hub_download` function in the code.
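As a rough illustration of how the exported paths above might be consumed, here is a minimal sketch. The variable names (`AE`, `FLUX_DEV`, `T5`, `CLIP`, `LORA`, `PROJECTION_MODEL`) come from the README; the helper function itself is hypothetical, not code from the USO repository.

```python
import os

# Hypothetical helper: look up a checkpoint path from one of the
# environment variables exported above, failing loudly if it is unset.
def resolve_checkpoint(env_var: str) -> str:
    path = os.environ.get(env_var)
    if not path:
        raise KeyError(f"{env_var} is not set; export it as shown above")
    return path
```

For example, `resolve_checkpoint("LORA")` would return the path exported for the USO DiT LoRA weights.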

### ✈️ Inference

Start from the examples below to explore and spark your creativity. ✨

```bash
# The first image is a content reference; the rest are style references.

# for subject-driven generation
python inference.py --prompt "The man in flower shops carefully match bouquets, conveying beautiful emotions and blessings with flowers." --image_paths "assets/gradio_examples/identity1.jpg" --width 1024 --height 1024

# for style-driven generation
# please keep the first image path empty
python inference.py --prompt "A cat sleeping on a chair." --image_paths "" "assets/gradio_examples/style1.webp" --width 1024 --height 1024

# for ip-style generation
python inference.py --prompt "The woman gave an impassioned speech on the podium." --image_paths "assets/gradio_examples/identity2.webp" "assets/gradio_examples/style2.webp" --width 1024 --height 1024

# for multi-style generation
# please keep the first image path empty
python inference.py --prompt "A handsome man." --image_paths "" "assets/gradio_examples/style3.webp" "assets/gradio_examples/style4.webp" --width 1024 --height 1024
```
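The `--image_paths` convention above (first path = content reference, empty string when absent, remaining paths = style references) can be sketched as follows. This is an illustrative helper, not actual code from `inference.py`.

```python
# Split an --image_paths list into its content reference and style
# references, per the convention described in the examples above.
def split_refs(image_paths):
    # An empty first entry means there is no content reference.
    content = image_paths[0] if image_paths and image_paths[0] else None
    styles = [p for p in image_paths[1:] if p]
    return content, styles
```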

## 📄 Disclaimer