---
license: apache-2.0
base_model:
  - black-forest-labs/FLUX.1-dev
tags:
  - lora
  - text-to-image
---
# EliGen: Entity-Level Controlled Image Generation

## Introduction

We propose EliGen, a novel approach that leverages fine-grained entity-level information for precise, controllable text-to-image generation. EliGen excels at tasks such as entity-level controlled image generation and image inpainting, but its applicability is not limited to these areas: it can also be seamlessly integrated with existing community models such as IP-Adapter and In-Context LoRA.

* Paper: [EliGen: Entity-Level Controlled Image Generation with Regional Attention](https://arxiv.org/abs/2501.01097)
* Github: [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio)
* Model: 
  * [ModelScope](https://www.modelscope.cn/models/DiffSynth-Studio/Eligen)
  * [HuggingFace](https://huggingface.co/modelscope/EliGen)
* Online Demo: [ModelScope EliGen Studio](https://www.modelscope.cn/studios/DiffSynth-Studio/EliGen)
* Training dataset: [ModelScope Dataset](https://www.modelscope.cn/datasets/DiffSynth-Studio/EliGenTrainSet)

## Methodology

![regional-attention](./samples/regional_attention.jpg)

We introduce a regional attention mechanism within the DiT framework to process the condition of each entity. This mechanism lets each entity's local prompt semantically influence a specific region of the image through regional attention. To further enhance EliGen's layout control capabilities, we construct an entity-annotated dataset and fine-tune the model using the LoRA framework.

1. **Regional Attention**: Regional attention, illustrated in the figure above, can be readily applied to other text-to-image models. Its core principle is to transform the positional information of each entity into an attention mask, ensuring that each local prompt only affects its designated region.
   
2. **Dataset with Entity Annotation**: To construct a dedicated entity-control dataset, we start by randomly selecting captions from DiffusionDB and generating the corresponding source images with FLUX. Next, we employ Qwen2-VL 72B, recognized for its advanced grounding capabilities among MLLMs, to randomly identify entities within each image. The entities are annotated with local prompts and bounding boxes for precise localization, forming the foundation of our training dataset.

3. **Training**: We use LoRA (Low-Rank Adaptation) and DeepSpeed to fine-tune the regional attention mechanism on this dataset, enabling EliGen to achieve effective entity-level control.
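The mask construction behind step 1 can be sketched in plain NumPy. This is an illustrative reconstruction, not the DiffSynth-Studio implementation: the token layout, function names, and the exact attention rules (e.g., whether entity tokens also see the global prompt) are assumptions made for the sketch.

```python
import numpy as np

def entity_region_mask(h, w, boxes):
    """Map each entity's bounding box to latent image patches.

    h, w: latent grid size; boxes: list of (x0, y0, x1, y1) in [0, 1].
    Returns a boolean array of shape (num_entities, h * w).
    """
    masks = np.zeros((len(boxes), h * w), dtype=bool)
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        region = np.zeros((h, w), dtype=bool)
        region[int(y0 * h):int(np.ceil(y1 * h)),
               int(x0 * w):int(np.ceil(x1 * w))] = True
        masks[i] = region.reshape(-1)
    return masks

def regional_attention_mask(num_global_tokens, tokens_per_entity, region_masks):
    """Assemble a joint attention mask over [global text | entity text | image] tokens.

    True = attention allowed. In this sketch, entity text tokens interact only
    with their own tokens and the image patches inside their region, while
    image tokens always see the global prompt and each other.
    """
    n_ent, n_img = region_masks.shape
    n_txt = num_global_tokens + n_ent * tokens_per_entity
    n = n_txt + n_img
    allow = np.zeros((n, n), dtype=bool)
    # Global prompt tokens attend to themselves and all image tokens (and back).
    allow[:num_global_tokens, :num_global_tokens] = True
    allow[:num_global_tokens, n_txt:] = True
    allow[n_txt:, :num_global_tokens] = True
    # Image tokens attend to each other freely.
    allow[n_txt:, n_txt:] = True
    for i in range(n_ent):
        s = num_global_tokens + i * tokens_per_entity
        e = s + tokens_per_entity
        allow[s:e, s:e] = True  # an entity's tokens see each other
        img = n_txt + np.nonzero(region_masks[i])[0]
        allow[np.ix_(range(s, e), img)] = True  # entity tokens -> their region
        allow[np.ix_(img, range(s, e))] = True  # region patches -> their entity
    return allow
```

Restricting the attention this way is what confines each local prompt's influence to its bounding box while the global prompt still conditions the whole image.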

## Usage
This model was trained using [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio). We recommend using DiffSynth-Studio for generation.
```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```
1. **Entity-Level Controlled Image Generation**
   EliGen achieves effective entity-level control results. See [entity_control.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_control.py) for usage.
2. **Image Inpainting**
   To apply EliGen to the image inpainting task, we propose an inpainting fusion pipeline that preserves non-inpainting areas while enabling precise, entity-level modifications within the inpainted regions.
   See [entity_inpaint.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_inpaint.py) for usage.
3. **Styled Entity Control**
   EliGen can be seamlessly integrated with existing community models. We provide an example of integrating it with IP-Adapter. See [entity_control_ipadapter.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_control_ipadapter.py) for usage.
4. **Entity Transfer**
   We provide an example of integrating EliGen with In-Context LoRA, which achieves interesting entity transfer results. See [entity_transfer.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_transfer.py) for usage.
5. **Play with EliGen using UI**
   Download the EliGen checkpoint from [ModelScope](https://www.modelscope.cn/models/DiffSynth-Studio/Eligen) to `models/lora/entity_control` and run the following command to launch the interactive UI:
   ```bash
   python apps/gradio/entity_level_control.py
   ```
## Examples
### Entity-Level Controlled Image Generation

1. The effect of generating images with continuously changing entity positions.

<div align="center">
    <video width="80%" controls>
        <source src="https://github.com/user-attachments/assets/54a048c8-b663-4262-8c40-43c87c266d4b" type="video/mp4">
        Your browser does not support the video tag.
    </video>
</div>

2. Image generation with complex entity combinations, demonstrating the strong generalization of EliGen. See [entity_control.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_control.py) `example_1-6` for the generation prompts.

|Entity Conditions|Generated Image|
|-|-|
|![eligen_example_1_mask_0](./samples/e1_m.png)|![eligen_example_1_0](./samples/e1.png)|
|![eligen_example_2_mask_0](./samples/e2_m.png)|![eligen_example_2_0](./samples/e2.png)|
|![eligen_example_3_mask_27](./samples/e3_m.png)|![eligen_example_3_27](./samples/e3.png)|
|![eligen_example_4_mask_21](./samples/e4_m.png)|![eligen_example_4_21](./samples/e4.png)|
|![eligen_example_5_mask_0](./samples/e5_m.png)|![eligen_example_5_0](./samples/e5.png)|
|![eligen_example_6_mask_8](./samples/e6_m.png)|![eligen_example_6_8](./samples/e6.png)|

3. Demonstration of the robustness of EliGen: the following examples are generated with the same prompt but different seeds. Refer to [entity_control.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_control.py) `example_7` for the prompts.

|Entity Conditions|Generated Image|
|-|-|
|![eligen_example_7_mask_5](./samples/e7_m.png)|![eligen_example_7_5](./samples/e7_1.png)|
|![eligen_example_7_mask_5](./samples/e7_m.png)|![eligen_example_7_6](./samples/e7_2.png)|
|![eligen_example_7_mask_5](./samples/e7_m.png)|![eligen_example_7_7](./samples/e7_3.png)|
|![eligen_example_7_mask_5](./samples/e7_m.png)|![eligen_example_7_8](./samples/e7_4.png)|

### Image Inpainting
Demonstration of the inpainting mode of EliGen; see [entity_inpaint.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_inpaint.py) for the generation prompts.

|Inpainting Input|Inpainting Output|
|-|-|
|![inpaint_i1](./samples/inpaint_i1.jpg)|![inpaint_o1](./samples/inpaint_o1.png)|
|![inpaint_i2](./samples/inpaint_i2.png)|![inpaint_o2](./samples/inpaint_o2.png)|
### Styled Entity Control
Demonstration of styled entity control with EliGen and IP-Adapter; see [entity_control_ipadapter.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_control_ipadapter.py) for the generation prompts.

|Style Reference|Entity Control Variant 1|Entity Control Variant 2|Entity Control Variant 3|
|-|-|-|-|
|![ip_ref](./samples/ip_ref.png)|![ip_1](./samples/ip_1.png)|![ip_2](./samples/ip_2.png)|![ip_3](./samples/ip_3.png)|

We also provide a demo of styled entity control with EliGen and a style-specific LoRA; see [styled_entity_control.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/styled_entity_control.py) for details. Below is a visualization of EliGen with the [Lego DreamBooth LoRA](https://huggingface.co/merve/flux-lego-lora-dreambooth).
|![image_1_base](./samples/styled_entity_control_example_1_mask_0.png)|![result1](./samples/styled_entity_control_example_2_mask_0.png)|![result2](./samples/styled_entity_control_example_3_mask_27.png)|![result3](./samples/styled_entity_control_example_4_mask_21.png)|
|-|-|-|-|
|![image_1_base](./samples/styled_entity_control_example_5_mask_0.png)|![result1](./samples/styled_entity_control_example_6_mask_8.png)|![result2](./samples/styled_entity_control_example_7_mask_5.png)|![result3](./samples/styled_entity_control_example_7_mask_6.png)|

### Entity Transfer
Demonstration of entity transfer with EliGen and In-Context LoRA; see [entity_transfer.py](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/EntityControl/entity_transfer.py) for the generation prompts.

|Entity to Transfer|Transfer Target Image|Transfer Example 1|Transfer Example 2|
|-|-|-|-|
|![ic_logo](./samples/ic_logo.jpg)|![ic_target](./samples/ic_target.png)|![ic_1](./samples/ic_1.jpg)|![ic_2](./samples/ic_2.jpg)|