ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.
With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that preserves the spatial information from the depth map. This is a more flexible and accurate way to control the image generation process.
The abstract from the paper is:
We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.
You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub.
🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve!
If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script.
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
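The sketch below illustrates both tips with a minimal, assumed setup: it swaps the default scheduler for UniPCMultistepScheduler and then reuses the already-loaded components in the img2img variant of the pipeline. The checkpoint names are only examples; any compatible SDXL base model and ControlNet should work.
>>> # Swap the scheduler and reuse already-loaded components (sketch; checkpoint names are examples).
>>> import torch
>>> from diffusers import (
...     StableDiffusionXLControlNetPipeline,
...     StableDiffusionXLControlNetImg2ImgPipeline,
...     ControlNetModel,
...     UniPCMultistepScheduler,
... )
>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> # Trade off scheduler speed and quality by swapping the scheduler.
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
>>> # Reuse the already-loaded components in an img2img pipeline instead of loading them twice.
>>> img2img = StableDiffusionXLControlNetImg2ImgPipeline(**pipe.components)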
( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None )
Parameters
tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
tokenizer_2 (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
controlnet (ControlNetModel or List[ControlNetModel]) — Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
force_zeros_for_empty_prompt (bool, optional, defaults to "True") — Whether the negative prompt embeddings should always be set to 0. Also see the config of stabilityai/stable-diffusion-xl-base-1-0.
add_watermarker (bool, optional) — Whether to use the invisible_watermark library to watermark output images. If not defined, it defaults to True if the package is installed; otherwise no watermarker is used.

Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods:
from_single_file() for loading .ckpt files

( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None sigmas: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Union = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple
Parameters
prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds instead.
prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], List[np.ndarray], List[List[torch.Tensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]) — The ControlNet input condition to provide guidance to the unet for generation. If the type is specified as torch.Tensor, it is passed to ControlNet as is. PIL.Image.Image can also be accepted as an image. The dimensions of the output image default to image's dimensions. If height and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in init, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
timesteps (List[int], optional) — Custom timesteps to use for the denoising process with schedulers which support a timesteps argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used. Must be in descending order.
sigmas (List[float], optional) — Custom sigmas to use for the denoising process with schedulers which support a sigmas argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used.
denoising_end (float, optional) — When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be completed before it is intentionally prematurely terminated. As a result, the returned sample will still retain a substantial amount of noise as determined by the discrete timesteps selected by the scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in Refining the Image Output.
guidance_scale (float, optional, defaults to 5.0) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
negative_prompt_2 (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
latents (torch.Tensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, pooled text embeddings are generated from the prompt input argument.
negative_pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, pooled negative_prompt_embeds are generated from the negative_prompt input argument.
ip_adapter_image (PipelineImageInput, optional) — Optional image input to work with IP Adapters.
ip_adapter_image_embeds (List[torch.Tensor], optional) — Pre-generated image embeddings for IP-Adapter. It should be a list of the same length as the number of IP-Adapters. Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding if do_classifier_free_guidance is set to True. If not provided, embeddings are computed from the ip_adapter_image input argument.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added to the residual in the original unet. If multiple ControlNets are specified in init, you can set the corresponding scale as a list.
guess_mode (bool, optional, defaults to False) — The ControlNet encoder tries to recognize the content of the input image even if you remove all prompts. A guidance_scale value between 3.0 and 5.0 is recommended.
control_guidance_start (float or List[float], optional, defaults to 0.0) — The percentage of total steps at which the ControlNet starts applying.
control_guidance_end (float or List[float], optional, defaults to 1.0) — The percentage of total steps at which the ControlNet stops applying.
original_size (Tuple[int], optional, defaults to (1024, 1024)) — If original_size is not the same as target_size the image will appear to be down- or upsampled. original_size defaults to (height, width) if not specified. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — crops_coords_top_left can be used to generate an image that appears to be "cropped" from the position crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting crops_coords_top_left to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
target_size (Tuple[int], optional, defaults to (1024, 1024)) — For most cases, target_size should be set to the desired height and width of the generated image. If not specified, it will default to (height, width). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a specific image resolution. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — To negatively condition the generation process based on specific crop coordinates. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a target image resolution. It should be the same as the target_size for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
callback_on_step_end (Callable, PipelineCallback, MultiPipelineCallbacks, optional) — A function or a subclass of PipelineCallback or MultiPipelineCallbacks that is called at the end of each denoising step during inference, with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned containing the output images.
The call function to the pipeline for generation.
Examples:
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch
>>> import cv2
>>> from PIL import Image
>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
>>> negative_prompt = "low quality, bad quality, sketches"
>>> # download an image
>>> image = load_image(
... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
... )
>>> # initialize the models and pipeline
>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization
>>> controlnet = ControlNetModel.from_pretrained(
... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()
>>> # get canny image
>>> image = np.array(image)
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)
>>> # generate image
>>> image = pipe(
... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image
... ).images[0]
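Building on the example above, the following sketch shows how the ControlNet-specific arguments (controlnet_conditioning_scale, control_guidance_start, control_guidance_end, and guess_mode) can be combined in one call. The values are illustrative choices, not official recommendations.
>>> # Reuse `pipe`, `prompt`, `negative_prompt`, and `canny_image` from the example above.
>>> image = pipe(
...     prompt,
...     negative_prompt=negative_prompt,
...     image=canny_image,
...     controlnet_conditioning_scale=0.5,  # scale the ControlNet residuals before they are added to the unet
...     control_guidance_start=0.0,  # apply the ControlNet from the first step...
...     control_guidance_end=0.8,  # ...and release it for the last 20% of steps
...     guess_mode=True,  # let the ControlNet encoder infer content without relying on the prompt
...     guidance_scale=4.0,  # 3.0-5.0 is recommended together with guess_mode
...     num_inference_steps=30,
... ).images[0]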
( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )
Parameters
prompt (str or List[str], optional) — The prompt to be encoded.
prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
device (torch.device) — The torch device.
num_images_per_prompt (int) — The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.
negative_pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
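If you prefer to precompute embeddings and pass them via the prompt_embeds arguments of the pipeline call, a minimal sketch could look like the following. It assumes encode_prompt returns the four tensors in the order (prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds); verify this against your installed diffusers version.
>>> # Precompute prompt embeddings once and reuse them across calls.
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="aerial view, a futuristic research complex in a bright foggy jungle",
...     negative_prompt="low quality, bad quality, sketches",
...     do_classifier_free_guidance=True,
... )
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...     image=canny_image,
... ).images[0]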
( w: Tensor embedding_dim: int = 512 dtype: dtype = torch.float32 ) → torch.Tensor
Parameters
w (torch.Tensor) — Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
embedding_dim (int, optional, defaults to 512) — Dimension of the embeddings to generate.
dtype (torch.dtype, optional, defaults to torch.float32) — Data type of the generated embeddings.

Returns

torch.Tensor

Embedding vectors with shape (len(w), embedding_dim).
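For intuition, a sinusoidal embedding of the guidance weight w can be computed roughly as sketched below. This is a simplified illustration of the idea (as used for guidance-distilled models), not necessarily the exact implementation inside the pipeline.
>>> import torch
>>> def guidance_scale_embedding(w, embedding_dim=512, dtype=torch.float32):
...     # Sinusoidal projection of the guidance weight w, similar to a timestep embedding.
...     w = w * 1000.0
...     half_dim = embedding_dim // 2
...     freqs = torch.exp(-torch.log(torch.tensor(10000.0)) * torch.arange(half_dim, dtype=dtype) / (half_dim - 1))
...     emb = w.to(dtype)[:, None] * freqs[None, :]
...     emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
...     if embedding_dim % 2 == 1:
...         emb = torch.nn.functional.pad(emb, (0, 1))
...     return emb  # shape: (len(w), embedding_dim)
>>> emb = guidance_scale_embedding(torch.tensor([5.0, 7.5]), embedding_dim=256)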
( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None )
Parameters
text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion XL uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
text_encoder_2 (CLIPTextModelWithProjection) — Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of CLIP, specifically the laion/CLIP-ViT-bigG-14-laion2B-39B-b160k variant.
tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
tokenizer_2 (CLIPTokenizer) — Second tokenizer of class CLIPTokenizer.
controlnet (ControlNetModel or List[ControlNetModel]) — Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
requires_aesthetics_score (bool, optional, defaults to "False") — Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the config of stabilityai/stable-diffusion-xl-refiner-1-0.
force_zeros_for_empty_prompt (bool, optional, defaults to "True") — Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of stabilityai/stable-diffusion-xl-base-1-0.
add_watermarker (bool, optional) — Whether to use the invisible_watermark library to watermark output images. If not defined, it will default to True if the package is installed, otherwise no watermarker will be used.
feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
The pipeline also inherits the following loading methods:
( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Union = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple
Parameters
prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], List[np.ndarray], List[List[torch.Tensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]) — The initial image to be used as the starting point for the image generation process. Can also accept image latents as image; if passing latents directly, they will not be encoded again.
control_image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], List[np.ndarray], List[List[torch.Tensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]) — The ControlNet input condition. ControlNet uses this input condition to generate guidance for the unet. If the type is specified as torch.Tensor, it is passed to ControlNet as is. PIL.Image.Image can also be accepted as an image. The dimensions of the output image default to image's dimensions. If height and/or width are passed, image is resized according to them. If multiple ControlNets are specified in init, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet.
height (int, optional, defaults to the size of control_image) — The height in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
width (int, optional, defaults to the size of control_image) — The width in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
strength (float, optional, defaults to 0.8) — Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a starting point and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 5.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler and is ignored for others.
generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
latents (torch.Tensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.
negative_pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.
ip_adapter_image (PipelineImageInput, optional) — Optional image input to work with IP Adapters.
ip_adapter_image_embeds (List[torch.Tensor], optional) — Pre-generated image embeddings for IP-Adapter. It should be a list of the same length as the number of IP-Adapters. Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding if do_classifier_free_guidance is set to True. If not provided, embeddings are computed from the ip_adapter_image input argument.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
controlnet_conditioning_scale (float or List[float], optional, defaults to 0.8) — The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added to the residual in the original unet. If multiple ControlNets are specified in init, you can set the corresponding scale as a list.
guess_mode (bool, optional, defaults to False) — In this mode, the ControlNet encoder will try its best to recognize the content of the input image even if you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended.
control_guidance_start (float or List[float], optional, defaults to 0.0) — The percentage of total steps at which the ControlNet starts applying.
control_guidance_end (float or List[float], optional, defaults to 1.0) — The percentage of total steps at which the ControlNet stops applying.
original_size (Tuple[int], optional, defaults to (1024, 1024)) — If original_size is not the same as target_size the image will appear to be down- or upsampled. original_size defaults to (height, width) if not specified. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — crops_coords_top_left can be used to generate an image that appears to be "cropped" from the position crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting crops_coords_top_left to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
target_size (Tuple[int], optional, defaults to (1024, 1024)) — For most cases, target_size should be set to the desired height and width of the generated image. If not specified, it will default to (height, width). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a specific image resolution. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — To negatively condition the generation process based on specific crop coordinates. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a target image resolution. It should be the same as the target_size for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
aesthetic_score (float, optional, defaults to 6.0) — Used to simulate an aesthetic score of the generated image by influencing the positive text condition. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
negative_aesthetic_score (float, optional, defaults to 2.5) — Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. Can be used to simulate an aesthetic score of the generated image by influencing the negative text condition.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
callback_on_step_end (Callable, PipelineCallback, MultiPipelineCallbacks, optional) — A function or a subclass of PipelineCallback or MultiPipelineCallbacks that is called at the end of each denoising step during inference, with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

StableDiffusionPipelineOutput or tuple

StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple containing the output images.
Function invoked when calling the pipeline for generation.
Examples:
>>> # pip install accelerate transformers safetensors diffusers
>>> import torch
>>> import numpy as np
>>> from PIL import Image
>>> from transformers import DPTImageProcessor, DPTForDepthEstimation
>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL
>>> from diffusers.utils import load_image
>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda")
>>> feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")
>>> controlnet = ControlNetModel.from_pretrained(
... "diffusers/controlnet-depth-sdxl-1.0-small",
... variant="fp16",
... use_safetensors=True,
... torch_dtype=torch.float16,
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
... "stabilityai/stable-diffusion-xl-base-1.0",
... controlnet=controlnet,
... vae=vae,
... variant="fp16",
... use_safetensors=True,
... torch_dtype=torch.float16,
... )
>>> pipe.enable_model_cpu_offload()
>>> def get_depth_map(image):
... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda")
... with torch.no_grad(), torch.autocast("cuda"):
... depth_map = depth_estimator(image).predicted_depth
... depth_map = torch.nn.functional.interpolate(
... depth_map.unsqueeze(1),
... size=(1024, 1024),
... mode="bicubic",
... align_corners=False,
... )
... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
... depth_map = (depth_map - depth_min) / (depth_max - depth_min)
... image = torch.cat([depth_map] * 3, dim=1)
... image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
... return image
>>> prompt = "A robot, 4k photo"
>>> image = load_image(
... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
... "/kandinsky/cat.png"
... ).resize((1024, 1024))
>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization
>>> depth_image = get_depth_map(image)
>>> images = pipe(
... prompt,
... image=image,
... control_image=depth_image,
... strength=0.99,
... num_inference_steps=50,
... controlnet_conditioning_scale=controlnet_conditioning_scale,
... ).images
>>> images[0].save("robot_cat.png")
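Because strength and controlnet_conditioning_scale pull in different directions (how far to depart from the input image versus how strongly to follow the depth map), a small sweep such as the hedged sketch below can help pick values. It reuses pipe, prompt, image, and depth_image from the example above; the specific values are only illustrative.
>>> # Sweep a few strength values while keeping the depth conditioning fixed.
>>> for strength in [0.5, 0.8, 0.99]:
...     result = pipe(
...         prompt,
...         image=image,
...         control_image=depth_image,
...         strength=strength,
...         controlnet_conditioning_scale=0.5,
...         num_inference_steps=50,
...     ).images[0]
...     result.save(f"robot_cat_strength_{strength}.png")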
( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )
Parameters
prompt (str or List[str], optional) — The prompt to be encoded.
prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
device (torch.device) — The torch device.
num_images_per_prompt (int) — The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.
negative_pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: Optional = None image_encoder: Optional = None )
Parameters
text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion XL uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
text_encoder_2 (CLIPTextModelWithProjection) — Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of CLIP, specifically the laion/CLIP-ViT-bigG-14-laion2B-39B-b160k variant.
tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
tokenizer_2 (CLIPTokenizer) — Second tokenizer of class CLIPTokenizer.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for image inpainting using Stable Diffusion XL with ControlNet guidance.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
The pipeline also inherits the following loading methods:
from_single_file() for loading .ckpt files

( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Union = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput
or tuple
Parameters
prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
image (PIL.Image.Image) — Image, or tensor representing an image batch, which will be inpainted, i.e. parts of the image will be masked out with mask_image and repainted according to prompt.
mask_image (PIL.Image.Image) — Image, or tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, H, W, 1).
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
padding_mask_crop (int, optional, defaults to None) — The size of margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that contains all masked areas, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large and contains information irrelevant to inpainting, such as background.
strength (float, optional, defaults to 0.9999) — Conceptually, indicates how much to transform the masked portion of the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked portion of the reference image. Note that in the case of denoising_start being declared as an integer, the value of strength will be ignored.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
denoising_start (float, optional) — When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and it is assumed that the passed image is a partly denoised image. Note that when this is specified, strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in Refining the Image Output.
denoising_end (float, optional) — When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be completed before it is intentionally prematurely terminated. As a result, the returned sample will still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in Refining the Image Output.
guidance_scale (float, optional, defaults to 5.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
ip_adapter_image (PipelineImageInput, optional) — Optional image input to work with IP Adapters.
ip_adapter_image_embeds (List[torch.Tensor], optional) — Pre-generated image embeddings for IP-Adapter. It should be a list of the same length as the number of IP-Adapters. Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding if do_classifier_free_guidance is set to True. If not provided, embeddings are computed from the ip_adapter_image input argument.
pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.
negative_pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler and is ignored for others.
generator (torch.Generator, optional) — One or a list of torch generator(s) to make generation deterministic.
latents (torch.Tensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
original_size (Tuple[int], optional, defaults to (1024, 1024)) — If original_size is not the same as target_size the image will appear to be down- or upsampled. original_size defaults to (width, height) if not specified. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — crops_coords_top_left can be used to generate an image that appears to be "cropped" from the position crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting crops_coords_top_left to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
target_size (Tuple[int], optional, defaults to (1024, 1024)) — For most cases, target_size should be set to the desired height and width of the generated image. If not specified, it will default to (width, height). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
aesthetic_score (float, optional, defaults to 6.0) — Used to simulate an aesthetic score of the generated image by influencing the positive text condition. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
negative_aesthetic_score (float, optional, defaults to 2.5) — Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. Can be used to simulate an aesthetic score of the generated image by influencing the negative text condition.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
callback_on_step_end (Callable, PipelineCallback, MultiPipelineCallbacks, optional) — A function or a subclass of PipelineCallback or MultiPipelineCallbacks that is called at the end of each denoising step during inference, with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple

~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
Function invoked when calling the pipeline for generation.
Examples:
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler
>>> from diffusers.utils import load_image
>>> from PIL import Image
>>> import cv2
>>> import numpy as np
>>> import torch
>>> init_image = load_image(
... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png"
... )
>>> init_image = init_image.resize((1024, 1024))
>>> generator = torch.Generator(device="cpu").manual_seed(1)
>>> mask_image = load_image(
... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png"
... )
>>> mask_image = mask_image.resize((1024, 1024))
>>> def make_canny_condition(image):
... image = np.array(image)
... image = cv2.Canny(image, 100, 200)
... image = image[:, :, None]
... image = np.concatenate([image, image, image], axis=2)
... image = Image.fromarray(image)
... return image
>>> control_image = make_canny_condition(init_image)
>>> controlnet = ControlNetModel.from_pretrained(
... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()
>>> # generate image
>>> image = pipe(
... "a handsome man with ray-ban sunglasses",
... num_inference_steps=20,
... generator=generator,
... eta=1.0,
... image=init_image,
... mask_image=mask_image,
... control_image=control_image,
... ).images[0]
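When the masked region is small compared to the full picture, the padding_mask_crop argument described above restricts diffusion to a crop around the mask. The sketch below reuses the objects from the example above; the 32-pixel margin is only an illustrative value.
>>> # Only diffuse a crop around the mask (with a 32-pixel margin), then paste it back.
>>> image = pipe(
...     "a handsome man with ray-ban sunglasses",
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
...     padding_mask_crop=32,
...     strength=0.99,
...     num_inference_steps=20,
...     generator=generator,
... ).images[0]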
( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )
Parameters
prompt (str or List[str], optional) — The prompt to be encoded.
prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
device (torch.device) — The torch device.
num_images_per_prompt (int) — The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.
negative_pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
( images: Union nsfw_content_detected: Optional )
Parameters
images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.