LEDITS++ was proposed in LEDITS++: Limitless Image Editing using Text-to-Image Models by Manuel Brack, Felix Friedrich, Katharina Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, Apolinário Passos.
The abstract from the paper is:
Text-to-image diffusion models have recently received increasing interest for their astonishing ability to produce high-fidelity images from solely text inputs. Subsequent research efforts aim to exploit and apply their capabilities to real image editing. However, existing image-to-image methods are often inefficient, imprecise, and of limited versatility. They either require time-consuming fine-tuning, deviate unnecessarily strongly from the input image, and/or lack support for multiple, simultaneous edits. To address these issues, we introduce LEDITS++, an efficient yet versatile and precise textual image manipulation technique. LEDITS++'s novel inversion approach requires no tuning nor optimization and produces high-fidelity results with a few diffusion steps. Second, our methodology supports multiple simultaneous edits and is architecture-agnostic. Third, we use a novel implicit masking technique that limits changes to relevant image regions. We propose the novel TEdBench++ benchmark as part of our exhaustive evaluation. Our results demonstrate the capabilities of LEDITS++ and its improvements over previous methods. The project page is available at https://leditsplusplus-project.static.hf.space.
You can find additional information about LEDITS++ on the project page and try it out in a demo.
We provide two distinct pipelines based on different pre-trained models.
( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler] safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True )
Parameters

scheduler (DDIMScheduler or DPMSolverMultistepScheduler) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DPMSolverMultistepScheduler or DDIMScheduler. If any other scheduler is passed it will automatically be set to DPMSolverMultistepScheduler.

safety_checker (StableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.

Pipeline for textual image editing using LEDits++ with Stable Diffusion.
This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
( negative_prompt: typing.Union[str, typing.List[str], NoneType] = None generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True editing_prompt: typing.Union[str, typing.List[str], NoneType] = None editing_prompt_embeds: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None reverse_editing_direction: typing.Union[bool, typing.List[bool], NoneType] = False edit_guidance_scale: typing.Union[float, typing.List[float], NoneType] = 5 edit_warmup_steps: typing.Union[int, typing.List[int], NoneType] = 0 edit_cooldown_steps: typing.Union[int, typing.List[int], NoneType] = None edit_threshold: typing.Union[float, typing.List[float], NoneType] = 0.9 user_mask: typing.Optional[torch.Tensor] = None sem_guidance: typing.Optional[typing.List[torch.Tensor]] = None use_cross_attn_mask: bool = False use_intersect_mask: bool = True attn_store_steps: typing.Optional[typing.List[int]] = [] store_averaged_over_steps: bool = True cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None guidance_rescale: float = 0.0 clip_skip: typing.Optional[int] = None callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] **kwargs ) → LEditsPPDiffusionPipelineOutput or tuple
Parameters

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

generator (torch.Generator or List[torch.Generator], optional) —
One or a list of torch generator(s) to make generation deterministic.

output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image.Image or np.array.

return_dict (bool, optional, defaults to True) —
Whether or not to return a LEditsPPDiffusionPipelineOutput instead of a plain tuple.

editing_prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. The image is reconstructed by setting editing_prompt = None. The guidance direction of a prompt should be specified via reverse_editing_direction.

editing_prompt_embeds (torch.Tensor, optional) —
Pre-computed embeddings to use for guiding the image generation. The guidance direction of an embedding should be specified via reverse_editing_direction.

negative_prompt_embeds (torch.Tensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.

reverse_editing_direction (bool or List[bool], optional, defaults to False) —
Whether the effect of the corresponding prompt in editing_prompt should be increased or decreased.

edit_guidance_scale (float or List[float], optional, defaults to 5) —
Guidance scale for guiding the image generation. If provided as a list, values should correspond to editing_prompt. edit_guidance_scale is defined as s_e in equation 12 of the LEDITS++ paper.

edit_warmup_steps (int or List[int], optional, defaults to 0) —
Number of diffusion steps (for each prompt) for which guidance will not be applied.

edit_cooldown_steps (int or List[int], optional, defaults to None) —
Number of diffusion steps (for each prompt) after which guidance will no longer be applied.

edit_threshold (float or List[float], optional, defaults to 0.9) —
Masking threshold of guidance. The threshold should be proportional to the image region that is modified. edit_threshold is defined as λ in equation 12 of the LEDITS++ paper.

user_mask (torch.Tensor, optional) —
User-provided mask for even better control over the editing process. This is helpful when LEDITS++'s implicit masks do not meet user preferences.

sem_guidance (List[torch.Tensor], optional) —
List of pre-generated guidance vectors to be applied at generation. The length of the list has to correspond to num_inference_steps.

use_cross_attn_mask (bool, defaults to False) —
Whether cross-attention masks are used. Cross-attention masks are always used when use_intersect_mask is set to True. Cross-attention masks are defined as M^1 in equation 12 of the LEDITS++ paper.

use_intersect_mask (bool, defaults to True) —
Whether the masking term is calculated as the intersection of cross-attention masks and masks derived from the noise estimate. Cross-attention masks are defined as M^1 and masks derived from the noise estimate as M^2 in equation 12 of the LEDITS++ paper.

attn_store_steps (List[int], optional) —
Steps for which the attention maps are stored in the AttentionStore. Just for visualization purposes.

store_averaged_over_steps (bool, defaults to True) —
Whether the attention maps for the attn_store_steps are stored averaged over the diffusion steps. If False, attention maps for each step are stored separately. Just for visualization purposes.

cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.

guidance_rescale (float, optional, defaults to 0.0) —
Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are Flawed. Guidance rescale should fix overexposure when using zero terminal SNR.

clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.

callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

LEditsPPDiffusionPipelineOutput or tuple

LEditsPPDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
The call function to the pipeline for editing. The invert() method has to be called beforehand. Edits will always be performed for the last inverted image(s).
Examples:
>>> import torch
>>> from diffusers import LEditsPPPipelineStableDiffusion
>>> from diffusers.utils import load_image
>>> pipe = LEditsPPPipelineStableDiffusion.from_pretrained(
... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> img_url = "https://www.aiml.informatik.tu-darmstadt.de/people/mbrack/cherry_blossom.png"
>>> image = load_image(img_url).convert("RGB")
>>> _ = pipe.invert(image=image, num_inversion_steps=50, skip=0.1)
>>> edited_image = pipe(
... editing_prompt=["cherry blossom"], edit_guidance_scale=10.0, edit_threshold=0.75
... ).images[0]
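The effect of edit_threshold can be pictured with a small, self-contained sketch. This is purely illustrative and not the diffusers implementation: it only shows the quantile-cutoff idea behind the implicit masking, where pixels whose importance falls below the threshold quantile are excluded from the edit.

```python
import numpy as np

# Illustrative sketch (NOT the diffusers implementation): an implicit mask
# keeps only the regions where a per-pixel importance map is largest.
# `edit_threshold` acts as a quantile cutoff on that map.
def implicit_mask(importance: np.ndarray, edit_threshold: float = 0.9) -> np.ndarray:
    """Binary mask selecting pixels at or above the `edit_threshold`
    quantile of the absolute importance map."""
    magnitude = np.abs(importance)
    cutoff = np.quantile(magnitude, edit_threshold)
    return (magnitude >= cutoff).astype(np.float32)

# A synthetic importance map: every pixel has a distinct value.
importance = np.arange(64, dtype=np.float64).reshape(8, 8)
mask = implicit_mask(importance, edit_threshold=0.9)
# With threshold 0.9, only roughly the top 10% of pixels survive,
# so the edit is confined to the highest-importance region.
```

Higher thresholds therefore restrict the edit to smaller image regions, which matches the parameter description above.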
( image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] source_prompt: str = '' source_guidance_scale: float = 3.5 num_inversion_steps: int = 30 skip: float = 0.15 generator: typing.Optional[torch._C.Generator] = None cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None clip_skip: typing.Optional[int] = None height: typing.Optional[int] = None width: typing.Optional[int] = None resize_mode: typing.Optional[str] = 'default' crops_coords: typing.Optional[typing.Tuple[int, int, int, int]] = None ) → LEditsPPInversionPipelineOutput
Parameters

image (PipelineImageInput) —
Input for the image(s) that are to be edited. Multiple input images must share the same aspect ratio.

source_prompt (str, defaults to "") —
Prompt describing the input image that will be used for guidance during inversion. Guidance is disabled if the source_prompt is "".

source_guidance_scale (float, defaults to 3.5) —
Strength of guidance during inversion.

num_inversion_steps (int, defaults to 30) —
Number of total performed inversion steps after discarding the initial skip steps.

skip (float, defaults to 0.15) —
Portion of initial steps that will be ignored for inversion and subsequent generation. Lower values will lead to stronger changes to the input image. skip has to be between 0 and 1.

generator (torch.Generator, optional) —
A torch.Generator to make inversion deterministic.

cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.

clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

height (int, optional, defaults to None) —
The height of the preprocessed image. If None, will use get_default_height_width() to get the default height.

width (int, optional, defaults to None) —
The width of the preprocessed image. If None, will use get_default_height_width() to get the default width.

resize_mode (str, optional, defaults to "default") —
The resize mode; can be one of "default" or "fill". If "default", will resize the image to fit within the specified width and height, and it may not maintain the original aspect ratio. If "fill", will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image within the dimensions, filling empty areas with data from the image. If "crop", will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image within the dimensions, cropping the excess. Note that the resize modes "fill" and "crop" are only supported for PIL image input.

crops_coords (Tuple[int, int, int, int], optional, defaults to None) —
The crop coordinates for each image in the batch. If None, the image will not be cropped.

Returns

LEditsPPInversionPipelineOutput

Output will contain the resized input image(s) and respective VAE reconstruction(s).
The function to the pipeline for image inversion, as described in the LEDITS++ paper. If the scheduler is set to DDIMScheduler, the inversion proposed by edit-friendly DDPM will be performed instead.
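The relationship between num_inversion_steps and skip can be sketched as plain arithmetic. This is a back-of-the-envelope illustration of the assumed semantics (the first skip fraction of the trajectory is discarded), not the pipeline's internal scheduling code:

```python
# Illustrative only: `skip` drops the initial, highest-noise portion of the
# inversion trajectory, so only the remaining steps shape the edited output.
def effective_steps(num_inversion_steps: int = 30, skip: float = 0.15) -> int:
    """Steps that remain after dropping the first `skip` fraction."""
    assert 0.0 <= skip <= 1.0, "skip has to be between 0 and 1"
    return num_inversion_steps - int(num_inversion_steps * skip)

effective_steps(30, 0.15)  # 30 - 4 = 26 steps actually used
```

Lower skip values retain more of the high-noise steps, which is why they lead to stronger changes to the input image.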
( device num_images_per_prompt enable_edit_guidance negative_prompt = None editing_prompt = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None editing_prompt_embeds: typing.Optional[torch.Tensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None )
Parameters

device (torch.device) —
torch device

num_images_per_prompt (int) —
number of images that should be generated per prompt

enable_edit_guidance (bool) —
whether to perform any editing or reconstruct the input image instead

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

editing_prompt (str or List[str], optional) —
Editing prompt(s) to be encoded. If not defined, one has to pass editing_prompt_embeds instead.

editing_prompt_embeds (torch.Tensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the editing_prompt input argument.

negative_prompt_embeds (torch.Tensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler] image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: typing.Optional[bool] = None )
Parameters

scheduler (DDIMScheduler or DPMSolverMultistepScheduler) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DPMSolverMultistepScheduler or DDIMScheduler. If any other scheduler is passed it will automatically be set to DPMSolverMultistepScheduler.

force_zeros_for_empty_prompt (bool, optional, defaults to True) —
Whether the negative prompt embeddings should always be forced to 0. Also see the config of stabilityai/stable-diffusion-xl-base-1-0.

add_watermarker (bool, optional) —
Whether to use the invisible_watermark library to watermark output images. If not defined, it will default to True if the package is installed; otherwise no watermarker will be used.

Pipeline for textual image editing using LEDits++ with Stable Diffusion XL.
This model inherits from DiffusionPipeline and builds on the StableDiffusionXLPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
In addition the pipeline inherits the following loading methods:
as well as the following saving methods:
loaders.StableDiffusionXLPipeline.save_lora_weights
( denoising_end: typing.Optional[float] = None negative_prompt: typing.Union[str, typing.List[str], NoneType] = None negative_prompt_2: typing.Union[str, typing.List[str], NoneType] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None negative_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None ip_adapter_image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None guidance_rescale: float = 0.0 crops_coords_top_left: typing.Tuple[int, int] = (0, 0) target_size: typing.Optional[typing.Tuple[int, int]] = None editing_prompt: typing.Union[str, typing.List[str], NoneType] = None editing_prompt_embeddings: typing.Optional[torch.Tensor] = None editing_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None reverse_editing_direction: typing.Union[bool, typing.List[bool], NoneType] = False edit_guidance_scale: typing.Union[float, typing.List[float], NoneType] = 5 edit_warmup_steps: typing.Union[int, typing.List[int], NoneType] = 0 edit_cooldown_steps: typing.Union[int, typing.List[int], NoneType] = None edit_threshold: typing.Union[float, typing.List[float], NoneType] = 0.9 sem_guidance: typing.Optional[typing.List[torch.Tensor]] = None use_cross_attn_mask: bool = False use_intersect_mask: bool = False user_mask: typing.Optional[torch.Tensor] = None attn_store_steps: typing.Optional[typing.List[int]] = [] store_averaged_over_steps: bool = True clip_skip: typing.Optional[int] = None callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] **kwargs ) → LEditsPPDiffusionPipelineOutput or tuple
Parameters

denoising_end (float, optional) —
When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be completed before it is intentionally prematurely terminated. As a result, the returned sample will still retain a substantial amount of noise as determined by the discrete timesteps selected by the scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a "Mixture of Denoisers" multi-pipeline setup.

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

negative_prompt_2 (str or List[str], optional) —
The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.

negative_prompt_embeds (torch.Tensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

negative_pooled_prompt_embeds (torch.Tensor, optional) —
Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.

ip_adapter_image (PipelineImageInput, optional) —
Optional image input to work with IP Adapters.

output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image.Image or np.array.

return_dict (bool, optional, defaults to True) —
Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead of a plain tuple.

callback (Callable, optional) —
A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.Tensor).

callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function will be called. If not specified, the callback will be called at every step.

cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.

guidance_rescale (float, optional, defaults to 0.0) —
Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are Flawed. guidance_scale is defined as φ in equation 16 of Common Diffusion Noise Schedules and Sample Steps are Flawed. Guidance rescale should fix overexposure when using zero terminal SNR.

crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) —
crops_coords_top_left can be used to generate an image that appears to be "cropped" from the position crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting crops_coords_top_left to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.

target_size (Tuple[int], optional, defaults to (1024, 1024)) —
For most cases, target_size should be set to the desired height and width of the generated image. If not specified it will default to (width, height). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.

editing_prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. The image is reconstructed by setting editing_prompt = None. The guidance direction of a prompt should be specified via reverse_editing_direction.

editing_prompt_embeddings (torch.Tensor, optional) —
Pre-generated edit text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, editing_prompt_embeddings will be generated from the editing_prompt input argument.

editing_pooled_prompt_embeds (torch.Tensor, optional) —
Pre-generated pooled edit text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled embeddings will be generated from the editing_prompt input argument.

reverse_editing_direction (bool or List[bool], optional, defaults to False) —
Whether the effect of the corresponding prompt in editing_prompt should be increased or decreased.

edit_guidance_scale (float or List[float], optional, defaults to 5) —
Guidance scale for guiding the image generation. If provided as a list, values should correspond to editing_prompt. edit_guidance_scale is defined as s_e in equation 12 of the LEDITS++ paper.

edit_warmup_steps (int or List[int], optional, defaults to 0) —
Number of diffusion steps (for each prompt) for which guidance is not applied.

edit_cooldown_steps (int or List[int], optional, defaults to None) —
Number of diffusion steps (for each prompt) after which guidance is no longer applied.

edit_threshold (float or List[float], optional, defaults to 0.9) —
Masking threshold of guidance. The threshold should be proportional to the image region that is modified. edit_threshold is defined as λ in equation 12 of the LEDITS++ paper.

sem_guidance (List[torch.Tensor], optional) —
List of pre-generated guidance vectors to be applied at generation. The length of the list has to correspond to num_inference_steps.

clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.

callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

LEditsPPDiffusionPipelineOutput or tuple

LEditsPPDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
The call function to the pipeline for editing. The invert() method has to be called beforehand. Edits will always be performed for the last inverted image(s).
Examples:
>>> import torch
>>> import PIL
>>> import requests
>>> from io import BytesIO
>>> from diffusers import LEditsPPPipelineStableDiffusionXL
>>> pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> def download_image(url):
... response = requests.get(url)
... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
>>> img_url = "https://www.aiml.informatik.tu-darmstadt.de/people/mbrack/tennis.jpg"
>>> image = download_image(img_url)
>>> _ = pipe.invert(image=image, num_inversion_steps=50, skip=0.2)
>>> edited_image = pipe(
... editing_prompt=["tennis ball", "tomato"],
... reverse_editing_direction=[True, False],
... edit_guidance_scale=[5.0, 10.0],
... edit_threshold=[0.9, 0.85],
... ).images[0]
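As in the example above, each edit concept carries its own guidance scale, direction, and mask, and their contributions combine additively. The following is a heavily simplified numpy sketch of that composition, loosely following equation 12 of the paper; all names and shapes are illustrative, and it is not the diffusers implementation:

```python
import numpy as np

# Simplified sketch of multi-edit guidance composition (loosely following
# eq. 12 of the LEDITS++ paper; NOT the diffusers implementation).
# Each edit concept e contributes s_e * mask_e * (eps_e - eps_uncond),
# with the sign flipped when its editing direction is reversed.
def compose_guidance(eps_uncond, eps_edits, scales, reverse, masks):
    guidance = np.zeros_like(eps_uncond)
    for eps_e, s_e, rev, m_e in zip(eps_edits, scales, reverse, masks):
        direction = -1.0 if rev else 1.0
        guidance += direction * s_e * m_e * (eps_e - eps_uncond)
    return eps_uncond + guidance

eps_uncond = np.zeros((4, 4))
eps_a = np.ones((4, 4))       # concept to add ("tomato")
eps_b = np.full((4, 4), 0.5)  # concept to remove ("tennis ball")
mask = np.ones((4, 4))
out = compose_guidance(
    eps_uncond, [eps_a, eps_b], [10.0, 5.0], [False, True], [mask, mask]
)
```

This mirrors how reverse_editing_direction flips the sign of a concept's contribution while edit_guidance_scale weights it.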
( image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] source_prompt: str = '' source_guidance_scale = 3.5 negative_prompt: str = None negative_prompt_2: str = None num_inversion_steps: int = 50 skip: float = 0.15 generator: typing.Optional[torch._C.Generator] = None crops_coords_top_left: typing.Tuple[int, int] = (0, 0) num_zero_noise_steps: int = 3 cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None ) → LEditsPPInversionPipelineOutput
Parameters

image (PipelineImageInput) —
Input for the image(s) that are to be edited. Multiple input images must share the same aspect ratio.

source_prompt (str, defaults to "") —
Prompt describing the input image that will be used for guidance during inversion. Guidance is disabled if the source_prompt is "".

source_guidance_scale (float, defaults to 3.5) —
Strength of guidance during inversion.

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

negative_prompt_2 (str or List[str], optional) —
The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.

num_inversion_steps (int, defaults to 50) —
Number of total performed inversion steps after discarding the initial skip steps.

skip (float, defaults to 0.15) —
Portion of initial steps that will be ignored for inversion and subsequent generation. Lower values will lead to stronger changes to the input image. skip has to be between 0 and 1.

generator (torch.Generator, optional) —
A torch.Generator to make inversion deterministic.

crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) —
crops_coords_top_left can be used to generate an image that appears to be "cropped" from the position crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting crops_coords_top_left to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.

num_zero_noise_steps (int, defaults to 3) —
Number of final diffusion steps that will not renoise the current image. If no steps are set to zero, SD-XL in combination with DPMSolverMultistepScheduler will produce noise artifacts.

cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.

Returns

LEditsPPInversionPipelineOutput

Output will contain the resized input image(s) and respective VAE reconstruction(s).
The function to the pipeline for image inversion, as described in the LEDITS++ paper. If the scheduler is set to DDIMScheduler, the inversion proposed by edit-friendly DDPM will be performed instead.
( device: typing.Optional[torch.device] = None num_images_per_prompt: int = 1 negative_prompt: typing.Optional[str] = None negative_prompt_2: typing.Optional[str] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None negative_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None enable_edit_guidance: bool = True editing_prompt: typing.Optional[str] = None editing_prompt_embeds: typing.Optional[torch.Tensor] = None editing_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None )
Parameters

device (torch.device) —
torch device

num_images_per_prompt (int) —
number of images that should be generated per prompt

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead.

negative_prompt_2 (str or List[str], optional) —
The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.

negative_prompt_embeds (torch.Tensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

negative_pooled_prompt_embeds (torch.Tensor, optional) —
Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.

lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

enable_edit_guidance (bool) —
Whether to guide towards an editing prompt or not.

editing_prompt (str or List[str], optional) —
Editing prompt(s) to be encoded. If not defined and enable_edit_guidance is True, one has to pass editing_prompt_embeds instead.

editing_prompt_embeds (torch.Tensor, optional) —
Pre-generated edit text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided and enable_edit_guidance is True, editing_prompt_embeds will be generated from the editing_prompt input argument.

editing_pooled_prompt_embeds (torch.Tensor, optional) —
Pre-generated edit pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled embeddings will be generated from the editing_prompt input argument.

Encodes the prompt into text encoder hidden states.
( w: Tensor embedding_dim: int = 512 dtype: dtype = torch.float32 ) → torch.Tensor
Parameters

w (torch.Tensor) —
Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.

embedding_dim (int, optional, defaults to 512) —
Dimension of the embeddings to generate.

dtype (torch.dtype, optional, defaults to torch.float32) —
Data type of the generated embeddings.

Returns

torch.Tensor

Embedding vectors with shape (len(w), embedding_dim).
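The shape contract above, (len(w), embedding_dim), can be reproduced with a small standalone sketch of a sinusoidal guidance-scale embedding. The details (the 1000x scaling and log-spaced frequencies) mirror common diffusion practice but are illustrative assumptions, not a verbatim copy of the pipeline method:

```python
import numpy as np

# Standalone sketch of a sinusoidal guidance-scale embedding with the
# documented output shape (len(w), embedding_dim). Illustrative only.
def guidance_scale_embedding(w: np.ndarray, embedding_dim: int = 512) -> np.ndarray:
    w = np.asarray(w, dtype=np.float64) * 1000.0
    half_dim = embedding_dim // 2
    # log-spaced frequencies, as in standard sinusoidal timestep embeddings
    freqs = np.exp(-np.log(10000.0) * np.arange(half_dim) / (half_dim - 1))
    args = w[:, None] * freqs[None, :]
    emb = np.concatenate([np.sin(args), np.cos(args)], axis=1)
    if embedding_dim % 2 == 1:  # zero-pad when an odd dimension is requested
        emb = np.pad(emb, ((0, 0), (0, 1)))
    return emb

emb = guidance_scale_embedding(np.array([1.0, 5.0, 7.5]), embedding_dim=512)
# emb.shape == (3, 512)
```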
( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )
Parameters

images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).

nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content, or None if safety checking could not be performed.

Output class for LEdits++ Diffusion pipelines.
( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] vae_reconstruction_images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] )
Parameters

images (List[PIL.Image.Image] or np.ndarray) —
List of the cropped and resized input images as PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).

vae_reconstruction_images (List[PIL.Image.Image] or np.ndarray) —
List of VAE reconstructions of all input images as PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).

Output class for LEdits++ Diffusion pipelines.