DiffEdit

DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.

The abstract from the paper is:

Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images.

The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo.

This pipeline was contributed by clarencechen. ❤️

Tips
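
The method documentation below includes a complete runnable example for each stage; as a quick orientation, editing with DiffEdit is a three-stage process. First, generate_mask() contrasts diffusion predictions conditioned on a source and a target prompt to infer a binary mask of the regions to edit. Second, invert() partially noises the input image into latents. Third, calling the pipeline denoises the masked region toward the target prompt while preserving the rest of the image. A condensed sketch of the workflow, mirroring the examples below (where mask_prompt describes the original image content and prompt describes the desired edit):

>>> # Stage 1: infer the edit mask by contrasting the two prompts
>>> mask_image = pipeline.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> # Stage 2: partially invert the input image into latents
>>> image_latents = pipeline.invert(image=init_image, prompt=mask_prompt).latents
>>> # Stage 3: denoise the masked region toward the edit prompt
>>> image = pipeline(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]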

StableDiffusionDiffEditPipeline

class diffusers.StableDiffusionDiffEditPipeline

( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True )

Parameters

  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
  • text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
  • tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
  • unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
  • scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents.
  • inverse_scheduler (DDIMInverseScheduler) — A scheduler to be used in combination with unet to fill in the unmasked part of the input latents.
  • safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model’s potential harms.
  • feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

This is an experimental feature!

Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading and saving methods:

  • load_textual_inversion() for loading textual inversion embeddings
  • load_lora_weights() for loading LoRA weights
  • save_lora_weights() for saving LoRA weights

generate_mask

( image: Union[torch.Tensor, PIL.Image.Image] = None target_prompt: Optional[Union[str, List[str]]] = None target_negative_prompt: Optional[Union[str, List[str]]] = None target_prompt_embeds: Optional[torch.Tensor] = None target_negative_prompt_embeds: Optional[torch.Tensor] = None source_prompt: Optional[Union[str, List[str]]] = None source_negative_prompt: Optional[Union[str, List[str]]] = None source_prompt_embeds: Optional[torch.Tensor] = None source_negative_prompt_embeds: Optional[torch.Tensor] = None num_maps_per_mask: Optional[int] = 10 mask_encode_strength: Optional[float] = 0.5 mask_thresholding_ratio: Optional[float] = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None output_type: Optional[str] = 'np' cross_attention_kwargs: Optional[Dict[str, Any]] = None ) → List[PIL.Image.Image] or np.array

Parameters

  • image (PIL.Image.Image) — Image or tensor representing an image batch to be used for computing the mask.
  • target_prompt (str or List[str], optional) — The prompt or prompts to guide semantic mask generation. If not defined, you need to pass prompt_embeds.
  • target_negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
  • target_prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
  • target_negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
  • source_prompt (str or List[str], optional) — The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to pass source_prompt_embeds instead.
  • source_negative_prompt (str or List[str], optional) — The prompt or prompts to steer semantic mask generation away from during DiffEdit. If not defined, you need to pass source_negative_prompt_embeds instead.
  • source_prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input argument.
  • source_negative_prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from source_negative_prompt input argument.
  • num_maps_per_mask (int, optional, defaults to 10) — The number of noise maps sampled to generate the semantic mask using DiffEdit.
  • mask_encode_strength (float, optional, defaults to 0.5) — The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 and 1.
  • mask_thresholding_ratio (float, optional, defaults to 3.0) — The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before mask binarization (an illustrative sketch of this step follows the Returns section below).
  • num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
  • generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
  • output_type (str, optional, defaults to "np") — The output format of the generated mask. Choose between PIL.Image or np.array.
  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttnProcessor as defined in self.processor.

Returns

List[PIL.Image.Image] or np.array

When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor).

Generate a latent mask given a mask prompt, a target prompt, and an image.
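
The following is an illustrative sketch, not the pipeline's internal code, of how the parameters above interact: the absolute difference between noise predictions under the source and target prompts is averaged over num_maps_per_mask samples, clamped to mask_thresholding_ratio times its mean, normalized, and binarized. The tensor shapes and the 0.5 cutoff are assumptions for illustration.

>>> import torch

>>> # Placeholder stacks of noise predictions under the source and target prompts
>>> # (num_maps_per_mask=10 maps, 4 latent channels, 96x96 latents for a 768x768 image)
>>> noise_pred_source = torch.randn(10, 4, 96, 96)
>>> noise_pred_target = torch.randn(10, 4, 96, 96)
>>> mask_thresholding_ratio = 3.0

>>> # Average the absolute prediction difference over noise maps and channels
>>> guidance_map = (noise_pred_target - noise_pred_source).abs().mean(dim=(0, 1))

>>> # Clamp outliers to a multiple of the mean, normalize, and binarize
>>> clamp_value = (mask_thresholding_ratio * guidance_map.mean()).item()
>>> guidance_map = guidance_map.clamp(0, clamp_value) / clamp_value
>>> mask = (guidance_map > 0.5).float()  # 1 = edit this region, 0 = preserve it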

>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.enable_model_cpu_offload()

>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"

>>> mask_image = pipeline.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipeline.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipeline(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]
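
Because output_type defaults to "np" for this method, the returned mask is a latent-resolution binary array. A small follow-on sketch for inspecting it (the squeeze-and-scale conversion here is an assumption for illustration, reusing init_image and mask_image from the example above):

>>> import numpy as np

>>> mask_array = (np.asarray(mask_image).squeeze() * 255).astype("uint8")
>>> PIL.Image.fromarray(mask_array, mode="L").resize(init_image.size).save("mask.png")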

invert

( prompt: Optional[Union[str, List[str]]] = None image: Union[torch.Tensor, PIL.Image.Image] = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Optional[Union[str, List[str]]] = None generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None prompt_embeds: Optional[torch.Tensor] = None negative_prompt_embeds: Optional[torch.Tensor] = None decode_latents: bool = False output_type: Optional[str] = 'pil' return_dict: bool = True callback: Optional[Callable[[int, int, torch.Tensor], None]] = None callback_steps: Optional[int] = 1 cross_attention_kwargs: Optional[Dict[str, Any]] = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 )

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
  • image (PIL.Image.Image) — Image or tensor representing an image batch to produce the inverted latents guided by prompt.
  • inpaint_strength (float, optional, defaults to 0.8) — Indicates the extent of the noising process used for latent inversion. Must be between 0 and 1. When inpaint_strength is 1, the inversion process is run for the full number of iterations specified in num_inference_steps; higher values add more noise to the reference image. If inpaint_strength is 0, no inversion is performed (a short sketch of this convention follows the method description below).
  • num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
  • negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
  • generator (torch.Generator, optional) — A torch.Generator to make generation deterministic.
  • prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
  • negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
  • decode_latents (bool, optional, defaults to False) — Whether or not to decode the inverted latents into a generated image. Setting this argument to True decodes all inverted latents for each timestep into a list of generated images.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a plain tuple.
  • callback (Callable, optional) — A function that calls every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.Tensor).
  • callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.
  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttnProcessor as defined in self.processor.
  • lambda_auto_corr (float, optional, defaults to 20.0) — Lambda parameter to control auto correction.
  • lambda_kl (float, optional, defaults to 20.0) — Lambda parameter to control Kullback-Leibler divergence output.
  • num_reg_steps (int, optional, defaults to 0) — Number of regularization loss steps.
  • num_auto_corr_rolls (int, optional, defaults to 5) — Number of auto correction roll steps.

Generate inverted latents given a prompt and image.
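
As noted for inpaint_strength above, the strength scales how much of the schedule is used for inversion. A hedged sketch of the usual diffusers strength convention (the exact timestep selection is an internal implementation detail):

>>> num_inference_steps = 50
>>> inpaint_strength = 0.8
>>> # Roughly, only the first `strength` fraction of the scheduled steps is run
>>> min(int(num_inference_steps * inpaint_strength), num_inference_steps)
40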

>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.enable_model_cpu_offload()

>>> prompt = "A bowl of fruits"

>>> inverted_latents = pipeline.invert(image=init_image, prompt=prompt).latents
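
To also inspect the intermediate states, pass decode_latents=True, which decodes the inverted latents at each timestep into images. This snippet assumes the decoded images are exposed on the images field of the inversion output, per the decode_latents parameter description above:

>>> inversion = pipeline.invert(image=init_image, prompt=prompt, decode_latents=True)
>>> inverted_latents, intermediate_images = inversion.latents, inversion.images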

__call__

( prompt: Optional[Union[str, List[str]]] = None mask_image: Union[torch.Tensor, PIL.Image.Image] = None image_latents: Union[torch.Tensor, PIL.Image.Image] = None inpaint_strength: Optional[float] = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Optional[Union[str, List[str]]] = None num_images_per_prompt: Optional[int] = 1 eta: float = 0.0 generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None latents: Optional[torch.Tensor] = None prompt_embeds: Optional[torch.Tensor] = None negative_prompt_embeds: Optional[torch.Tensor] = None output_type: Optional[str] = 'pil' return_dict: bool = True callback: Optional[Callable[[int, int, torch.Tensor], None]] = None callback_steps: int = 1 cross_attention_kwargs: Optional[Dict[str, Any]] = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
  • mask_image (PIL.Image.Image) — Image or tensor representing an image batch to mask the generated image. White pixels in the mask are repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, 1, H, W).
  • image_latents (PIL.Image.Image or torch.Tensor) — Partially noised image latents from the inversion process to be used as inputs for image generation.
  • inpaint_strength (float, optional, defaults to 0.8) — Indicates the extent to inpaint the masked area. Must be between 0 and 1. When inpaint_strength is 1, the denoising process is run on the masked area for the full number of iterations specified in num_inference_steps; higher values add more noise to the masked region, with image_latents serving as its reference. If inpaint_strength is 0, no inpainting occurs.
  • num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
  • negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
  • eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
  • generator (torch.Generator, optional) — A torch.Generator to make generation deterministic.
  • latents (torch.Tensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
  • prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
  • negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
  • callback (Callable, optional) — A function that calls every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.Tensor).
  • callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.
  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in self.processor.
  • clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.

The call function to the pipeline for generation.

>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.enable_model_cpu_offload()

>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"

>>> mask_image = pipeline.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipeline.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipeline(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]

encode_prompt

( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )

Parameters

  • prompt (str or List[str], optional) — prompt to be encoded
  • device (torch.device) — torch device
  • num_images_per_prompt (int) — number of images that should be generated per prompt
  • do_classifier_free_guidance (bool) — whether to use classifier free guidance or not
  • negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
  • prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
  • lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
  • clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
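
A short sketch of precomputing embeddings once and reusing them across calls, reusing mask_image and image_latents from the examples above. The tuple return of (prompt_embeds, negative_prompt_embeds) and the device handling are assumptions; with CPU offloading enabled the pipeline normally manages devices itself:

>>> prompt_embeds, negative_prompt_embeds = pipeline.encode_prompt(
...     prompt="A bowl of pears",
...     device=pipeline.device,
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="blurry, low quality",
... )
>>> image = pipeline(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     mask_image=mask_image,
...     image_latents=image_latents,
... ).images[0]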

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput

( images: Union[List[PIL.Image.Image], np.ndarray] nsfw_content_detected: Optional[List[bool]] )

Parameters

  • images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
  • nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
