The `VaeImageProcessor` provides a unified API for `StableDiffusionPipeline`s to prepare image inputs for VAE encoding and to post-process outputs once they're decoded. This includes transformations such as resizing, normalization, and conversion between PIL images, PyTorch tensors, and NumPy arrays.

All pipelines with a `VaeImageProcessor` accept PIL images, PyTorch tensors, or NumPy arrays as image inputs and return outputs based on the `output_type` argument set by the user. You can pass encoded image latents directly to a pipeline and return latents from a pipeline as a specific output with the `output_type` argument (for example, `output_type="latent"`). This allows you to take the generated latents from one pipeline and pass them to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between them.
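For example, here is a minimal sketch of chaining a text-to-image pipeline into the latent upscaler; the checkpoint names and the CUDA device are illustrative assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

# Text-to-image pipeline whose output we keep in latent space
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Latent upscaler that accepts those latents directly as its image input
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# output_type="latent" skips VAE decoding and image postprocessing
latents = pipe(prompt, output_type="latent").images

# The second pipeline consumes the latents without ever leaving latent space
image = upscaler(prompt=prompt, image=latents, output_type="pil").images[0]
```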
## VaeImageProcessor[[diffusers.image_processor.VaeImageProcessor]]

`VaeImageProcessor(do_resize: bool = True, vae_scale_factor: int = 8, resample: str = 'lanczos', do_normalize: bool = True, do_binarize: bool = False, do_convert_rgb: bool = False, do_convert_grayscale: bool = False)`

Image processor for VAE.

**Parameters**

- **do_resize** (`bool`, *optional*, defaults to `True`): Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept `height` and `width` arguments from the `image_processor.VaeImageProcessor.preprocess()` method.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`): VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`): Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `True`): Whether to normalize the image to [-1,1].
- **do_binarize** (`bool`, *optional*, defaults to `False`): Whether to binarize the image to 0/1.
- **do_convert_rgb** (`bool`, *optional*, defaults to `False`): Whether to convert the images to RGB format.
- **do_convert_grayscale** (`bool`, *optional*, defaults to `False`): Whether to convert the images to grayscale format.
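A minimal sketch of the preprocess/postprocess round trip; the tensor shape and values are illustrative:

```python
import torch
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor(vae_scale_factor=8)

# A batched "decoded image" tensor as a pipeline would produce it: B x C x H x W in [-1, 1]
decoded = torch.rand(1, 3, 512, 512) * 2 - 1

# postprocess denormalizes to [0, 1] and converts to the requested output type
pil_images = processor.postprocess(decoded, output_type="pil")

# preprocess converts back to a normalized B x C x H x W tensor, ready for VAE encoding
tensor = processor.preprocess(pil_images[0])
print(tensor.shape)  # torch.Size([1, 3, 512, 512])
```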
`apply_overlay(mask: Image, init_image: Image, image: Image, crop_coords: Optional = None)`

Overlay the inpainting output over the original image.
`binarize(image: Image) → PIL.Image.Image`

Create a mask.
`blur`: Applies Gaussian blur to an image.

`convert_to_grayscale`: Converts a PIL image to grayscale format.

`convert_to_rgb`: Converts a PIL image to RGB format.

`denormalize`: Denormalize an image array to [0,1].
`get_crop_region(mask_image: Image, width: int, height: int, pad = 0) → tuple`

**Returns**: `tuple`: (x1, y1, x2, y2) representing a rectangular region that contains all masked areas in an image and matches the original aspect ratio.

Finds a rectangular region that contains all masked areas in an image and expands the region to match the aspect ratio of the original image; for example, if the user drew a mask in a 128x32 region and the dimensions for processing are 512x512, the region will be expanded to 128x128.
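A small sketch of the 128x32-mask example above; the exact returned coordinates depend on where the mask sits:

```python
from PIL import Image, ImageDraw
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor()

# A 512x512 mask with a 128x32 painted region in the middle
mask = Image.new("L", (512, 512), 0)
ImageDraw.Draw(mask).rectangle([192, 240, 320, 272], fill=255)

x1, y1, x2, y2 = processor.get_crop_region(mask, width=512, height=512)
print(x2 - x1, y2 - y1)  # expanded to match the 1:1 processing ratio, e.g. roughly 128x128
```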
`get_default_height_width(image: Union, height: Optional = None, width: Optional = None)`

**Parameters**

- **image** (`PIL.Image.Image`, `np.ndarray`, or `torch.Tensor`): The image input, which can be a PIL image, NumPy array, or PyTorch tensor. If it is a NumPy array, it should have shape `[batch, height, width]` or `[batch, height, width, channel]`; if it is a PyTorch tensor, it should have shape `[batch, channel, height, width]`.
- **height** (`int`, *optional*, defaults to `None`): The height of the preprocessed image. If `None`, the height of the `image` input is used.
- **width** (`int`, *optional*, defaults to `None`): The width of the preprocessed image. If `None`, the width of the `image` input is used.

This function returns the height and width, downscaled to the next integer multiple of `vae_scale_factor`.
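For instance, a quick sketch with a resolution that is not divisible by the scale factor:

```python
import numpy as np
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor(vae_scale_factor=8)

# 515x643 is not divisible by 8, so both dimensions are floored to a multiple of 8
image = np.zeros((1, 515, 643, 3), dtype=np.float32)  # [batch, height, width, channel]
height, width = processor.get_default_height_width(image)
print(height, width)  # 512 640
```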
`normalize`: Normalize an image array to [-1,1].

`numpy_to_pil`: Convert a NumPy image or a batch of images to a PIL image.

`numpy_to_pt`: Convert a NumPy image to a PyTorch tensor.

`pil_to_numpy`: Convert a PIL image or a list of PIL images to NumPy arrays.
`postprocess(image: Tensor, output_type: str = 'pil', do_denormalize: Optional = None) → PIL.Image.Image, np.ndarray, or torch.Tensor`

**Parameters**

- **image** (`torch.Tensor`): The image input; should be a PyTorch tensor with shape `B x C x H x W`.
- **output_type** (`str`, *optional*, defaults to `pil`): The output type of the image; can be one of `pil`, `np`, `pt`, or `latent`.
- **do_denormalize** (`List[bool]`, *optional*, defaults to `None`): Whether to denormalize the image to [0,1]. If `None`, the value of `do_normalize` in the `VaeImageProcessor` config is used.

**Returns**: `PIL.Image.Image`, `np.ndarray`, or `torch.Tensor`: The postprocessed image.

Postprocess the image output from a tensor to the given `output_type`.
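A short sketch of the different output types; the input tensor is illustrative:

```python
import torch
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor()
decoded = torch.rand(2, 3, 64, 64) * 2 - 1  # B x C x H x W in [-1, 1]

pil_list = processor.postprocess(decoded, output_type="pil")  # list of 2 PIL images
np_batch = processor.postprocess(decoded, output_type="np")   # array of shape (2, 64, 64, 3)

# Denormalize only the first image of the batch, leave the second as-is
mixed = processor.postprocess(decoded, output_type="pt", do_denormalize=[True, False])
```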
`preprocess(image: Union, height: Optional = None, width: Optional = None, resize_mode: str = 'default', crops_coords: Optional = None)`

**Parameters**

- **image** (`pipeline_image_input`): The image input; accepted formats are PIL images, NumPy arrays, and PyTorch tensors, as well as lists of these supported formats.
- **height** (`int`, *optional*, defaults to `None`): The height of the preprocessed image. If `None`, `get_default_height_width()` is used to get the default height.
- **width** (`int`, *optional*, defaults to `None`): The width of the preprocessed image. If `None`, `get_default_height_width()` is used to get the default width.
- **resize_mode** (`str`, *optional*, defaults to `default`): The resize mode; can be one of `default`, `fill`, or `crop`. If `default`, the image is resized to fit within the specified width and height, and the original aspect ratio may not be maintained. If `fill`, the image is resized to fit within the specified width and height while maintaining the aspect ratio, then centered within the dimensions with the empty space filled with data from the image. If `crop`, the image is resized to fit within the specified width and height while maintaining the aspect ratio, then centered within the dimensions with the excess cropped. Note that the `fill` and `crop` resize modes are only supported for PIL image input.
- **crops_coords** (`List[Tuple[int, int, int, int]]`, *optional*, defaults to `None`): The crop coordinates for each image in the batch. If `None`, the image is not cropped.

Preprocess the image input.
`pt_to_numpy`: Convert a PyTorch tensor to a NumPy image.
`resize(image: Union, height: int, width: int, resize_mode: str = 'default') → PIL.Image.Image, np.ndarray, or torch.Tensor`

**Parameters**

- **image** (`PIL.Image.Image`, `np.ndarray`, or `torch.Tensor`): The image input; can be a PIL image, NumPy array, or PyTorch tensor.
- **height** (`int`): The height to resize to.
- **width** (`int`): The width to resize to.
- **resize_mode** (`str`, *optional*, defaults to `default`): The resize mode to use; can be one of `default`, `fill`, or `crop`. If `default`, the image is resized to fit within the specified width and height, and the original aspect ratio may not be maintained. If `fill`, the image is resized to fit within the specified width and height while maintaining the aspect ratio, then centered within the dimensions with the empty space filled with data from the image. If `crop`, the image is resized to fit within the specified width and height while maintaining the aspect ratio, then centered within the dimensions with the excess cropped. Note that the `fill` and `crop` resize modes are only supported for PIL image input.

**Returns**: `PIL.Image.Image`, `np.ndarray`, or `torch.Tensor`: The resized image.

Resize the image.
## VaeImageProcessorLDM3D[[diffusers.image_processor.VaeImageProcessorLDM3D]]
The `VaeImageProcessorLDM3D` accepts RGB and depth inputs and returns RGB and depth outputs.
`VaeImageProcessorLDM3D(do_resize: bool = True, vae_scale_factor: int = 8, resample: str = 'lanczos', do_normalize: bool = True)`

Image processor for VAE LDM3D.

**Parameters**

- **do_resize** (`bool`, *optional*, defaults to `True`): Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`): VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`): Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `True`): Whether to normalize the image to [-1,1].
`pil_to_numpy`: Convert a PIL image or a list of PIL images to NumPy arrays.

`numpy_to_depth`: Convert a NumPy depth image or a batch of images to a PIL image.

`numpy_to_pil`: Convert a NumPy image or a batch of images to a PIL image.

`preprocess(rgb: Union, depth: Union, height: Optional = None, width: Optional = None, target_res: Optional = None)`

Preprocess the image input. Accepted formats are PIL images, NumPy arrays, or PyTorch tensors.

`rgblike_to_depthmap`: Returns a depth map.
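A minimal sketch of preprocessing paired RGB and depth inputs; the arrays are illustrative, and it is assumed here that `preprocess` returns the processed RGB and depth tensors as a pair:

```python
import numpy as np
from diffusers.image_processor import VaeImageProcessorLDM3D

processor = VaeImageProcessorLDM3D(vae_scale_factor=8)

rgb = np.random.rand(512, 512, 3).astype(np.float32)    # RGB image in [0, 1]
depth = np.random.rand(512, 512, 1).astype(np.float32)  # depth map in [0, 1]

# Both modalities are resized and normalized together
rgb_t, depth_t = processor.preprocess(rgb, depth)
```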
## PixArtImageProcessor[[diffusers.image_processor.PixArtImageProcessor]]
`PixArtImageProcessor(do_resize: bool = True, vae_scale_factor: int = 8, resample: str = 'lanczos', do_normalize: bool = True, do_binarize: bool = False, do_convert_grayscale: bool = False)`

Image processor for PixArt image resize and crop.

**Parameters**

- **do_resize** (`bool`, *optional*, defaults to `True`): Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept `height` and `width` arguments from the `image_processor.VaeImageProcessor.preprocess()` method.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`): VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`): Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `True`): Whether to normalize the image to [-1,1].
- **do_binarize** (`bool`, *optional*, defaults to `False`): Whether to binarize the image to 0/1.
- **do_convert_grayscale** (`bool`, *optional*, defaults to `False`): Whether to convert the images to grayscale format.
`classify_height_width_bin`: Returns binned height and width.
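For example, a hedged sketch that assumes the `ASPECT_RATIO_1024_BIN` table exported by the PixArt-α pipeline module:

```python
from diffusers.image_processor import PixArtImageProcessor
from diffusers.pipelines.pixart_alpha.pipeline_pixart_alpha import ASPECT_RATIO_1024_BIN

# Snap an arbitrary resolution to the closest supported aspect-ratio bin
height, width = PixArtImageProcessor.classify_height_width_bin(
    800, 1300, ratios=ASPECT_RATIO_1024_BIN
)
print(height, width)  # the nearest binned resolution from the table
```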
## IPAdapterMaskProcessor[[diffusers.image_processor.IPAdapterMaskProcessor]]
`IPAdapterMaskProcessor(do_resize: bool = True, vae_scale_factor: int = 8, resample: str = 'lanczos', do_normalize: bool = False, do_binarize: bool = True, do_convert_grayscale: bool = True)`

Image processor for IP Adapter image masks.

**Parameters**

- **do_resize** (`bool`, *optional*, defaults to `True`): Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`): VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`): Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `False`): Whether to normalize the image to [-1,1].
- **do_binarize** (`bool`, *optional*, defaults to `True`): Whether to binarize the image to 0/1.
- **do_convert_grayscale** (`bool`, *optional*, defaults to `True`): Whether to convert the images to grayscale format.
`downsample(mask: Tensor, batch_size: int, num_queries: int, value_embed_dim: int) → torch.Tensor`

**Parameters**

- **mask** (`torch.Tensor`): The input mask tensor generated with `IPAdapterMaskProcessor.preprocess()`.
- **batch_size** (`int`): The batch size.
- **num_queries** (`int`): The number of queries.
- **value_embed_dim** (`int`): The dimensionality of the value embeddings.

**Returns**: `torch.Tensor`: The downsampled mask tensor.
Downsamples the provided mask tensor to match the expected dimensions for scaled dot-product attention. If the aspect ratio of the mask does not match the aspect ratio of the output image, a warning is issued.
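A hedged sketch of the typical flow; the mask, the batch size, and the 32x32 latent grid are illustrative assumptions:

```python
from PIL import Image
from diffusers.image_processor import IPAdapterMaskProcessor

processor = IPAdapterMaskProcessor()

# preprocess converts to grayscale, binarizes, and resizes the mask(s)
mask = Image.new("L", (1024, 1024), 255)  # illustrative all-white mask
masks = processor.preprocess([mask], height=1024, width=1024)  # shape (1, 1, 1024, 1024)

# Downsample to match an attention layer operating on a 32x32 latent grid
downsampled = IPAdapterMaskProcessor.downsample(
    masks[0], batch_size=2, num_queries=32 * 32, value_embed_dim=64
)
print(downsampled.shape)
```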