AutoencoderKLMagvit
The 3D variational autoencoder (VAE) model with KL loss used in EasyAnimate was introduced by Alibaba PAI.
The model can be loaded with the following code snippet.
import torch
from diffusers import AutoencoderKLMagvit

vae = AutoencoderKLMagvit.from_pretrained("alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="vae", torch_dtype=torch.float16).to("cuda")
AutoencoderKLMagvit
class diffusers.AutoencoderKLMagvit
< source >( in_channels: int = 3, latent_channels: int = 16, out_channels: int = 3, block_out_channels: typing.Tuple[int, ...] = [128, 256, 512, 512], down_block_types: typing.Tuple[str, ...] = ['SpatialDownBlock3D', 'SpatialTemporalDownBlock3D', 'SpatialTemporalDownBlock3D', 'SpatialTemporalDownBlock3D'], up_block_types: typing.Tuple[str, ...] = ['SpatialUpBlock3D', 'SpatialTemporalUpBlock3D', 'SpatialTemporalUpBlock3D', 'SpatialTemporalUpBlock3D'], layers_per_block: int = 2, act_fn: str = 'silu', norm_num_groups: int = 32, scaling_factor: float = 0.7125, spatial_group_norm: bool = True )
A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model is used in EasyAnimate.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
disable_slicing

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.
disable_tiling

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.
enable_slicing

Enable sliced VAE decoding. When this option is enabled, the VAE splits the input tensor into slices and computes decoding in several steps. This is useful to save some memory and allow larger batch sizes.
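A minimal sketch of toggling sliced decoding, assuming vae was loaded as shown above and latents is a hypothetical latent tensor produced by this VAE:

vae.enable_slicing()  # decode the batch slice by slice to lower peak memory
video = vae.decode(latents).sample
vae.disable_slicing()  # restore single-step decoding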
enable_tiling
< source >( tile_sample_min_height: typing.Optional[int] = None, tile_sample_min_width: typing.Optional[int] = None, tile_sample_min_num_frames: typing.Optional[int] = None, tile_sample_stride_height: typing.Optional[float] = None, tile_sample_stride_width: typing.Optional[float] = None, tile_sample_stride_num_frames: typing.Optional[float] = None )
Parameters

- tile_sample_min_height (int, optional) — The minimum height required for a sample to be separated into tiles across the height dimension.
- tile_sample_min_width (int, optional) — The minimum width required for a sample to be separated into tiles across the width dimension.
- tile_sample_stride_height (int, optional) — The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts produced across the height dimension.
- tile_sample_stride_width (int, optional) — The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling artifacts produced across the width dimension.
Enable tiled VAE decoding. When this option is enabled, the VAE splits the input tensor into tiles and computes encoding and decoding in several steps. This is useful for saving a large amount of memory and for processing larger images.
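A minimal sketch of tiled decoding; the tile and stride values below are illustrative assumptions, not tuned defaults:

vae.enable_tiling(
    tile_sample_min_height=512,
    tile_sample_min_width=512,
    tile_sample_stride_height=448,
    tile_sample_stride_width=448,
)
video = vae.decode(latents).sample  # tiles are decoded separately, then blended at the overlaps
vae.disable_tiling()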
forward
< source >( sample: Tensor, sample_posterior: bool = False, return_dict: bool = True, generator: typing.Optional[torch._C.Generator] = None )
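A minimal sketch of a full forward pass (encode, then decode a sample from the posterior). The 5D input layout (batch, channels, frames, height, width) and the 17-frame, 256x256 size are illustrative assumptions for this 3D VAE:

import torch

x = torch.randn(1, 3, 17, 256, 256, dtype=torch.float16, device="cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
output = vae(x, sample_posterior=True, generator=generator)  # DecoderOutput when return_dict=True
reconstruction = output.sample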
AutoencoderKLOutput
class diffusers.models.modeling_outputs.AutoencoderKLOutput
< source >( latent_dist: DiagonalGaussianDistribution )
Output of AutoencoderKL encoding method.
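A minimal sketch of consuming this output, assuming x is a video tensor like the one above:

posterior = vae.encode(x).latent_dist  # a DiagonalGaussianDistribution
latents = posterior.sample()           # stochastic latents
# latents = posterior.mode()           # deterministic alternative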
DecoderOutput
class diffusers.models.autoencoders.vae.DecoderOutput
< source >( sample: Tensor, commit_loss: typing.Optional[torch.FloatTensor] = None )
Output of decoding method.
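A minimal sketch of consuming this output; for this KL-regularized VAE, commit_loss is expected to remain None (an assumption, since commitment losses come from VQ-style models):

decoded = vae.decode(latents)  # a DecoderOutput
video = decoded.sample         # the reconstructed tensor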