The 3D variational autoencoder (VAE) model with KL loss used in LTX was introduced by Lightricks.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import AutoencoderKLLTXVideo

vae = AutoencoderKLLTXVideo.from_pretrained("Lightricks/LTX-Video", subfolder="vae", torch_dtype=torch.float32).to("cuda")
```
( in_channels: int = 3, out_channels: int = 3, latent_channels: int = 128, block_out_channels: Tuple[int, ...] = (128, 256, 512, 512), decoder_block_out_channels: Tuple[int, ...] = (128, 256, 512, 512), layers_per_block: Tuple[int, ...] = (4, 3, 3, 3, 4), decoder_layers_per_block: Tuple[int, ...] = (4, 3, 3, 3, 4), spatio_temporal_scaling: Tuple[bool, ...] = (True, True, True, False), decoder_spatio_temporal_scaling: Tuple[bool, ...] = (True, True, True, False), decoder_inject_noise: Tuple[bool, ...] = (False, False, False, False, False), upsample_residual: Tuple[bool, ...] = (False, False, False, False), upsample_factor: Tuple[int, ...] = (1, 1, 1, 1), timestep_conditioning: bool = False, patch_size: int = 4, patch_size_t: int = 1, resnet_norm_eps: float = 1e-06, scaling_factor: float = 1.0, encoder_causal: bool = True, decoder_causal: bool = False )
Parameters

- **in_channels** (`int`, defaults to 3) — Number of input channels.
- **out_channels** (`int`, defaults to 3) — Number of output channels.
- **latent_channels** (`int`, defaults to 128) — Number of latent channels.
- **block_out_channels** (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512)`) — The number of output channels for each block.
- **spatio_temporal_scaling** (`Tuple[bool, ...]`, defaults to `(True, True, True, False)`) — Whether a block should contain spatio-temporal downscaling or not.
- **layers_per_block** (`Tuple[int, ...]`, defaults to `(4, 3, 3, 3, 4)`) — The number of layers per block.
- **patch_size** (`int`, defaults to 4) — The size of spatial patches.
- **patch_size_t** (`int`, defaults to 1) — The size of temporal patches.
- **resnet_norm_eps** (`float`, defaults to 1e-6) — Epsilon value for ResNet normalization layers.
- **scaling_factor** (`float`, *optional*, defaults to 1.0) — The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula `z = 1 / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper.
- **encoder_causal** (`bool`, defaults to `True`) — Whether the encoder should behave causally (future frames depend only on past frames) or not.
- **decoder_causal** (`bool`, defaults to `False`) — Whether the decoder should behave causally (future frames depend only on past frames) or not.

A VAE model with KL loss for encoding images into latents and decoding latent representations into images. Used in LTX.
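As a rough illustration of how the default configuration above composes, here is a sketch (with a hypothetical helper, not part of the diffusers API) that estimates the overall downsampling factors, assuming each `True` entry in `spatio_temporal_scaling` halves the spatial and temporal resolution and that patchification contributes an additional `patch_size` / `patch_size_t` factor:

```python
# Hypothetical helper (not part of diffusers): estimate the overall
# downsampling factors implied by the config, assuming each `True` entry in
# `spatio_temporal_scaling` is a 2x spatio-temporal downscaling stage and
# patchification adds a `patch_size` (spatial) / `patch_size_t` (temporal)
# factor on top.
def estimated_compression(spatio_temporal_scaling, patch_size, patch_size_t):
    stages = sum(spatio_temporal_scaling)  # number of 2x downscaling stages
    spatial = patch_size * 2 ** stages
    temporal = patch_size_t * 2 ** stages
    return spatial, spatial, temporal  # (height, width, frames)

# With the defaults shown above:
print(estimated_compression((True, True, True, False), 4, 1))  # -> (32, 32, 8)
```

Under these assumptions, the defaults imply a 32x reduction in height and width and an 8x reduction in frame count between pixel space and latent space.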
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing decoding in one step.
Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing decoding in one step.
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
( tile_sample_min_height: typing.Optional[int] = None, tile_sample_min_width: typing.Optional[int] = None, tile_sample_stride_height: typing.Optional[float] = None, tile_sample_stride_width: typing.Optional[float] = None )
Parameters

- **tile_sample_min_height** (`int`, *optional*) — The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) — The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) — The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) — The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling artifacts produced across the width dimension.

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
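To see how a minimum tile size and a stride interact, here is a minimal sketch of tile placement along one dimension (a hypothetical helper, not the library's implementation): tiles of `tile_min` pixels start every `stride` pixels, so neighbouring tiles overlap by `tile_min - stride` pixels.

```python
# Hypothetical sketch of tile placement along one dimension (not the actual
# diffusers implementation): tiles of `tile_min` pixels start every `stride`
# pixels, so neighbouring tiles overlap by `tile_min - stride` pixels.
def tile_starts(length, tile_min, stride):
    starts = []
    for pos in range(0, length, stride):
        starts.append(pos)
        if pos + tile_min >= length:  # the last tile already reaches the edge
            break
    return starts

# A 512-pixel-high sample with 256-pixel tiles and a stride of 192:
print(tile_starts(512, 256, 192))  # -> [0, 192, 384], i.e. 64 pixels of overlap
```

A smaller stride means more overlap between tiles (fewer visible seams at the cost of more redundant computation); a stride equal to the tile size would mean no overlap at all.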
( z: Tensor, temb: typing.Optional[torch.Tensor], return_dict: bool = True ) → `~models.vae.DecoderOutput` or `tuple`

Parameters

- **z** (`torch.Tensor`) — Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) — Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.

Returns

`~models.vae.DecoderOutput` or `tuple`

If `return_dict` is `True`, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is returned.
Decode a batch of images using a tiled decoder.
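Tiled decoding only avoids visible seams if the overlapping regions of neighbouring tiles are blended. A common approach, sketched below with a hypothetical helper (the exact blending used by the library may differ), is a linear cross-fade across the overlap:

```python
# Hypothetical linear cross-fade over the overlap between a top tile `a` and
# a bottom tile `b`, using 1-D lists of rows for simplicity (not the library
# implementation). At the start of the overlap the output is entirely `a`;
# it shifts linearly towards `b` across `overlap` rows.
def blend_rows(a, b, overlap):
    out = list(b)
    for y in range(overlap):
        t = y / overlap  # 0.0 at the seam with `a`, approaching 1.0 into `b`
        out[y] = a[len(a) - overlap + y] * (1 - t) + b[y] * t
    return out

# Two constant tiles (all 0.0 and all 1.0) blended over a 4-row overlap:
print(blend_rows([0.0] * 8, [1.0] * 8, 4))
# -> [0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0]
```

The same idea applies across the width dimension for horizontally adjacent tiles.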
( x: Tensor ) → torch.Tensor
Encode a batch of images using a tiled encoder.
( latent_dist: DiagonalGaussianDistribution )
Output of AutoencoderKL encoding method.
( sample: Tensor commit_loss: typing.Optional[torch.FloatTensor] = None )
Output of decoding method.