Diffusers documentation
AllegroTransformer3DModel
A Diffusion Transformer model for 3D data from Allegro was introduced in Allegro: Open the Black Box of Commercial-Level Video Generation Model by RhymesAI.
The model can be loaded with the following code snippet.
import torch
from diffusers import AllegroTransformer3DModel

transformer = AllegroTransformer3DModel.from_pretrained("rhymes-ai/Allegro", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
AllegroTransformer3DModel
class diffusers.AllegroTransformer3DModel
< source >( patch_size: int = 2, patch_size_t: int = 1, num_attention_heads: int = 24, attention_head_dim: int = 96, in_channels: int = 4, out_channels: int = 4, num_layers: int = 32, dropout: float = 0.0, cross_attention_dim: int = 2304, attention_bias: bool = True, sample_height: int = 90, sample_width: int = 160, sample_frames: int = 22, activation_fn: str = 'gelu-approximate', norm_elementwise_affine: bool = False, norm_eps: float = 1e-06, caption_channels: int = 4096, interpolation_scale_h: float = 2.0, interpolation_scale_w: float = 2.0, interpolation_scale_t: float = 2.2 )
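As a quick sanity check on the defaults above (a plain-Python sketch, independent of diffusers; the floor-division patchification arithmetic is an assumption based on standard DiT-style patch embedding):

```python
# Derived quantities from the AllegroTransformer3DModel default config above.
patch_size = 2            # spatial patch size
patch_size_t = 1          # temporal patch size
num_attention_heads = 24
attention_head_dim = 96
sample_height, sample_width, sample_frames = 90, 160, 22

# Transformer width: heads * head_dim (note it matches cross_attention_dim = 2304).
inner_dim = num_attention_heads * attention_head_dim

# Assumed token count after patchifying the latent video.
num_patches_h = sample_height // patch_size     # 45
num_patches_w = sample_width // patch_size      # 80
num_patches_t = sample_frames // patch_size_t   # 22
num_tokens = num_patches_t * num_patches_h * num_patches_w

print(inner_dim, num_tokens)  # 2304 79200
```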
Transformer2DModelOutput
class diffusers.models.modeling_outputs.Transformer2DModelOutput
< source >( sample: torch.Tensor )
Parameters
- sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.
The output of Transformer2DModel.