A Diffusion Transformer model for 3D video-like data was introduced in Mochi-1 Preview by Genmo.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import MochiTransformer3DModel

transformer = MochiTransformer3DModel.from_pretrained("genmo/mochi-1-preview", subfolder="transformer", torch_dtype=torch.float16).to("cuda")
```
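The loaded transformer can then be passed to MochiPipeline in place of the component stored in the checkpoint. The sketch below continues from the snippet above and is illustrative only: the prompt and frame count are placeholders, and it assumes the usual video-pipeline output layout where generated frames are exposed under `.frames`.

```python
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Reuse the `transformer` loaded above to override the checkpoint's default component.
pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", transformer=transformer, torch_dtype=torch.float16)
pipe.to("cuda")

# Prompt and frame count are illustrative; adjust to your use case.
frames = pipe("A corgi running on a beach at sunset", num_frames=19).frames[0]
export_to_video(frames, "mochi_sample.mp4")
```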
class diffusers.MochiTransformer3DModel

( patch_size: int = 2, num_attention_heads: int = 24, attention_head_dim: int = 128, num_layers: int = 48, pooled_projection_dim: int = 1536, in_channels: int = 12, out_channels: typing.Optional[int] = None, qk_norm: str = 'rms_norm', text_embed_dim: int = 4096, time_embed_dim: int = 256, activation_fn: str = 'swiglu', max_sequence_length: int = 256 )
Parameters

patch_size (int, defaults to 2) — The size of the patches to use in the patch embedding layer.
num_attention_heads (int, defaults to 24) — The number of heads to use for multi-head attention.
attention_head_dim (int, defaults to 128) — The number of channels in each head.
num_layers (int, defaults to 48) — The number of layers of Transformer blocks to use.
in_channels (int, defaults to 12) — The number of channels in the input.
out_channels (int, optional, defaults to None) — The number of channels in the output.
qk_norm (str, defaults to "rms_norm") — The normalization layer to use.
text_embed_dim (int, defaults to 4096) — Input dimension of text embeddings from the text encoder.
time_embed_dim (int, defaults to 256) — Output dimension of timestep embeddings.
activation_fn (str, defaults to "swiglu") — Activation function to use in feed-forward.
max_sequence_length (int, defaults to 256) — The maximum sequence length of text embeddings supported.

A Transformer model for video-like data introduced in Mochi.
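For reference, the sketch below instantiates a deliberately tiny configuration and runs a single forward pass. It assumes the keyword arguments used when MochiPipeline calls the transformer (hidden_states, encoder_hidden_states, timestep, encoder_attention_mask); the reduced hyperparameters and tensor shapes are illustrative only, and real checkpoints use the defaults documented above.

```python
import torch
from diffusers import MochiTransformer3DModel

# Tiny, illustrative configuration (NOT the pretrained sizes listed above).
model = MochiTransformer3DModel(
    patch_size=2,
    num_attention_heads=2,
    attention_head_dim=16,
    num_layers=1,
    pooled_projection_dim=16,
    in_channels=12,
    text_embed_dim=32,
    time_embed_dim=8,
    max_sequence_length=16,
)

batch, frames, height, width = 1, 2, 16, 16
hidden_states = torch.randn(batch, 12, frames, height, width)   # video latents (B, C, T, H, W)
encoder_hidden_states = torch.randn(batch, 16, 32)               # text embeddings (B, seq_len, text_embed_dim)
encoder_attention_mask = torch.ones(batch, 16, dtype=torch.bool)
timestep = torch.tensor([500])

output = model(
    hidden_states=hidden_states,
    encoder_hidden_states=encoder_hidden_states,
    timestep=timestep,
    encoder_attention_mask=encoder_attention_mask,
    return_dict=True,
)
print(output.sample.shape)  # expected to match the input latent shape
```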
class diffusers.models.modeling_outputs.Transformer2DModelOutput

( sample: torch.Tensor )

Parameters

sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
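As a quick illustration of the wrapper (a sketch; the import path below assumes the class lives in diffusers.models.modeling_outputs, and passing return_dict=False to the model returns a plain tuple instead of this dataclass):

```python
import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# The output wrapper is a small dataclass; the model's prediction is stored in `sample`.
out = Transformer2DModelOutput(sample=torch.zeros(1, 12, 2, 16, 16))
print(out.sample.shape)  # torch.Size([1, 12, 2, 16, 16])
```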