A Transformer model for image-like data from AuraFlow.
( sample_size: int = 64, patch_size: int = 2, in_channels: int = 4, num_mmdit_layers: int = 4, num_single_dit_layers: int = 32, attention_head_dim: int = 256, num_attention_heads: int = 12, joint_attention_dim: int = 2048, caption_projection_dim: int = 3072, out_channels: int = 4, pos_embed_max_size: int = 1024 )
Parameters
sample_size (int, defaults to 64) — The width of the latent images. This is fixed during training since it is used to learn a number of position embeddings.
patch_size (int, defaults to 2) — Patch size to turn the input data into small patches.
in_channels (int, optional, defaults to 4) — The number of channels in the input.
num_mmdit_layers (int, optional, defaults to 4) — The number of layers of MMDiT Transformer blocks to use.
num_single_dit_layers (int, optional, defaults to 32) — The number of layers of Transformer blocks to use. These blocks use concatenated image and text representations.
attention_head_dim (int, optional, defaults to 256) — The number of channels in each head.
num_attention_heads (int, optional, defaults to 12) — The number of heads to use for multi-head attention.
joint_attention_dim (int, optional, defaults to 2048) — The number of encoder_hidden_states dimensions to use.
caption_projection_dim (int, defaults to 3072) — Number of dimensions to use when projecting the encoder_hidden_states.
out_channels (int, defaults to 4) — Number of output channels.
pos_embed_max_size (int, defaults to 1024) — Maximum positions to embed from the image latents.

A 2D Transformer model as introduced in AuraFlow (https://blog.fal.ai/auraflow/).
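The configuration values above map directly onto the tensors the model consumes: the latent image input has in_channels channels and spatial size sample_size, the text features have width joint_attention_dim, and the output has out_channels channels. The sketch below illustrates this with a deliberately tiny, randomly initialized configuration; it assumes the class is AuraFlowTransformer2DModel from diffusers and that its forward method accepts hidden_states, encoder_hidden_states, and timestep, details that are not spelled out on this page.

```python
import torch
from diffusers import AuraFlowTransformer2DModel

# Deliberately small configuration for a quick, random-weight sanity check.
# The real defaults (num_single_dit_layers=32, attention_head_dim=256, ...)
# build a far larger model.
model = AuraFlowTransformer2DModel(
    sample_size=32,
    patch_size=2,
    in_channels=4,
    num_mmdit_layers=1,
    num_single_dit_layers=1,
    attention_head_dim=32,
    num_attention_heads=4,      # inner width = 4 * 32 = 128
    joint_attention_dim=2048,   # width of the incoming text features
    caption_projection_dim=128, # kept equal to the inner width
    out_channels=4,
    pos_embed_max_size=1024,
)

latents = torch.randn(1, 4, 32, 32)     # (batch, in_channels, sample_size, sample_size)
text_states = torch.randn(1, 77, 2048)  # (batch, sequence_length, joint_attention_dim)
timestep = torch.tensor([0.5])

with torch.no_grad():
    out = model(hidden_states=latents, encoder_hidden_states=text_states, timestep=timestep)

print(out.sample.shape)  # torch.Size([1, 4, 32, 32]) -> (batch, out_channels, height, width)

# To use released weights instead, the usual pattern is from_pretrained;
# the repo id and subfolder below are assumptions, not taken from this page:
# model = AuraFlowTransformer2DModel.from_pretrained("fal/AuraFlow", subfolder="transformer")
```

Note that caption_projection_dim is kept equal to num_attention_heads * attention_head_dim in this sketch, mirroring the relation implied by the defaults (12 * 256 = 3072).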