( num_attention_heads: int = 16, attention_head_dim: int = 72, in_channels: int = 4, out_channels: typing.Optional[int] = None, num_layers: int = 28, dropout: float = 0.0, norm_num_groups: int = 32, attention_bias: bool = True, sample_size: int = 32, patch_size: int = 2, activation_fn: str = 'gelu-approximate', num_embeds_ada_norm: typing.Optional[int] = 1000, upcast_attention: bool = False, norm_type: str = 'ada_norm_zero', norm_elementwise_affine: bool = False, norm_eps: float = 1e-05 )
Parameters
num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.
attention_head_dim (int, optional, defaults to 72) — The number of channels in each head.
in_channels (int, defaults to 4) — The number of channels in the input.
out_channels (int, optional) —
The number of channels in the output. Specify this parameter when the number of output channels differs from the number of input channels; if left unset, it falls back to in_channels.
num_layers (int, optional, defaults to 28) — The number of layers of Transformer blocks to use.
dropout (float, optional, defaults to 0.0) — The dropout probability to use within the Transformer blocks.
norm_num_groups (int, optional, defaults to 32) —
Number of groups for group normalization within Transformer blocks.
attention_bias (bool, optional, defaults to True) —
Whether the attention layers in the Transformer blocks should contain a bias parameter.
sample_size (int, defaults to 32) —
The width of the latent images. This parameter is fixed during training.
patch_size (int, defaults to 2) —
The size of the square patches the input latents are split into before being fed to the Transformer blocks.
activation_fn (str, optional, defaults to “gelu-approximate”) —
Activation function to use in feed-forward networks within Transformer blocks.
num_embeds_ada_norm (int, optional, defaults to 1000) —
The number of embeddings for AdaLayerNorm. This is fixed during training and caps the number of denoising steps that can be used during inference.
upcast_attention (bool, optional, defaults to False) —
If True, upcast the attention computation to float32, which can improve numerical stability (for example when running in half precision).
norm_type (str, optional, defaults to “ada_norm_zero”) —
Specifies the type of normalization used; only 'ada_norm_zero' is currently supported.
norm_elementwise_affine (bool, optional, defaults to False) —
If true, enables element-wise affine parameters in the normalization layers.
norm_eps (float, optional, defaults to 1e-5) —
A small constant added to the denominator in normalization layers to prevent division by zero.
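The defaults above match the DiT-XL/2 configuration, so the following minimal sketch assumes this section documents diffusers' DiTTransformer2DModel; that class name and import path are an assumption rather than something stated in the parameter list. It simply instantiates the model with a few of the documented arguments made explicit:

```python
from diffusers import DiTTransformer2DModel  # assumed class; substitute the actual class if it differs

# Instantiate with a subset of the documented defaults spelled out explicitly.
model = DiTTransformer2DModel(
    num_attention_heads=16,
    attention_head_dim=72,      # inner dimension per block = 16 * 72 = 1152
    in_channels=4,
    num_layers=28,
    sample_size=32,             # latent width; 32x32 latents correspond to 256x256 images with an 8x VAE
    patch_size=2,               # sequence length = (sample_size / patch_size) ** 2 = 256 patch tokens
    norm_type="ada_norm_zero",
    num_embeds_ada_norm=1000,   # size of the AdaLayerNorm timestep embedding table
)

print(sum(p.numel() for p in model.parameters()))  # total parameter count
```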
forward
Parameters
hidden_states (torch.LongTensor of shape (batch_size, num_latent_pixels) if discrete, torch.FloatTensor of shape (batch_size, channel, height, width) if continuous) —
Input hidden_states.
timestep (torch.LongTensor, optional) —
Used to indicate the denoising step. If provided, it is applied as an embedding in AdaLayerNorm.
class_labels (torch.LongTensor of shape (batch_size, num_classes), optional) —
Used to indicate class-label conditioning. If provided, the labels are applied as an embedding in AdaLayerNormZero.
cross_attention_kwargs (Dict[str, Any], optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
return_dict (bool, optional, defaults to True) —
Whether or not to return a Transformer2DModelOutput instead of a plain tuple.
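To complement the argument descriptions above, here is a minimal, hedged sketch of a forward call, again assuming the class is diffusers' DiTTransformer2DModel and using random tensors in place of real latents, timesteps, and class labels:

```python
import torch
from diffusers import DiTTransformer2DModel  # assumed class, as in the earlier sketch

model = DiTTransformer2DModel()  # documented defaults: 28 layers, 16 heads of dim 72, patch_size 2
model.eval()

batch_size = 2
hidden_states = torch.randn(batch_size, 4, 32, 32)    # continuous input: (batch_size, channel, height, width)
timestep = torch.randint(0, 1000, (batch_size,))      # one denoising step index per sample (AdaLayerNorm)
class_labels = torch.randint(0, 1000, (batch_size,))  # class indices, embedded via AdaLayerNormZero

with torch.no_grad():
    out = model(
        hidden_states=hidden_states,
        timestep=timestep,
        class_labels=class_labels,
        return_dict=True,  # set to False to get a plain (sample,) tuple instead of an output object
    )

print(out.sample.shape)  # torch.Size([2, 4, 32, 32]); out_channels falls back to in_channels by default
```

With return_dict=False the same call returns a one-element tuple whose first entry is the predicted sample.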