A Transformer model for video-like data.
( num_attention_heads: int = 16, attention_head_dim: int = 88, in_channels: typing.Optional[int] = None, out_channels: typing.Optional[int] = None, num_layers: int = 1, dropout: float = 0.0, norm_num_groups: int = 32, cross_attention_dim: typing.Optional[int] = None, attention_bias: bool = False, sample_size: typing.Optional[int] = None, activation_fn: str = 'geglu', norm_elementwise_affine: bool = True, double_self_attention: bool = True )
Parameters

num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.
attention_head_dim (int, optional, defaults to 88) — The number of channels in each head.
in_channels (int, optional) — The number of channels in the input and output (specify if the input is continuous).
num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use.
sample_size (int, optional) — The width of the latent images (specify if the input is discrete). This is fixed during training since it is used to learn a number of position embeddings.
activation_fn (str, optional, defaults to "geglu") — Activation function to use in the feed-forward layers.
attention_bias (bool, optional) — Configure if the TransformerBlock attention should contain a bias parameter.
double_self_attention (bool, optional) — Configure if each TransformerBlock should contain two self-attention layers.
( hidden_states, encoder_hidden_states = None, timestep = None, class_labels = None, num_frames = 1, cross_attention_kwargs = None, return_dict: bool = True ) → TransformerTemporalModelOutput or tuple
Parameters

hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — Input hidden_states.
encoder_hidden_states (torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.
timestep (torch.long, optional) — Used to indicate the denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
class_labels (torch.LongTensor of shape (batch size, num classes), optional) — Used to indicate class-label conditioning. Optional class labels to be applied as an embedding in AdaLayerNormZero.
return_dict (bool, optional, defaults to True) — Whether or not to return a TransformerTemporalModelOutput instead of a plain tuple.
Returns

TransformerTemporalModelOutput or tuple — If return_dict is True, a TransformerTemporalModelOutput is returned; otherwise a tuple is returned, where the first element is the sample tensor.
The TransformerTemporalModel forward method.
( sample: FloatTensor )
The output of TransformerTemporalModel.