Diffusers documentation
TransformerTemporalModel

A Transformer model for video-like data.
class diffusers.models.TransformerTemporalModel
< source >( num_attention_heads: int = 16, attention_head_dim: int = 88, in_channels: typing.Optional[int] = None, out_channels: typing.Optional[int] = None, num_layers: int = 1, dropout: float = 0.0, norm_num_groups: int = 32, cross_attention_dim: typing.Optional[int] = None, attention_bias: bool = False, sample_size: typing.Optional[int] = None, activation_fn: str = 'geglu', norm_elementwise_affine: bool = True, double_self_attention: bool = True, positional_embeddings: typing.Optional[str] = None, num_positional_embeddings: typing.Optional[int] = None )
Parameters
- num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention.
- attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head.
- in_channels (`int`, *optional*): The number of channels in the input and output (specify if the input is continuous).
- num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use.
- dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use.
- cross_attention_dim (`int`, *optional*): The number of `encoder_hidden_states` dimensions to use.
- attention_bias (`bool`, *optional*): Configure if the `TransformerBlock` attention should contain a bias parameter.
- sample_size (`int`, *optional*): The width of the latent images (specify if the input is discrete). This is fixed during training since it is used to learn a number of position embeddings.
- activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to use in feed-forward. See `diffusers.models.activations.get_activation` for supported activation functions.
- norm_elementwise_affine (`bool`, *optional*): Configure if the `TransformerBlock` should use learnable elementwise affine parameters for normalization.
- double_self_attention (`bool`, *optional*): Configure if each `TransformerBlock` should contain two self-attention layers.
- positional_embeddings (`str`, *optional*): The type of positional embeddings to apply to the sequence input before processing.
- num_positional_embeddings (`int`, *optional*): The maximum length of the sequence over which to apply positional embeddings.
forward
< source >( hidden_states: Tensor, encoder_hidden_states: typing.Optional[torch.LongTensor] = None, timestep: typing.Optional[torch.LongTensor] = None, class_labels: LongTensor = None, num_frames: int = 1, cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None, return_dict: bool = True ) → TransformerTemporalModelOutput or tuple
Parameters
- hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.Tensor` of shape `(batch size, channel, height, width)` if continuous): Input hidden_states.
- encoder_hidden_states (`torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*): Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.
- timestep (`torch.LongTensor`, *optional*): Used to indicate the denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`.
- class_labels (`torch.LongTensor` of shape `(batch size, num classes)`, *optional*): Used to indicate class-label conditioning. Optional class labels to be applied as an embedding in `AdaLayerZeroNorm`.
- num_frames (`int`, *optional*, defaults to 1): The number of frames to be processed per batch. This is used to reshape the hidden states.
- cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in diffusers.models.attention_processor.
- return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a TransformerTemporalModelOutput instead of a plain tuple.
Returns
TransformerTemporalModelOutput or tuple
If `return_dict` is `True`, a TransformerTemporalModelOutput is returned; otherwise, a `tuple` is returned where the first element is the sample tensor.
The TransformerTemporalModel forward method.
TransformerTemporalModelOutput
class diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput
< source >( sample: Tensor )
The output of `TransformerTemporalModel`.