An attention processor is a class for applying different types of attention mechanisms.
Default processor for performing attention-related computations.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
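As a minimal sketch (assuming a Stable Diffusion v1.5 checkpoint and a CUDA device), either of these processors can be installed on a model with `set_attn_processor`:

```python
# Minimal sketch: swapping the attention processor on a UNet. Assumes a
# Stable Diffusion checkpoint; any diffusers model exposing
# `set_attn_processor` works the same way.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Use the PyTorch 2.0 scaled dot-product attention processor everywhere...
pipe.unet.set_attn_processor(AttnProcessor2_0())

# ...or fall back to the plain default processor.
pipe.unet.set_attn_processor(AttnProcessor())

image = pipe("an astronaut riding a horse").images[0]
```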
Processor for performing attention-related computations with extra learnable key and value matrices for the text encoder.
Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra learnable key and value matrices for the text encoder.
Processor for implementing flash attention using torch_npu. torch_npu supports only the fp16 and bf16 data types; if fp32 is used, F.scaled_dot_product_attention is used for the computation instead, and the acceleration on NPU is not significant.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.
This API is currently 🧪 experimental and may change in the future.
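As a rough sketch, the fused-projection variant is normally enabled through the `fuse_qkv_projections()` helper rather than constructed by hand (assuming a pipeline that exposes this helper, such as the SDXL pipeline):

```python
# Sketch: enabling fused QKV projections, which installs the fused scaled
# dot-product attention processor under the hood. Assumes a pipeline/model
# that exposes `fuse_qkv_projections`.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.fuse_qkv_projections()    # fuse q/k/v for self-attention, k/v for cross-attention
image = pipe("a photo of a cat").images[0]
pipe.unfuse_qkv_projections()  # restore the unfused projections
```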
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the Allegro model. It applies a normalization layer and rotary embedding on the query and key vectors.
Attention processor typically used for processing Aura Flow.
Attention processor typically used for processing Aura Flow with fused projections.
Processor for implementing scaled dot-product attention for the CogVideoX model. It applies a rotary embedding on query and key vectors, but does not include spatial normalization.
Processor for implementing scaled dot-product attention for the CogVideoX model. It applies a rotary embedding on query and key vectors, but does not include spatial normalization.
( batch_size = 2 )
Cross frame attention processor. Each frame attends to the first frame.
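A small sketch of installing it on a UNet (the import path below is an assumption based on the Text2Video-Zero pipeline module and may move between diffusers versions):

```python
# Sketch: cross-frame attention so that every frame attends to the first frame.
# The import path is an assumption based on the Text2Video-Zero pipeline module.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import (
    CrossFrameAttnProcessor,
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# batch_size=2 accounts for classifier-free guidance (unconditional + conditional batch).
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
```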
( train_kv: bool = True train_q_out: bool = True hidden_size: typing.Optional[int] = None cross_attention_dim: typing.Optional[int] = None out_bias: bool = True dropout: float = 0.0 )
Parameters
train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
Processor for implementing attention for the Custom Diffusion method.
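A simplified sketch of how these processors are typically attached to a UNet (the per-layer size lookup mirrors, in reduced form, what the Custom Diffusion training script does; it is not the full training setup):

```python
# Sketch: attaching Custom Diffusion processors to a UNet. Only the
# cross-attention layers ("attn2") get trainable key/value matrices for the
# text features; self-attention layers keep train_kv=False and add no weights.
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import CustomDiffusionAttnProcessor

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

processors = {}
for name in unet.attn_processors.keys():
    is_cross = name.endswith("attn2.processor")
    # Work out the hidden size of the block this processor belongs to.
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]

    processors[name] = CustomDiffusionAttnProcessor(
        train_kv=is_cross,
        train_q_out=False,
        hidden_size=hidden_size,
        cross_attention_dim=unet.config.cross_attention_dim if is_cross else None,
    )

unet.set_attn_processor(processors)
```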
( train_kv: bool = True train_q_out: bool = True hidden_size: typing.Optional[int] = None cross_attention_dim: typing.Optional[int] = None out_bias: bool = True dropout: float = 0.0 )
Parameters
train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled dot-product attention.
( train_kv: bool = True train_q_out: bool = False hidden_size: typing.Optional[int] = None cross_attention_dim: typing.Optional[int] = None out_bias: bool = True dropout: float = 0.0 attention_op: typing.Optional[typing.Callable] = None )
Parameters
train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to False) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method.
Attention processor typically used for processing SD3-like self-attention projections.
Attention processor typically used for processing SD3-like self-attention projections.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0) with fused projection layers. This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors. This variant of the processor employs Perturbed Attention Guidance.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors. This variant of the processor employs Perturbed Attention Guidance.
Processor for implementing PAG using scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). PAG reference: https://arxiv.org/abs/2403.17377
Processor for implementing PAG using scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). PAG reference: https://arxiv.org/abs/2403.17377
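The PAG processors are normally installed by a PAG-enabled pipeline rather than set by hand; a rough sketch, assuming SDXL with PAG support in a recent diffusers release:

```python
# Sketch: enabling Perturbed Attention Guidance through a PAG-enabled pipeline,
# which swaps the PAG processors into the chosen attention layers.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],  # which attention blocks get the PAG processors
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("an insect robot preparing a delicious meal", pag_scale=3.0).images[0]
```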
( hidden_size cross_attention_dim = None num_tokens = (4,) scale = 1.0 )
Parameters
hidden_size (int) — The hidden size of the attention layer.
cross_attention_dim (int) — The number of channels in the encoder_hidden_states.
num_tokens (int, Tuple[int] or List[int], defaults to (4,)) — The context length of the image features.
scale (float or List[float], defaults to 1.0) — The weight scale of the image prompt.
Attention processor for Multiple IP-Adapters.
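In practice these processors are installed by `load_ip_adapter`, and the `scale` argument is driven through `set_ip_adapter_scale`; a rough sketch:

```python
# Sketch: the IP-Adapter attention processors are installed by `load_ip_adapter`;
# `set_ip_adapter_scale` forwards the weight scale to the processors.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)

ip_image = load_image("path/to/reference.png")  # placeholder reference image
image = pipe("a cat wearing sunglasses", ip_adapter_image=ip_image).images[0]
```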
( hidden_size cross_attention_dim = None num_tokens = (4,) scale = 1.0 )
Parameters
hidden_size (int) — The hidden size of the attention layer.
cross_attention_dim (int) — The number of channels in the encoder_hidden_states.
num_tokens (int, Tuple[int] or List[int], defaults to (4,)) — The context length of the image features.
scale (float or List[float], defaults to 1.0) — The weight scale of the image prompt.
Attention processor for IP-Adapter for PyTorch 2.0.
( hidden_size: int ip_hidden_states_dim: int head_dim: int timesteps_emb_dim: int = 1280 scale: float = 0.5 )
Parameters
hidden_size (int) — The number of hidden channels.
ip_hidden_states_dim (int) — The image feature dimension.
head_dim (int) — The number of head channels.
timesteps_emb_dim (int, defaults to 1280) — The number of input channels for the timestep embedding.
scale (float, defaults to 0.5) — IP-Adapter scale.
Attention processor for IP-Adapter, typically used for processing SD3-like self-attention projections, with additional image-based information and timestep embeddings.
Attention processor typically used for processing SD3-like self-attention projections.
Attention processor typically used for processing SD3-like self-attention projections.
Attention processor typically used for processing SD3-like self-attention projections.
Attention processor typically used for processing SD3-like self-attention projections.
Processor for implementing attention with LoRA.
Processor for implementing attention with LoRA (enabled by default if you’re using PyTorch 2.0).
Processor for implementing attention with LoRA with extra learnable key and value matrices for the text encoder.
Processor for implementing attention with LoRA using xFormers.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the LuminaNextDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
Attention processor used in Mochi.
Attention processor used in Mochi VAE.
Processor for implementing scaled dot-product linear attention.
Processor for implementing multiscale quadratic attention.
Processor for implementing scaled dot-product linear attention.
Processor for implementing scaled dot-product linear attention.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the Stable Audio model. It applies rotary embedding on the query and key vectors, and allows MHA, GQA or MQA.
( slice_size: int )
Processor for implementing sliced attention.
( slice_size )
Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.
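Sliced attention is usually enabled through the pipeline helper, which installs these processors on the attention layers; setting a processor directly also works. A rough sketch:

```python
# Sketch: reducing peak memory by computing attention in slices.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import SlicedAttnProcessor

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Convenience path: let the pipeline pick a slice size.
pipe.enable_attention_slicing()

# Explicit path: install the sliced processor with a chosen slice size.
pipe.unet.set_attn_processor(SlicedAttnProcessor(slice_size=4))
```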
( attention_op: typing.Optional[typing.Callable] = None )
Parameters
attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers.
( attention_op: typing.Optional[typing.Callable] = None )
Parameters
attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers.
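xFormers attention is usually enabled through the pipeline helper, whose optional `attention_op` argument mirrors the processors' `attention_op` parameter; a rough sketch:

```python
# Sketch: enabling xFormers memory-efficient attention. Leaving attention_op
# as None lets xFormers choose the best operator.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_xformers_memory_efficient_attention()  # attention_op=None by default

# A specific op can be requested explicitly if needed:
# from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
# pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
```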
( partition_spec: typing.Optional[typing.Tuple[typing.Optional[str], ...]] = None )
Processor for implementing scaled dot-product attention with the pallas flash attention kernel if using torch_xla.
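A rough sketch, assuming the processor is exported as XLAFlashAttnProcessor2_0 in diffusers.models.attention_processor (the class name may differ across diffusers versions) and that torch_xla with pallas support is installed:

```python
# Sketch: installing the pallas flash-attention processor when running on TPU
# via torch_xla. The class name is an assumption; check your diffusers version.
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import XLAFlashAttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# partition_spec is forwarded to the pallas kernel for SPMD sharding;
# None leaves the attention unsharded.
pipe.unet.set_attn_processor(XLAFlashAttnProcessor2_0(partition_spec=None))
```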