Diffusers documentation
LongCatImageTransformer2DModel
The model can be loaded with the following code snippet.
import torch
from diffusers import LongCatImageTransformer2DModel

transformer = LongCatImageTransformer2DModel.from_pretrained(
    "meituan-longcat/LongCat-Image", subfolder="transformer", torch_dtype=torch.bfloat16
)
class diffusers.LongCatImageTransformer2DModel
< source >( patch_size: int = 1 in_channels: int = 64 num_layers: int = 19 num_single_layers: int = 38 attention_head_dim: int = 128 num_attention_heads: int = 24 joint_attention_dim: int = 3584 pooled_projection_dim: int = 3584 axes_dims_rope: typing.List[int] = [16, 56, 56] )
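The default configuration values are internally consistent in the way Flux-style DiT configs usually are: the attention width is the product of the head count and head dimension, and the rotary-embedding axis dimensions partition a single head. A minimal sketch, assuming this standard bookkeeping (the variable names below mirror the constructor arguments; nothing here is read from the model itself):

```python
# Default constructor values, copied from the signature above.
num_attention_heads = 24
attention_head_dim = 128
axes_dims_rope = [16, 56, 56]

# Assumed Flux-style relationship: the transformer's inner width is
# heads * head_dim, and the RoPE axes together span one head dimension.
inner_dim = num_attention_heads * attention_head_dim  # 3072
assert sum(axes_dims_rope) == attention_head_dim
```

If you change `attention_head_dim`, the entries of `axes_dims_rope` have to be adjusted so they still sum to it.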
The Transformer model introduced in LongCat-Image.
forward
< source >( hidden_states: Tensor encoder_hidden_states: Tensor = None timestep: LongTensor = None img_ids: Tensor = None txt_ids: Tensor = None guidance: Tensor = None return_dict: bool = True )
Parameters
- hidden_states (torch.FloatTensor of shape (batch_size, channel, height, width)) — Input hidden_states.
- encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_len, embed_dims)) — Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- timestep (torch.LongTensor) — Used to indicate the denoising step.
- block_controlnet_hidden_states (list of torch.Tensor) — A list of tensors that, if specified, are added to the residuals of transformer blocks.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.
The forward method.
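The `img_ids` and `txt_ids` arguments are not documented above; in Flux-style transformers they are per-token positional ids consumed by the 3-axis rotary embedding (one id channel per entry of `axes_dims_rope`). A minimal sketch of how such ids are commonly built, assuming that convention holds here (the helper `make_img_ids` is hypothetical, not part of the diffusers API):

```python
import torch

def make_img_ids(height: int, width: int) -> torch.Tensor:
    """Build (height*width, 3) positional ids for a latent grid.

    Channel 0 is left at zero, channels 1 and 2 carry the row and
    column index — the layout Flux-style models typically expect.
    """
    ids = torch.zeros(height, width, 3)
    ids[..., 1] = torch.arange(height)[:, None]  # row index, broadcast over columns
    ids[..., 2] = torch.arange(width)[None, :]   # column index, broadcast over rows
    return ids.reshape(height * width, 3)
```

Text-token ids (`txt_ids`) are usually all zeros of shape `(sequence_len, 3)`, so only the image tokens receive spatial rotary phases.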