Identity-Preserving Text-to-Video Generation by Frequency Decomposition is from Peking University, the University of Rochester, and other institutions, by Shenghai Yuan, Jinfa Huang, Xianyi He, Yunyang Ge, Yujun Shi, Liuhan Chen, Jiebo Luo, and Li Yuan.
The abstract from the paper is:
Identity-preserving text-to-video (IPT2V) generation aims to create high-fidelity videos with consistent human identity. It is an important task in video generation but remains an open problem for generative models. This paper pushes the technical frontier of IPT2V in two directions that have not been resolved in the literature: (1) A tuning-free pipeline without tedious case-by-case finetuning, and (2) A frequency-aware heuristic identity-preserving Diffusion Transformer (DiT)-based control scheme. To achieve these goals, we propose ConsisID, a tuning-free DiT-based controllable IPT2V model to keep human-identity consistent in the generated video. Inspired by prior findings in frequency analysis of vision/diffusion transformers, it employs identity-control signals in the frequency domain, where facial features can be decomposed into low-frequency global features (e.g., profile, proportions) and high-frequency intrinsic features (e.g., identity markers that remain unaffected by pose changes). First, from a low-frequency perspective, we introduce a global facial extractor, which encodes the reference image and facial key points into a latent space, generating features enriched with low-frequency information. These features are then integrated into the shallow layers of the network to alleviate training challenges associated with DiT. Second, from a high-frequency perspective, we design a local facial extractor to capture high-frequency details and inject them into the transformer blocks, enhancing the model’s ability to preserve fine-grained features. To leverage the frequency information for identity preservation, we propose a hierarchical training strategy, transforming a vanilla pre-trained video generation model into an IPT2V model. Extensive experiments demonstrate that our frequency-aware heuristic scheme provides an optimal control solution for DiT-based models. Thanks to this scheme, our ConsisID achieves excellent results in generating high-quality, identity-preserving videos, making strides towards more effective IPT2V. The model weights of ConsisID are publicly available at https://github.com/PKU-YuanGroup/ConsisID.
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
This pipeline was contributed by SHYuanBest. The original codebase can be found here. The original weights can be found under hf.co/BestWishYsh.
There are two official ConsisID checkpoints for identity-preserving text-to-video.
| checkpoints | recommended inference dtype |
|---|---|
| `BestWishYsh/ConsisID-preview` | `torch.bfloat16` |
| `BestWishYsh/ConsisID-1.5` | `torch.bfloat16` |
ConsisID requires about 44 GB of GPU memory to decode 49 frames (6 seconds of video at 8 FPS) at an output resolution of 720x480 (W x H), which makes it impossible to run on consumer GPUs or the free-tier T4 Colab. The following memory optimizations can be used to reduce the memory footprint (a short sketch of enabling them follows the table). For replication, you can refer to this script.
| Feature (each row is applied on top of the previous ones) | Max Memory Allocated | Max Memory Reserved |
|---|---|---|
| - | 37 GB | 44 GB |
| `enable_model_cpu_offload` | 22 GB | 25 GB |
| `enable_sequential_cpu_offload` | 16 GB | 22 GB |
| `vae.enable_slicing` | 16 GB | 22 GB |
| `vae.enable_tiling` | 5 GB | 7 GB |
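A minimal sketch of enabling these optimizations on a loaded pipeline. Exact savings depend on your hardware and diffusers version; in practice you would pick either model offload (faster) or sequential offload (lower memory), not both.

```python
import torch
from diffusers import ConsisIDPipeline

pipe = ConsisIDPipeline.from_pretrained("BestWishYsh/ConsisID-preview", torch_dtype=torch.bfloat16)

# Offload submodules to the CPU and move them to the GPU only when they are needed.
# `enable_model_cpu_offload()` is the faster alternative; sequential offload saves more memory.
pipe.enable_sequential_cpu_offload()

# Decode latents slice-by-slice and tile-by-tile to reduce VAE memory usage.
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
```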
( tokenizer: T5Tokenizer, text_encoder: T5EncoderModel, vae: AutoencoderKLCogVideoX, transformer: ConsisIDTransformer3DModel, scheduler: CogVideoXDPMScheduler )
Parameters

- **tokenizer** (`T5Tokenizer`) — Tokenizer of class T5Tokenizer.
- **text_encoder** (`T5EncoderModel`) — Frozen text-encoder. ConsisID uses T5; specifically the t5-v1_1-xxl variant.
- **vae** (`AutoencoderKLCogVideoX`) — Variational Auto-Encoder (VAE) model to encode and decode videos to and from latent representations.
- **transformer** (`ConsisIDTransformer3DModel`) — A text-conditioned ConsisIDTransformer3DModel to denoise the encoded video latents.
- **scheduler** (`CogVideoXDPMScheduler`) — A scheduler to be used in combination with `transformer` to denoise the encoded video latents.

Pipeline for image-to-video generation using ConsisID.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
( image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]], prompt: typing.Union[str, typing.List[str], NoneType] = None, negative_prompt: typing.Union[str, typing.List[str], NoneType] = None, height: int = 480, width: int = 720, num_frames: int = 49, num_inference_steps: int = 50, guidance_scale: float = 6.0, use_dynamic_cfg: bool = False, num_videos_per_prompt: int = 1, eta: float = 0.0, generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None, latents: typing.Optional[torch.FloatTensor] = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, output_type: str = 'pil', return_dict: bool = True, attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None, callback_on_step_end: typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None, callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'], max_sequence_length: int = 226, id_vit_hidden: typing.Optional[torch.Tensor] = None, id_cond: typing.Optional[torch.Tensor] = None, kps_cond: typing.Optional[torch.Tensor] = None ) → ConsisIDPipelineOutput or tuple
Parameters

- **image** (`PipelineImageInput`) — The input image to condition the generation on. Must be an image, a list of images, or a `torch.Tensor`.
- **prompt** (`str` or `List[str]`, *optional*) — The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) — The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- **height** (`int`, *optional*, defaults to `self.transformer.config.sample_height * self.vae_scale_factor_spatial`) — The height in pixels of the generated image. This is set to 480 by default for the best results.
- **width** (`int`, *optional*, defaults to `self.transformer.config.sample_width * self.vae_scale_factor_spatial`) — The width in pixels of the generated image. This is set to 720 by default for the best results.
- **num_frames** (`int`, defaults to 49) — Number of frames to generate. Must be divisible by `self.vae_scale_factor_temporal`. The generated video will contain 1 extra frame because ConsisID is conditioned with (num_seconds * fps + 1) frames, where num_seconds is 6 and fps is 8 (6 * 8 + 1 = 49). However, since videos can be saved at any fps, the only condition that needs to be satisfied is the divisibility mentioned above.
- **num_inference_steps** (`int`, *optional*, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 6) — Guidance scale as defined in Classifier-Free Diffusion Guidance. `guidance_scale` is defined as `w` of equation 2 of the Imagen paper. Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
- **use_dynamic_cfg** (`bool`, *optional*, defaults to `False`) — If True, dynamically adjusts the guidance scale during inference. This allows the model to use a progressive guidance scale, improving the balance between text-guided generation and image quality over the course of the inference steps. Typically, early inference steps use a higher guidance scale for more faithful image generation, while later steps reduce it for more diverse and natural results.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) — The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) — One or a list of torch generator(s) to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) — The output format of the generated image. Choose between PIL (`PIL.Image.Image`) or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) — Whether or not to return a `ConsisIDPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in diffusers.models.attention_processor.
- **callback_on_step_end** (`Callable`, *optional*) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) — The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as the `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 226) — Maximum sequence length in the encoded prompt. Must be consistent with `self.transformer.config.max_text_seq_length`, otherwise it may lead to poor results.
- **id_vit_hidden** (`Optional[torch.Tensor]`, *optional*) — The tensor representing the hidden features extracted from the face model, which are used to condition the local facial extractor. This is crucial for the model to obtain high-frequency information of the face. If not provided, the local facial extractor will not run normally.
- **id_cond** (`Optional[torch.Tensor]`, *optional*) — The tensor representing the hidden features extracted from the CLIP model, which are used to condition the local facial extractor. This is crucial for the model to edit facial features. If not provided, the local facial extractor will not run normally.
- **kps_cond** (`Optional[torch.Tensor]`, *optional*) — A tensor that determines whether the global facial extractor uses keypoint information for conditioning. If provided, this tensor controls whether facial keypoints such as eyes, nose, and mouth landmarks are used during the generation process. This helps ensure the model retains more facial low-frequency information.

Returns

`ConsisIDPipelineOutput` or `tuple` — A `ConsisIDPipelineOutput` if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.
Function invoked when calling the pipeline for generation.
Examples:
>>> import torch
>>> from diffusers import ConsisIDPipeline
>>> from diffusers.pipelines.consisid.consisid_utils import prepare_face_models, process_face_embeddings_infer
>>> from diffusers.utils import export_to_video
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="BestWishYsh/ConsisID-preview", local_dir="BestWishYsh/ConsisID-preview")
>>> face_helper_1, face_helper_2, face_clip_model, face_main_model, eva_transform_mean, eva_transform_std = (
... prepare_face_models("BestWishYsh/ConsisID-preview", device="cuda", dtype=torch.bfloat16)
... )
>>> pipe = ConsisIDPipeline.from_pretrained("BestWishYsh/ConsisID-preview", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> # ConsisID works well with long and well-described prompts. Make sure the face in the image is clearly visible (e.g., preferably half-body or full-body).
>>> prompt = "The video captures a boy walking along a city street, filmed in black and white on a classic 35mm camera. His expression is thoughtful, his brow slightly furrowed as if he's lost in contemplation. The film grain adds a textured, timeless quality to the image, evoking a sense of nostalgia. Around him, the cityscape is filled with vintage buildings, cobblestone sidewalks, and softly blurred figures passing by, their outlines faint and indistinct. Streetlights cast a gentle glow, while shadows play across the boy's path, adding depth to the scene. The lighting highlights the boy's subtle smile, hinting at a fleeting moment of curiosity. The overall cinematic atmosphere, complete with classic film still aesthetics and dramatic contrasts, gives the scene an evocative and introspective feel."
>>> image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_input.png?download=true"
>>> id_cond, id_vit_hidden, image, face_kps = process_face_embeddings_infer(
... face_helper_1,
... face_clip_model,
... face_helper_2,
... eva_transform_mean,
... eva_transform_std,
... face_main_model,
... "cuda",
... torch.bfloat16,
... image,
... is_align_face=True,
... )
>>> video = pipe(
... image=image,
... prompt=prompt,
... num_inference_steps=50,
... guidance_scale=6.0,
... use_dynamic_cfg=False,
... id_vit_hidden=id_vit_hidden,
... id_cond=id_cond,
... kps_cond=face_kps,
... generator=torch.Generator("cuda").manual_seed(42),
... )
>>> export_to_video(video.frames[0], "output.mp4", fps=8)
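The `callback_on_step_end` arguments described above can be used to inspect or adjust tensors while the video is being denoised. Below is a minimal sketch that reuses the objects from the example above; the `log_progress` helper is illustrative and not part of the pipeline.

```python
def log_progress(pipeline, step, timestep, callback_kwargs):
    # Called after every denoising step; `callback_kwargs` holds the tensors
    # requested via `callback_on_step_end_tensor_inputs` (here: the latents).
    latents = callback_kwargs["latents"]
    print(f"step {step:3d} | timestep {timestep} | latents std {latents.std().item():.4f}")
    # Return the (possibly modified) tensors so the sampling loop picks them up.
    return {"latents": latents}

video = pipe(
    image=image,
    prompt=prompt,
    id_vit_hidden=id_vit_hidden,
    id_cond=id_cond,
    kps_cond=face_kps,
    callback_on_step_end=log_progress,
    callback_on_step_end_tensor_inputs=["latents"],
)
```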
( prompt: typing.Union[str, typing.List[str]], negative_prompt: typing.Union[str, typing.List[str], NoneType] = None, do_classifier_free_guidance: bool = True, num_videos_per_prompt: int = 1, prompt_embeds: typing.Optional[torch.Tensor] = None, negative_prompt_embeds: typing.Optional[torch.Tensor] = None, max_sequence_length: int = 226, device: typing.Optional[torch.device] = None, dtype: typing.Optional[torch.dtype] = None )
Parameters

- **prompt** (`str` or `List[str]`, *optional*) — The prompt to be encoded.
- **negative_prompt** (`str` or `List[str]`, *optional*) — The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) — Whether to use classifier-free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) — Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- **device** (`torch.device`, *optional*) — The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) — The torch dtype of the resulting embeddings.

Encodes the prompt into text encoder hidden states.
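`encode_prompt` can be used to precompute text embeddings once and reuse them across several generations. A minimal sketch, reusing `pipe` and the conditioning tensors from the example above, and assuming the method returns a `(prompt_embeds, negative_prompt_embeds)` pair as in the related CogVideoX pipelines:

```python
# Precompute the T5 text embeddings once; assumed to return
# (prompt_embeds, negative_prompt_embeds) when classifier-free guidance is enabled.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt=prompt,
    negative_prompt=None,
    do_classifier_free_guidance=True,
    num_videos_per_prompt=1,
    max_sequence_length=226,
)

# Reuse the cached embeddings instead of passing `prompt` again.
video = pipe(
    image=image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    id_vit_hidden=id_vit_hidden,
    id_cond=id_cond,
    kps_cond=face_kps,
)
```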
( frames: Tensor )
Parameters

- **frames** (`torch.Tensor`, `np.ndarray`, or `List[List[PIL.Image.Image]]`) — List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape `(batch_size, num_frames, channels, height, width)`.

Output class for ConsisID pipelines.