---
license: apache-2.0
---
# FastVideo Wan2.1-VSA-T2V-14B-720P-Diffusers
## Model Overview
- This model is finetuned with VSA (Video Sparse Attention), based on Wan-AI/Wan2.1-T2V-14B-Diffusers.
- It achieves up to a 2.1× speedup on a single H100 GPU.
- The model is trained at 77×768×1280 (frames × height × width), but it supports generating videos at other resolutions (quality may degrade).
- We set the VSA attention sparsity to 0.9 during training, which runs for 1,500 steps (~14 hours). For inference, you can tune this value between 0 and 0.9 to trade off speed and quality; see the sparsity sweep sketch after the inference command below.
- Finetuning and inference scripts are available in the FastVideo repository:
```bash
# install FastVideo and the VSA kernels first
git clone https://github.com/hao-ai-lab/FastVideo
cd FastVideo
pip install -e .
cd csrc/attn
git submodule update --init --recursive
python setup_vsa.py install
```
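Before running inference, you can optionally check that both the FastVideo package and the VSA kernels are importable. This is a minimal sketch: the `vsa` module name is an assumption based on `setup_vsa.py`, so adjust the import if it fails.

```bash
# Optional sanity check. Assumptions: `fastvideo` is importable after `pip install -e .`,
# and `vsa` is the module name installed by setup_vsa.py (adjust if the import fails).
python -c "import fastvideo, vsa; print('FastVideo and VSA kernels importable')"
```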
```bash
num_gpus=1
export FASTVIDEO_ATTENTION_BACKEND=VIDEO_SPARSE_ATTN
# change the model path to a local directory to run inference with your own checkpoint
export MODEL_BASE=FastVideo/Wan2.1-VSA-T2V-14B-720P-Diffusers
fastvideo generate \
--model-path $MODEL_BASE \
--sp-size $num_gpus \
--tp-size 1 \
--num-gpus $num_gpus \
--dit-cpu-offload False \
--vae-cpu-offload False \
--text-encoder-cpu-offload True \
--pin-cpu-memory False \
--height 720 \
--width 1280 \
--num-frames 81 \
--num-inference-steps 50 \
--fps 16 \
--guidance-scale 5.0 \
--flow-shift 5.0 \
--VSA-sparsity 0.9 \
--prompt-txt assets/prompt.txt \
--negative-prompt "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards" \
--seed 1024 \
--output-path outputs_Wan-VSA-14B/ \
--enable_torch_compile
```
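Since the checkpoint is trained at sparsity 0.9 but the flag can be lowered for denser attention, one way to compare the speed/quality trade-off is to sweep `--VSA-sparsity` over the same prompts. The sketch below reuses only flags from the command above; the per-run output subdirectories are just for illustration.

```bash
# Sweep VSA sparsity to compare speed vs. quality on the same prompts.
# 0.0 runs dense attention; 0.9 matches the training setting of this checkpoint.
for sparsity in 0.0 0.5 0.9; do
  fastvideo generate \
    --model-path $MODEL_BASE \
    --sp-size $num_gpus \
    --tp-size 1 \
    --num-gpus $num_gpus \
    --height 720 \
    --width 1280 \
    --num-frames 81 \
    --num-inference-steps 50 \
    --fps 16 \
    --guidance-scale 5.0 \
    --flow-shift 5.0 \
    --VSA-sparsity $sparsity \
    --prompt-txt assets/prompt.txt \
    --seed 1024 \
    --output-path outputs_Wan-VSA-14B/sparsity_${sparsity}/
done
```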
- Try it out with FastVideo; we support a wide range of GPUs, from H100 to 4090.
- We use the FastVideo 720P Synthetic Wan dataset for training.
If you use the Wan2.1-VSA-T2V-14B-720P-Diffusers model for your research, please cite our papers:
```bibtex
@article{zhang2025vsa,
  title={VSA: Faster Video Diffusion with Trainable Sparse Attention},
  author={Zhang, Peiyuan and Huang, Haofeng and Chen, Yongqi and Lin, Will and Liu, Zhengzhong and Stoica, Ion and Xing, Eric and Zhang, Hao},
  journal={arXiv preprint arXiv:2505.13389},
  year={2025}
}

@article{zhang2025fast,
  title={Fast video generation with sliding tile attention},
  author={Zhang, Peiyuan and Chen, Yongqi and Su, Runlong and Ding, Hangliang and Stoica, Ion and Liu, Zhengzhong and Zhang, Hao},
  journal={arXiv preprint arXiv:2502.04507},
  year={2025}
}
```