Wan 2.2 Accelerated Inference
Collection
Optimized demos for the Wan 2.2 14B models, using FP8 quantization, ahead-of-time (AoT) compilation, and community LoRAs for fast, high-quality inference on ZeroGPU (a sketch of the pattern follows below).
3 items
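As a rough illustration of the recipe the description names, here is a minimal sketch combining a Diffusers Wan pipeline, torchao FP8 quantization, and the `spaces` ZeroGPU AoT helpers. The model id, LoRA repo id, prompt, frame count, and quantization config are assumptions for illustration, not the exact configuration of the demos in this collection.

```python
# Minimal sketch, not the demos' exact code: model/LoRA ids and settings are assumed.
import torch
import spaces
from diffusers import WanPipeline
from torchao.quantization import quantize_, Float8DynamicActivationFloat8WeightConfig

pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",  # assumed Diffusers-format repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

# Optionally fuse a community LoRA before quantizing/compiling (repo id hypothetical).
pipeline.load_lora_weights("some-org/wan2.2-acceleration-lora")
pipeline.fuse_lora()

# FP8 dynamic quantization of the transformer's linear layers via torchao.
quantize_(pipeline.transformer, Float8DynamicActivationFloat8WeightConfig())

@spaces.GPU(duration=1500)
def compile_transformer():
    # Capture real example inputs for the transformer by tracing one pipeline call.
    with spaces.aoti_capture(pipeline.transformer) as call:
        pipeline("an example prompt", num_frames=17)
    # Export the transformer with the captured inputs, then AoT-compile it.
    exported = torch.export.export(
        pipeline.transformer, args=call.args, kwargs=call.kwargs
    )
    return spaces.aoti_compile(exported)

# Swap the compiled transformer back into the pipeline for serving.
spaces.aoti_apply(compile_transformer(), pipeline.transformer)
```

The ordering matters in this sketch: the LoRA is fused and the weights quantized before export, so the compiled artifact bakes in both and no recompilation is needed at request time.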