Stable Diffusion XL (SDXL) Turbo was proposed in *Adversarial Diffusion Distillation* by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach.
The abstract from the paper is:
We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs, Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models.
Tips:

- Set `guidance_scale=0.0` to disable classifier-free guidance, which SDXL Turbo was trained without.
- Set `timestep_spacing="trailing"` for the scheduler and use between 1 and 4 inference steps, as shown in the sketch below.

To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide.
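Putting these tips together takes only a few lines with diffusers. The snippet below is a minimal sketch that assumes the `stabilityai/sdxl-turbo` checkpoint and a CUDA device with fp16 support; the scheduler override is shown explicitly for clarity, even though the checkpoint's shipped scheduler config may already use trailing timestep spacing.

```python
import torch
from diffusers import AutoPipelineForText2Image, EulerAncestralDiscreteScheduler

# Load the SDXL Turbo checkpoint (assumes a CUDA device with fp16 support).
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Explicitly set trailing timestep spacing on the scheduler.
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipeline.scheduler.config, timestep_spacing="trailing"
)

prompt = "A cinematic shot of a raccoon wearing an intricate Italian priest robe."

# guidance_scale=0.0 disables classifier-free guidance, and 1-4 steps is the
# intended sampling budget for SDXL Turbo.
image = pipeline(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0]
image.save("raccoon.png")
```

With `num_inference_steps=1`, generation runs in a single forward pass; raising it toward 4 trades a little speed for image quality, per the paper's findings.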
Check out the Stability AI Hub organization for the official model checkpoints!