
# Load schedulers and models


Diffusion pipelines are a collection of interchangeable schedulers and models that can be mixed and matched to tailor a pipeline to a specific use case. The scheduler encapsulates the entire denoising process, such as the number of denoising steps and the algorithm for finding the denoised sample. Schedulers are not parameterized or trained, so they take very little memory. The model is usually only concerned with the forward pass of going from a noisy input to a less noisy sample.

This guide will show you how to load schedulers and models to customize a pipeline. You’ll use the stable-diffusion-v1-5/stable-diffusion-v1-5 checkpoint throughout this guide, so let’s load it first.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
```

You can see what scheduler this pipeline uses with the pipeline.scheduler attribute.

```py
pipeline.scheduler
PNDMScheduler {
  "_class_name": "PNDMScheduler",
  "_diffusers_version": "0.21.4",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": false,
  "num_train_timesteps": 1000,
  "set_alpha_to_one": false,
  "skip_prk_steps": true,
  "steps_offset": 1,
  "timestep_spacing": "leading",
  "trained_betas": null
}
```

## Load a scheduler

Schedulers are defined by a configuration file that can be used by a variety of schedulers. Load a scheduler with the SchedulerMixin.from_pretrained() method, and specify the subfolder parameter to load the configuration file from the correct subfolder of the pipeline repository.

For example, to load the DDIMScheduler:

```py
from diffusers import DDIMScheduler, DiffusionPipeline

ddim = DDIMScheduler.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="scheduler")
```

Then you can pass the newly loaded scheduler to the pipeline.

```py
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", scheduler=ddim, torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
```
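You can verify the swap by inspecting the scheduler attribute again; it should now print a DDIMScheduler configuration instead of the default PNDMScheduler one.

```py
# The pipeline now reports DDIMScheduler as its active scheduler
pipeline.scheduler
```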

## Compare schedulers

Schedulers have their own unique strengths and weaknesses, making it difficult to quantitatively compare which scheduler works best for a pipeline. You typically have to make a trade-off between denoising speed and denoising quality. We recommend trying out different schedulers to find one that works best for your use case. Check the pipeline.scheduler.compatibles attribute to see which schedulers are compatible with a pipeline.
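For example, here is an abridged sketch of what the attribute returns for the Stable Diffusion pipeline loaded above; the exact list depends on your diffusers version:

```py
pipeline.scheduler.compatibles
[
    diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler,
    diffusers.schedulers.scheduling_ddim.DDIMScheduler,
    diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler,
    diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler,
    diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler,
    diffusers.schedulers.scheduling_pndm.PNDMScheduler,
    ...
]
```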

Let’s compare the LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, and the DPMSolverMultistepScheduler on the following prompt and seed.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition."
generator = torch.Generator(device="cuda").manual_seed(8)
```

To change the pipeline's scheduler, pass the existing pipeline.scheduler.config to the new scheduler class's from_config() method.


LMSDiscreteScheduler typically generates higher quality images than the default scheduler.

```py
from diffusers import LMSDiscreteScheduler

pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
image = pipeline(prompt, generator=generator).images[0]
image
```
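The other schedulers in the comparison are swapped in exactly the same way; only the class passed to from_config() changes. A minimal sketch for EulerDiscreteScheduler (EulerAncestralDiscreteScheduler and DPMSolverMultistepScheduler follow the identical pattern):

```py
from diffusers import EulerDiscreteScheduler

pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
# Re-seed the generator so every scheduler starts from the same initial noise
generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator).images[0]
image
```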

[Image grid: sample outputs from LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, and DPMSolverMultistepScheduler]

Most images look very similar and are comparable in quality. Again, it often comes down to your specific use case, so a good approach is to run several schedulers and compare the results.
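One way to automate that comparison is a small loop that swaps each scheduler into the same pipeline and saves the result; a sketch, assuming the pipeline, prompt, and seed from earlier (the output filenames are illustrative):

```py
from diffusers import (
    LMSDiscreteScheduler,
    EulerDiscreteScheduler,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

for scheduler_cls in [
    LMSDiscreteScheduler,
    EulerDiscreteScheduler,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
]:
    # Swap the scheduler in, reusing the existing pipeline config
    pipeline.scheduler = scheduler_cls.from_config(pipeline.scheduler.config)
    # Re-seed so every scheduler denoises the same initial latents
    generator = torch.Generator(device="cuda").manual_seed(8)
    image = pipeline(prompt, generator=generator).images[0]
    image.save(f"astronaut_{scheduler_cls.__name__}.png")
```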

## Flax schedulers

To compare Flax schedulers, you need to additionally load the scheduler state into the model parameters. For example, let’s change the default scheduler in FlaxStableDiffusionPipeline to use the super fast FlaxDPMSolverMultistepScheduler.

The FlaxLMSDiscreteScheduler and FlaxDDPMScheduler are not compatible with the FlaxStableDiffusionPipeline yet.

```py
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler

scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    subfolder="scheduler",
)
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    scheduler=scheduler,
    variant="bf16",
    dtype=jax.numpy.bfloat16,
)
params["scheduler"] = scheduler_state
```

Then you can take advantage of Flax’s compatibility with TPUs to generate a number of images in parallel. You’ll need to make a copy of the model parameters for each available device and then split the inputs across them to generate your desired number of images.

```py
# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8)
prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition."
num_samples = jax.device_count()
prompt_ids = pipeline.prepare_inputs([prompt] * num_samples)

prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 25

# shard inputs and rng across devices
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
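numpy_to_pil() returns a list of PIL images, so you can save or display them directly; for example (filenames are illustrative):

```py
# Save one output image per device
for i, image in enumerate(images):
    image.save(f"astronaut_tpu_{i}.png")
```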

## Models

Models are loaded with the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configuration. If the latest files are available in the local cache, from_pretrained() reuses the cached files instead of re-downloading them.

Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for stable-diffusion-v1-5/stable-diffusion-v1-5 are stored in the unet subfolder.

```py
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet", use_safetensors=True)
```

They can also be directly loaded from a repository.

```py
from diffusers import UNet2DModel

unet = UNet2DModel.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True)
```
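To sanity-check a model loaded this way, you can run a single forward pass on random noise; a minimal sketch, reading the expected input shape from the model's own config:

```py
import torch

# A random "noisy sample" with the shape the model expects: (batch, channels, height, width)
noisy_sample = torch.randn(
    1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size
)

with torch.no_grad():
    # The model predicts the noise residual for the given timestep
    noise_pred = unet(sample=noisy_sample, timestep=2).sample

print(noise_pred.shape)  # matches the input sample's shape
```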

To load and save model variants, specify the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained().

```py
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True
)
unet.save_pretrained("./local-unet", variant="non_ema")
```
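The saved variant can then be loaded back from the local directory; the variant argument must match the one used when saving:

```py
unet = UNet2DConditionModel.from_pretrained("./local-unet", variant="non_ema", use_safetensors=True)
```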
