The CosineDPMSolverMultistepScheduler is a variant of DPMSolverMultistepScheduler with a cosine schedule, proposed by Nichol and Dhariwal (2021). It is used in the Stable Audio Open paper and the Stability-AI/stable-audio-tools codebase.
This scheduler was contributed by Yoach Lacombe.
class diffusers.CosineDPMSolverMultistepScheduler
( sigma_min: float = 0.3 sigma_max: float = 500 sigma_data: float = 1.0 sigma_schedule: str = 'exponential' num_train_timesteps: int = 1000 solver_order: int = 2 prediction_type: str = 'v_prediction' rho: float = 7.0 solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False final_sigmas_type: typing.Optional[str] = 'zero' )
Parameters

sigma_min (float, optional, defaults to 0.3) — Minimum noise magnitude in the sigma schedule. This was set to 0.3 in Stable Audio Open [1].

sigma_max (float, optional, defaults to 500) — Maximum noise magnitude in the sigma schedule. This was set to 500 in Stable Audio Open [1].

sigma_data (float, optional, defaults to 1.0) — The standard deviation of the data distribution. This was set to 1.0 in Stable Audio Open [1].

sigma_schedule (str, optional, defaults to "exponential") — Sigma schedule used to compute the sigmas. Acceptable values are "karras" (the schedule introduced in the EDM paper, https://arxiv.org/abs/2206.00364) and "exponential". The exponential schedule was incorporated in this model: https://huggingface.co/stabilityai/cosxl.

num_train_timesteps (int, defaults to 1000) — The number of diffusion steps used to train the model.

solver_order (int, defaults to 2) — The DPMSolver order, which can be 1 or 2. It is recommended to use solver_order=2.

prediction_type (str, optional, defaults to "v_prediction") — Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of the Imagen Video paper).

solver_type (str, defaults to "midpoint") — Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the sample quality, especially for a small number of steps. It is recommended to use midpoint solvers.

lower_order_final (bool, defaults to True) — Whether to use lower-order solvers in the final steps. Only valid for fewer than 15 inference steps. This can stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10.

euler_at_final (bool, defaults to False) — Whether to use Euler's method in the final step. It is a trade-off between numerical stability and detail richness. This can stabilize the sampling of the SDE variant of DPMSolver for a small number of inference steps, but may sometimes result in blurring.

final_sigmas_type (str, defaults to "zero") — The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma is the same as the last sigma in the training schedule. If "zero", the final sigma is set to 0.

Implements a variant of DPMSolverMultistepScheduler with a cosine schedule, proposed by Nichol and Dhariwal (2021).
This scheduler was used in Stable Audio Open [1].
[1] Evans, Parker, et al. “Stable Audio Open” https://arxiv.org/abs/2407.14358
This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving.
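As a minimal instantiation sketch, the values below mirror the Stable Audio Open settings documented above; they are also the class defaults, so they could be omitted entirely:

```python
from diffusers import CosineDPMSolverMultistepScheduler

# These values mirror the Stable Audio Open settings documented above and are
# also the defaults, so CosineDPMSolverMultistepScheduler() is equivalent.
scheduler = CosineDPMSolverMultistepScheduler(
    sigma_min=0.3,
    sigma_max=500.0,
    sigma_data=1.0,
    sigma_schedule="exponential",
    solver_order=2,
    prediction_type="v_prediction",
)
```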
convert_model_output
( model_output: Tensor sample: Tensor = None ) → torch.Tensor
Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an integral of the data prediction model.
The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise prediction and data prediction models.
dpm_solver_first_order_update
( model_output: Tensor sample: Tensor = None noise: typing.Optional[torch.Tensor] = None ) → torch.Tensor
One step for the first-order DPMSolver (equivalent to DDIM).
multistep_dpm_solver_second_order_update
( model_output_list: typing.List[torch.Tensor] sample: Tensor = None noise: typing.Optional[torch.Tensor] = None ) → torch.Tensor
One step for the second-order multistep DPMSolver.
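For intuition, here is a small illustrative sketch (not the library's implementation; the function name and logic are assumptions) of how a second-order multistep solver warms up and, with lower_order_final, falls back to first order at the end of a short schedule:

```python
def effective_order(step_index: int, num_steps: int, solver_order: int = 2,
                    lower_order_final: bool = True) -> int:
    # The first step has no stored model output, so only a first-order
    # (DDIM-like) update is possible; later steps can reuse the previous
    # model output for a second-order update.
    order = min(solver_order, step_index + 1)
    # With lower_order_final and a short schedule, drop back to first order
    # near the end to stabilize sampling.
    if lower_order_final and num_steps < 15:
        order = min(order, num_steps - step_index)
    return order

print([effective_order(i, num_steps=10) for i in range(10)])
# [1, 2, 2, 2, 2, 2, 2, 2, 2, 1]
```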
scale_model_input
( sample: Tensor timestep: typing.Union[float, torch.Tensor] ) → torch.Tensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5
to match the Euler algorithm.
set_begin_index
( begin_index: int = 0 )
Sets the begin index for the scheduler. This function should be run from the pipeline before inference.
set_timesteps
( num_inference_steps: int = None device: typing.Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
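A short sketch of setting up the schedule; after the call, scheduler.timesteps is assumed to hold one entry per inference step, ordered from highest to lowest noise level (standard diffusers behavior):

```python
from diffusers import CosineDPMSolverMultistepScheduler

scheduler = CosineDPMSolverMultistepScheduler()
scheduler.set_timesteps(num_inference_steps=100, device="cpu")

# One discrete timestep per inference step, highest noise level first.
print(len(scheduler.timesteps))  # 100
```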
step
( model_output: Tensor timestep: typing.Union[int, torch.Tensor] sample: Tensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple
Parameters

model_output (torch.Tensor) — The direct output from the learned diffusion model.

timestep (int) — The current discrete timestep in the diffusion chain.

sample (torch.Tensor) — A current instance of a sample created by the diffusion process.

generator (torch.Generator, optional) — A random number generator.

return_dict (bool) — Whether or not to return a SchedulerOutput or tuple.

Returns

SchedulerOutput or tuple

If return_dict is True, SchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with the multistep DPMSolver.
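A hedged end-to-end sampling sketch; denoiser is a stand-in for a real v-prediction model (for Stable Audio Open this would be the pipeline's transformer), the latent shape is purely illustrative, and the standard diffusers init_noise_sigma attribute is assumed:

```python
import torch

from diffusers import CosineDPMSolverMultistepScheduler

scheduler = CosineDPMSolverMultistepScheduler()
scheduler.set_timesteps(num_inference_steps=100, device="cpu")

def denoiser(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Stand-in for a real v-prediction model.
    return torch.zeros_like(x)

# Start from pure noise scaled to the initial noise level.
sample = torch.randn(1, 64, 1024) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    model_output = denoiser(model_input, t)
    # step() returns a SchedulerOutput; prev_sample is the input for the
    # next iteration (a plain tuple is returned if return_dict=False).
    sample = scheduler.step(model_output, t, sample).prev_sample
```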
SchedulerOutput
( prev_sample: Tensor )
Base class for the output of a scheduler’s step
function.