Inspired by Karras et al. Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library.
All credit for making this scheduler work goes to Katherine Crowson.
class diffusers.KDPM2DiscreteScheduler
( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 )
Parameters
num_train_timesteps (int) — number of diffusion steps used to train the model.
beta_start (float) — the starting beta value of inference.
beta_end (float) — the final beta value.
beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear.
trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
prediction_type (str, default epsilon, optional) — prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample) or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
timestep_spacing (str, default "linspace") — The way the timesteps should be scaled. Refer to Table 2 of Common Diffusion Noise Schedules and Sample Steps Are Flawed (https://huggingface.co/papers/2305.08891) for more information.
steps_offset (int, default 0) — an offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as done in Stable Diffusion.
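For concreteness, here is a minimal construction sketch. It assumes this page documents diffusers’ KDPM2DiscreteScheduler (inferred from the DPM-Solver-2 and k-diffusion references below); the arguments simply restate the documented defaults, so this is equivalent to calling the constructor with no arguments.

```python
from diffusers import KDPM2DiscreteScheduler  # class name inferred for this page

# Passing the documented defaults explicitly; equivalent to KDPM2DiscreteScheduler().
scheduler = KDPM2DiscreteScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="linear",
    trained_betas=None,
    prediction_type="epsilon",
    timestep_spacing="linspace",
    steps_offset=0,
)
```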
Scheduler created by @crowsonkb in k_diffusion, see: https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188
Scheduler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022).
ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__
function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and
from_pretrained() functions.
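A short sketch of the mixin behavior described above. The repo id is illustrative only; any model repository containing a scheduler subfolder config would do.

```python
from diffusers import KDPM2DiscreteScheduler

# from_pretrained() (via SchedulerMixin) loads the scheduler config from a
# model repository; the repo id below is only an example.
scheduler = KDPM2DiscreteScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)

# ConfigMixin stored the __init__ arguments on scheduler.config.
print(scheduler.config.num_train_timesteps)  # e.g. 1000

# save_pretrained() writes scheduler_config.json to the given directory.
scheduler.save_pretrained("./my_scheduler")
```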
scale_model_input
( sample: torch.FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
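A brief usage sketch, assuming a scheduler instance as above; the latent shape is an illustrative stand-in for whatever the model expects.

```python
import torch

scheduler.set_timesteps(num_inference_steps=25)  # must precede timestep access

# An illustrative 4-channel 64x64 latent batch.
sample = torch.randn(1, 4, 64, 64)
t = scheduler.timesteps[0]

# Rescale the input for the current timestep before feeding it to the model.
scaled = scheduler.scale_model_input(sample, t)
```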
set_timesteps
( num_inference_steps: int device: typing.Union[str, torch.device] = None num_train_timesteps: typing.Optional[int] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
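A usage sketch; 25 inference steps and the CPU device are arbitrary illustrative choices.

```python
import torch

# Prepare the schedule once before the denoising loop.
scheduler.set_timesteps(num_inference_steps=25, device="cpu")

# scheduler.timesteps now holds the discrete timesteps to iterate over,
# from the noisiest step down to the cleanest.
print(scheduler.timesteps)
```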
step
( model_output: typing.Union[torch.FloatTensor, numpy.ndarray] timestep: typing.Union[float, torch.FloatTensor] sample: typing.Union[torch.FloatTensor, numpy.ndarray] return_dict: bool = True ) → SchedulerOutput or tuple
Parameters
model_output (torch.FloatTensor or np.ndarray) — direct output from learned diffusion model.
timestep (float or torch.FloatTensor) — current discrete timestep in the diffusion chain.
sample (torch.FloatTensor or np.ndarray) — current instance of sample being created by the diffusion process.
return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
SchedulerOutput or tuple
SchedulerOutput if return_dict is True, otherwise a tuple. When
returning a tuple, the first element is the sample tensor.
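Putting step() together with the functions above, here is a self-contained denoising-loop sketch. fake_model is a hypothetical stand-in for a trained denoiser with the usual (sample, timestep) → prediction interface; with the default prediction_type="epsilon", its output is interpreted as predicted noise.

```python
import torch
from diffusers import KDPM2DiscreteScheduler  # class name inferred for this page

def fake_model(sample: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
    # Hypothetical denoiser: returns random "noise predictions" so the loop runs.
    return torch.randn_like(sample)

scheduler = KDPM2DiscreteScheduler()
scheduler.set_timesteps(num_inference_steps=25)

# Illustrative latent shape, scaled by the scheduler's initial noise sigma,
# following the usual diffusers pipeline pattern.
sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = fake_model(model_input, t)
    # step() returns a SchedulerOutput by default; prev_sample is the sample
    # tensor (the first element of the tuple when return_dict=False).
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```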