make_image_grid([init_image, image], rows=1, cols=2)

Speed-up SDXL Turbo even more

Compile the UNet if you are using PyTorch 2.0 or later. The first inference run will be very slow, but subsequent runs will be much faster.

pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation. You only need to do this once, before your first generation:

pipe.upcast_vae()

As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcasted to float32.
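For reference, here is a minimal sketch of that alternative, assuming the community checkpoint madebyollin/sdxl-vae-fp16-fix and the SDXL Turbo text-to-image pipeline used earlier in this guide; treat the exact names as assumptions rather than part of the original example.

# Sketch (assumption): load the fp16-safe community VAE instead of upcasting the default one
import torch
from diffusers import AutoencoderKL, AutoPipelineForText2Image

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", vae=vae, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
# with this VAE, no pipe.upcast_vae() call is needed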
Adapt a model to a new task

Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel.

Configure UNet2DConditionModel parameters

A UNet2DConditionModel by default accepts 4 channels in the input sample. For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels:

from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)
pipeline.unet.config["in_channels"]
4

Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting:

from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True)
pipeline.unet.config["in_channels"]
9

To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error, because the shape is different now.

from diffusers import UNet2DConditionModel
model_id = "runwayml/stable-diffusion-v1-5"
unet = UNet2DConditionModel.from_pretrained(
model_id,
subfolder="unet",
in_channels=9,
low_cpu_mem_usage=False,
ignore_mismatched_sizes=True,
use_safetensors=True,
)

The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the UNet are randomly initialized. It is important to finetune the model for inpainting, because otherwise the model returns noise.
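As a quick sanity check (not part of the original guide), you can confirm that the adapted UNet now expects 9-channel inputs; the shape shown assumes the SD v1-5 architecture, whose first block has 320 output channels.

print(unet.config["in_channels"])   # 9
print(unet.conv_in.weight.shape)    # torch.Size([320, 9, 3, 3]) -> randomly initialized, needs finetuning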
PNDMScheduler

PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques like the Runge-Kutta and linear multi-step method. The original implementation can be found at crowsonkb/k-diffusion.

PNDMScheduler

class diffusers.PNDMScheduler

( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 )

Parameters

num_train_timesteps (int, defaults to 1000) — The number of diffusion steps to train the model.
beta_start (float, defaults to 0.0001) — The starting beta value of inference.
beta_end (float, defaults to 0.02) — The final beta value.
beta_schedule (str, defaults to "linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
trained_betas (np.ndarray, optional) — Pass an array of betas directly to the constructor to bypass beta_start and beta_end.
skip_prk_steps (bool, defaults to False) — Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before PLMS steps.
set_alpha_to_one (bool, defaults to False) — Each diffusion step uses the alphas product value at that step and at the previous one. For the final step there is no previous alpha. When this option is True, the previous alpha product is fixed to 1; otherwise it uses the alpha value at step 0.
prediction_type (str, defaults to epsilon, optional) — Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) or v_prediction (see section 2.4 of the Imagen Video paper).
timestep_spacing (str, defaults to "leading") — The way the timesteps should be scaled. Refer to Table 2 of Common Diffusion Noise Schedules and Sample Steps are Flawed for more information.
steps_offset (int, defaults to 0) — An offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as in Stable Diffusion.

PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step method.

This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers, such as loading and saving.
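To make the parameters above concrete, the following sketch constructs the scheduler with a configuration along the lines of what Stable Diffusion checkpoints use; the specific beta values and flags here are illustrative assumptions, not part of this reference.

from diffusers import PNDMScheduler

# illustrative configuration (assumed values, roughly matching Stable Diffusion)
scheduler = PNDMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    skip_prk_steps=True,       # jump straight to PLMS steps, no Runge-Kutta warm-up
    steps_offset=1,            # pair with set_alpha_to_one=False, see the parameter docs above
    set_alpha_to_one=False,
)
print(scheduler.config.beta_schedule)  # "scaled_linear"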
scale_model_input

( sample: FloatTensor *args **kwargs ) → torch.FloatTensor

Parameters

sample (torch.FloatTensor) — The input sample.

Returns: torch.FloatTensor — A scaled input sample.

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.

set_timesteps

( num_inference_steps: int device: Union = None )

Parameters

num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device, optional) — The device to which the timesteps should be moved. If None, the timesteps are not moved.

Sets the discrete timesteps used for the diffusion chain (to be run before inference).

step

( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple

Parameters

model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
timestep (int) — The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.
return_dict (bool) — Whether or not to return a SchedulerOutput or tuple.

Returns: SchedulerOutput or tuple — If return_dict is True, SchedulerOutput is returned; otherwise a tuple is returned where the first element is the sample tensor.

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise), and calls step_prk() or step_plms() depending on the internal variable counter.

step_plms

( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple

Parameters

model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
timestep (int) — The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.
return_dict (bool) — Whether or not to return a SchedulerOutput or tuple.

Returns: SchedulerOutput or tuple — If return_dict is True, SchedulerOutput is returned; otherwise a tuple is returned where the first element is the sample tensor.

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with the linear multistep method. It performs one forward pass multiple times to approximate the solution.

step_prk

( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple

Parameters

model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
timestep (int) — The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.
return_dict (bool) — Whether or not to return a SchedulerOutput or tuple.

Returns: SchedulerOutput or tuple — If return_dict is True, SchedulerOutput is returned; otherwise a tuple is returned where the first element is the sample tensor.

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential equation.

SchedulerOutput

class diffusers.schedulers.scheduling_utils.SchedulerOutput

( prev_sample: FloatTensor )

Parameters

prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.

Base class for the output of a scheduler’s step function.
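To show how set_timesteps(), scale_model_input(), and step() fit together, here is a minimal, self-contained denoising-loop sketch; the random model_output stands in for a real UNet prediction and is purely illustrative.

import torch
from diffusers import PNDMScheduler

scheduler = PNDMScheduler(skip_prk_steps=True)
scheduler.set_timesteps(50)                    # run before inference

sample = torch.randn(1, 4, 64, 64)             # e.g. a Stable Diffusion latent
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)  # no-op for PNDM, kept for interchangeability
    model_output = torch.randn_like(model_input)          # placeholder for a real UNet call
    sample = scheduler.step(model_output, t, sample).prev_sample  # SchedulerOutput.prev_sample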
Latent Consistency Model

Latent Consistency Models (LCMs) enable quality image generation in typically 2-4 steps, making it possible to use diffusion models in almost real-time settings.

From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations.

For a more technical overview of LCMs, refer to the paper.

LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection.

This guide shows how to perform inference with LCMs for:

text-to-image
image-to-image
combined with style LoRAs
ControlNet/T2I-Adapter

Text-to-image

You’ll use the StableDiffusionXLPipeline with the LCMScheduler and the LCM-distilled UNet loaded below. Together, the distilled UNet and the scheduler enable a fast inference workflow, overcoming the slow iterative nature of diffusion models.

from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler
import torch
# load the LCM-distilled UNet for SDXL in half precision
unet = UNet2DConditionModel.from_pretrained(
"latent-consistency/lcm-sdxl",
torch_dtype=torch.float16,
variant="fp16",
)
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
# replace the default scheduler with LCMScheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
generator = torch.manual_seed(0)
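The example stops here in this excerpt; a natural continuation (with assumed but typical LCM settings of 4 inference steps and a guidance scale around 8) would be:

image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]
image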