CLIPTokenizer.
prior_scheduler (UnCLIPScheduler) —
A scheduler to be used in combination with the prior to generate image embeddings.
prior_image_processor (CLIPImageProcessor) —
An image processor used to preprocess images from CLIP.

Combined pipeline for inpainting generation using Kandinsky.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all pipelines (such as downloading or saving, or running on a particular device).

__call__ ( prompt, image, mask_image, negative_prompt = None, num_inference_steps: int = 100, guidance_scale: float = 4.0, num_images_per_prompt: int = 1, height: int = 512, width: int = 512, prior_guidance_scale: float = 4.0, prior_num_inference_steps: int = 25, generator = None, latents = None, output_type = 'pil', return_dict: bool = True, prior_callback_on_step_end = None, prior_callback_on_step_end_tensor_inputs = ['latents'], callback_on_step_end = None, callback_on_step_end_tensor_inputs = ['latents'], **kwargs ) → ImagePipelineOutput or tuple

Parameters

prompt (str or List[str]) —
The prompt or prompts to guide the image generation.
image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) —
Image, or tensor representing an image batch, that will be used as the starting point for the process. Can also accept image latents as image; if latents are passed directly, they will not be encoded again.
mask_image (np.array) —
Tensor representing an image batch, used to mask image. White pixels in the mask will be repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single channel (luminance) before use. If it is a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, H, W, 1).
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
guidance_scale (float, optional, defaults to 4.0) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
num_inference_steps (int, optional, defaults to 100) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
height (int, optional, defaults to 512) —
The height in pixels of the generated image.
width (int, optional, defaults to 512) —
The width in pixels of the generated image.
prior_guidance_scale (float, optional, defaults to 4.0) —
Guidance scale for the prior, as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
prior_num_inference_steps (int, optional, defaults to 25) —
The number of denoising steps for the prior. More denoising steps usually lead to a higher quality image at the expense of slower inference.
generator (torch.Generator or List[torch.Generator], optional) —
One or a list of torch generator(s) to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor).
return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple.
prior_callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step of the prior during inference. The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict).
prior_callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step of the decoder during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs (see the callback sketch after the example below).
callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

ImagePipelineOutput or tuple
Function invoked when calling the pipeline for generation.

Examples:

from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
import numpy as np
pipe = AutoPipelineForInpainting.from_pretrained(
"kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
original_image = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 1
image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0]

enable_sequential_cpu_offload ( gpu_id = 0 )

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, text_encoder, vae and safety checker have their state dicts saved to CPU and are then moved to torch.device('meta'), loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
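A minimal sketch of the offloading and callback APIs described above, assuming the combined pipeline forwards callback_on_step_end to the decoder exactly as documented in the parameter list; the log_step helper and the printed statistic are our own illustration, not part of the original example.

import torch
import numpy as np
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
# Offload submodules to CPU; slower than enable_model_cpu_offload but uses far less GPU memory.
pipe.enable_sequential_cpu_offload()

def log_step(pipeline, step, timestep, callback_kwargs):
    # "latents" is available here because it is listed in callback_on_step_end_tensor_inputs.
    latents = callback_kwargs["latents"]
    print(f"decoder step {step}: latents std = {latents.std().item():.4f}")
    return callback_kwargs  # return the (possibly modified) tensors back to the pipeline

original_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)
mask = np.zeros((768, 768), dtype=np.float32)
mask[:250, 250:-250] = 1  # mask out an area above the cat's head, as in the example above

image = pipe(
    prompt="A fantasy landscape, Cinematic lighting",
    negative_prompt="low quality, bad quality",
    image=original_image,
    mask_image=mask,
    num_inference_steps=25,
    callback_on_step_end=log_step,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]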
Using Diffusers with other modalities

Diffusers is in the process of expanding to modalities other than images.

Example type | Colab | Pipeline
Molecule conformation generation | ❌ |

More coming soon!
Performing inference with LCM-LoRA

Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps, making it possible to use diffusion models in almost real-time settings. From the official website:

LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations.

For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. This way, we don't have to train the full model and can keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff, etc. The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8).

LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report.

This guide shows how to perform inference with LCM-LoRAs for:
text-to-image
image-to-image
combined with styled LoRAs
ControlNet/T2I-Adapter
inpainting
AnimateDiff

Before going through this guide, we'll take a look at the general workflow for performing inference with LCM-LoRAs. LCM-LoRAs are similar to other Stable Diffusion LoRAs, so they can be used with any DiffusionPipeline that supports LoRAs:

1. Load the task-specific pipeline and model.
2. Set the scheduler to LCMScheduler.
3. Load the LCM-LoRA weights for the model.
4. Reduce the guidance_scale to a value between [1.0, 2.0] and set num_inference_steps between [4, 8].
5. Perform inference with the pipeline with the usual parameters.

Let's look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support.

pip install -U peft

Text-to-image

You'll use the StableDiffusionXLPipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models.

import torch
from diffusers import DiffusionPipeline, LCMScheduler
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
variant="fp16",
torch_dtype=torch.float16
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
generator = torch.manual_seed(42)
image = pipe(
prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0
).images[0]

Notice that we use only 4 steps for generation, which is far fewer than what's typically used for standard SDXL.

You may have noticed that we set guidance_scale=1.0, which disables classifier-free guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to faster inference, with the drawback that negative prompts don't have any effect on the denoising process. You can also use guidance with LCM-LoRA, but due to the nature of its training the model is very sensitive to the guidance_scale value: high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0].

Inference with a fine-tuned model

As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill it separately. Let's look at how we can perform inference with a fine-tuned model. In this example, we'll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime.

from diffusers import DiffusionPipeline, LCMScheduler
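# The original example is truncated at this point. The rest of this block is a hedged sketch of how
# it plausibly continues, following the workflow described above. The checkpoint id
# "Linaqruf/animagine-xl" and the prompt are assumptions for illustration, not taken from this text.
import torch

pipe = DiffusionPipeline.from_pretrained(
    "Linaqruf/animagine-xl",  # assumed repo id for the animagine-xl SDXL fine-tune
    torch_dtype=torch.float16,
).to("cuda")

# Same recipe as the base SDXL example: swap in the LCM scheduler and load the SDXL LCM-LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

prompt = "portrait of a girl with teal hair, anime style, highly detailed"  # illustrative prompt
generator = torch.manual_seed(0)

# Few steps and a low guidance_scale, per the [4, 8] / [1.0, 2.0] ranges recommended above.
image = pipe(
    prompt=prompt, num_inference_steps=5, generator=generator, guidance_scale=1.5
).images[0]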