>>> import torch
>>> import requests
>>> from PIL import Image
>>> from io import BytesIO
>>> from diffusers import AltDiffusionImg2ImgPipeline
>>> device = "cuda"
>>> model_id_or_path = "BAAI/AltDiffusion-m9"
>>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
>>> pipe = pipe.to(device)
>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> response = requests.get(url)
>>> init_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_image = init_image.resize((768, 512))
>>> # "A fantasy landscape, trending on artstation"
>>> prompt = "幻想风景, artstation"
>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
>>> images[0].save("幻想风景.png")
enable_model_cpu_offload(gpu_id = 0)
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward
method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
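For example, reusing the AltDiffusion pipeline loaded above, the call below takes the place of the explicit pipe.to(device) (a minimal sketch; it assumes accelerate is installed):

import torch
from diffusers import AltDiffusionImg2ImgPipeline

pipe = AltDiffusionImg2ImgPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
# instead of pipe.to("cuda"): each whole model is moved to the GPU only when its forward method is called
pipe.enable_model_cpu_offload()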
enable_sequential_cpu_offload(gpu_id = 0)
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet,
text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to
torch.device('meta') and loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
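The call is analogous to the one above; a minimal sketch (again assuming accelerate is installed) that trades speed for the smallest GPU memory footprint:

pipe = AltDiffusionImg2ImgPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
# offloads on a per-submodule basis: slower than enable_model_cpu_offload, but uses the least GPU memory
pipe.enable_sequential_cpu_offload()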
Control image brightness

The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark, as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. The solutions proposed in the paper are currently implemented in the DDIMScheduler, which you can use to improve the lighting in your images.

💡 Take a look at the paper linked above for more details about the proposed solutions!

One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction:

--prediction_type="v_prediction"

For example, let's use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. Next, configure the following parameters in the DDIMScheduler:

rescale_betas_zero_snr=True, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR)
timestep_spacing="trailing", starts sampling from the last timestep

from diffusers import DiffusionPipeline, DDIMScheduler
pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True)
# switch the scheduler in the pipeline to use the DDIMScheduler
pipeline.scheduler = DDIMScheduler.from_config(
pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
image = pipeline(prompt, guidance_rescale=0.7).images[0]
image
SDXL Turbo

Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach.

The abstract from the paper is:

We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs, Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models.

Tips

SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details.
SDXL Turbo should disable the guidance scale by setting guidance_scale=0.0.
SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps, as illustrated in the sketch below.
SDXL Turbo has been trained to generate images of size 512x512.
SDXL Turbo is open-access, but not open-source, meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more.

To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints!
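As a minimal sketch of the scheduler tip above (this assumes the stabilityai/sdxl-turbo checkpoint ships with an EulerAncestralDiscreteScheduler, and the stock config may already use trailing spacing, in which case the override is a no-op):

import torch
from diffusers import AutoPipelineForText2Image, EulerAncestralDiscreteScheduler

pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16").to("cuda")
# explicitly enforce trailing timestep spacing, as recommended for SDXL Turbo
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
# guidance is disabled and only a few sampling steps are used
image = pipe("a cinematic photo of a corgi", guidance_scale=0.0, num_inference_steps=4).images[0]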
Stable Diffusion XL Turbo

SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable of running inference in as little as 1 step.

This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image.

Before you begin, make sure you have the following libraries installed:

# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate omegaconf

Load model checkpoints

Model weights may be stored in separate subfolders on the Hub or locally, in which case you should use the from_pretrained() method:

from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline
import torch
pipeline = StableDiffusionXLPipeline.from_single_file(
"https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable, as the model was trained without it. A single inference step is enough to generate high quality images.
Increasing the number of steps to 2, 3 or 4 should improve image quality.

from diffusers import AutoPipelineForText2Image
import torch
pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
pipeline_text2image = pipeline_text2image.to("cuda")
prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."
image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0]
image

Image-to-image

For image-to-image generation, make sure that num_inference_steps * strength is larger or equal to 1.
The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. 0.5 * 2.0 = 1 step in our example below.

from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid
# use from_pipe to avoid consuming additional memory when loading a checkpoint
pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda")
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
init_image = init_image.resize((512, 512))
prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"
image = pipeline(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0]
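With strength=0.5 and num_inference_steps=2, this call performs int(2 * 0.5) = 1 denoising step. The make_image_grid helper imported above can then be used to compare the input and the result side by side:

# show the input image and the generated image in a 1x2 grid
make_image_grid([init_image, image], rows=1, cols=2)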