Function invoked when calling the pipeline for generation.

Returns: SemanticStableDiffusionPipelineOutput or tuple. A SemanticStableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Token merging

Token merging (ToMe) progressively merges redundant tokens/patches in the forward pass of a Transformer-based network, which can reduce the inference latency of StableDiffusionPipeline.

Install ToMe from pip:

```bash
pip install tomesd
```

You can use ToMe from the tomesd library with the apply_patch function:

```py
from diffusers import StableDiffusionPipeline
import torch
import tomesd

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

# patch the pipeline with ToMe; `ratio` controls how many tokens are merged
tomesd.apply_patch(pipeline, ratio=0.5)

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
```

The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated images. The most important argument is ratio, which controls the number of tokens that are merged during the forward pass.

As reported in the paper, ToMe can largely preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed up inference even further, but at the cost of some degraded image quality.

To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline using fixed settings. We didn't notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you're interested in reproducing this experiment, use this script.

Benchmarks

We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results were obtained from A100 and V100 GPUs in the following development environment:

```
- `diffusers` version: 0.15.1
- Python version: 3.8.16
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Huggingface_hub version: 0.13.2
- Transformers version: 4.27.2
- Accelerate version: 0.18.0
- xFormers version: 0.0.16
- tomesd version: 0.1.2
```

To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers.

| GPU | Resolution | Batch size | Vanilla | ToMe | ToMe + xFormers |
|---|---|---|---|---|---|
| A100 | 512 | 10 | 6.88 | 5.26 (+23.55%) | 4.69 (+31.83%) |
| | 768 | 10 | OOM | 14.71 | 11 |
| | | 8 | OOM | 11.56 | 8.84 |
| | | 4 | OOM | 5.98 | 4.66 |
| | | 2 | 4.99 | 3.24 (+35.07%) | 2.1 (+37.88%) |
| | | 1 | 3.29 | 2.24 (+31.91%) | 2.03 (+38.3%) |
| | 1024 | 10 | OOM | OOM | OOM |
| | | 8 | OOM | OOM | OOM |
| | | 4 | OOM | 12.51 | 9.09 |
| | | 2 | OOM | 6.52 | 4.96 |
| | | 1 | 6.4 | 3.61 (+43.59%) | 2.81 (+56.09%) |
| V100 | 512 | 10 | OOM | 10.03 | 9.29 |
| | | 8 | OOM | 8.05 | 7.47 |
| | | 4 | 5.7 | 4.3 (+24.56%) | 3.98 (+30.18%) |
| | | 2 | 3.14 | 2.43 (+22.61%) | 2.27 (+27.71%) |
| | | 1 | 1.88 | 1.57 (+16.49%) | 1.57 (+16.49%) |
| | 768 | 10 | OOM | OOM | 23.67 |
| | | 8 | OOM | OOM | 18.81 |
| | | 4 | OOM | 11.81 | 9.7 |
| | | 2 | OOM | 6.27 | 5.2 |
| | | 1 | 5.43 | 3.38 (+37.75%) | 2.82 (+48.07%) |
| | 1024 | 10 | OOM | OOM | OOM |
| | | 8 | OOM | OOM | OOM |
| | | 4 | OOM | OOM | 19.35 |
| | | 2 | OOM | 13 | 10.78 |
| | | 1 | OOM | 6.66 | 5.54 |

As seen in the table above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline at a higher resolution like 1024x1024. You may be able to speed up inference even more with torch.compile.
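Since the best ratio depends on your hardware and quality tolerance, it can help to time a few values yourself. The following is a minimal sketch, assuming the apply_patch and remove_patch functions from the tomesd library; the prompt, ratio values, step count, and timing method are illustrative choices rather than part of the benchmark above.

```py
import time

import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"

for ratio in [0.0, 0.3, 0.5, 0.75]:
    if ratio > 0:
        # merge `ratio` of the tokens in the attention blocks
        tomesd.apply_patch(pipeline, ratio=ratio)
    torch.cuda.synchronize()
    start = time.perf_counter()
    image = pipeline(prompt, num_inference_steps=30).images[0]
    torch.cuda.synchronize()
    print(f"ratio={ratio}: {time.perf_counter() - start:.2f}s")
    image.save(f"tome_ratio_{ratio}.png")  # compare visual quality across ratios
    if ratio > 0:
        # undo the patch before trying the next ratio
        tomesd.remove_patch(pipeline)
```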
DiffEdit

Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps:

1. The diffusion model denoises an image conditioned on some query text and reference text, which produces different noise estimates for different areas of the image; the difference is used to infer a mask that identifies which area of the image needs to be changed to match the query text.
2. The input image is encoded into latent space with DDIM.
3. The latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide so that pixels outside the mask remain the same as in the input image.

This guide will show you how to use DiffEdit to edit images without manually creating a mask.
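To make the first step more concrete, here is a toy sketch of how a mask can be derived from the difference between two noise estimates. It is purely illustrative: the tensors are fabricated, and the generate_mask() function used later in this guide averages several real noise estimates and post-processes the result, so do not read this as the pipeline's implementation.

```py
import torch

# fabricate two "noise estimates" over a 4x64x64 latent: one for the reference text
# and one for the query text; pretend they disagree only in the center of the image
torch.manual_seed(0)
noise_reference = torch.randn(4, 64, 64)
noise_query = noise_reference.clone()
noise_query[:, 16:48, 16:48] += 0.5

# average the absolute difference over channels, normalize to [0, 1], and threshold
diff = (noise_query - noise_reference).abs().mean(dim=0)
diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
mask = (diff > 0.5).float()

print(f"{mask.mean().item():.2%} of the latent is selected for editing")
```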
Before you begin, make sure you have the following libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate
```

The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated by the generate_mask() function, which takes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then:

```py
source_prompt = "a bowl of fruits"
target_prompt = "a bowl of pears"
```

The partially inverted latents are generated by the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions!

Let's load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage:

```py
import torch
from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline

pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
    safety_checker=None,
    use_safetensors=True,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()
```

Load the image to edit:

```py
from diffusers.utils import load_image, make_image_grid

img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
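# in a notebook, leaving `raw_image` as the last expression in the cell displays the loaded image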
raw_image
```

Use the generate_mask() function to generate the image mask. You'll need to pass it the source_prompt and target_prompt to specify what to edit in the image:

```py
from PIL import Image

source_prompt = "a bowl of fruits"
target_prompt = "a basket of pears"

mask_image = pipeline.generate_mask(
    image=raw_image,
    source_prompt=source_prompt,
    target_prompt=target_prompt,
)
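# the mask comes back as a small binary array (latent resolution); convert it to a
# PIL image and upsample it to see which region will be edited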
Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
```

Next, create the inverted latents and pass it a caption describing the image:

```py
inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents
```

Finally, pass the image mask and inverted latents to the pipeline. The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt:

```py
output_image = pipeline(
    prompt=target_prompt,
    mask_image=mask_image,
    image_latents=inv_latents,
    negative_prompt=source_prompt,
).images[0]
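# convert the mask to a PIL image as well so it can be shown next to the input and output images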
mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3)
```

Original image and edited image.

Generate source and target embeddings

The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library:

```py
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16)
```

Provide some initial text to prompt the model to generate the source and target prompts.

```py
source_concept = "bowl"
target_concept = "basket"

source_text = (
    f"Provide a caption for images containing a {source_concept}. "
    "The captions should be in English and should be no longer than 150 characters."
)
target_text = (
    f"Provide a caption for images containing a {target_concept}. "
    "The captions should be in English and should be no longer than 150 characters."
)
```

Next, create a utility function to generate the prompts:

```py
@torch.no_grad()
def generate_prompts(input_prompt):
    input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda")
    outputs = model.generate(
        input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

source_prompts = generate_prompts(source_text)
target_prompts = generate_prompts(target_text)
print(source_prompts)
print(target_prompts)
```

Check out the generation strategy guide if you're interested in learning more about strategies for generating different quality text.

Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. You'll use the text encoder to compute the text embeddings:

```py
import torch
from diffusers import StableDiffusionDiffEditPipeline

pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()

@torch.no_grad()
def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"):
    embeddings = []
    for sent in sentences:
        # NOTE: the body of this loop assumes standard CLIP tokenizer/text-encoder usage
        text_inputs = tokenizer(
            sent,
            padding="max_length",
            max_length=tokenizer.model_max_length,
            truncation=True,
            return_tensors="pt",
        )
        prompt_embeds = text_encoder(text_inputs.input_ids.to(device))[0]
        embeddings.append(prompt_embeds)
    # average the embeddings of the candidate captions into a single prompt embedding
    return torch.cat(embeddings, dim=0).mean(dim=0)[None]
```
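With the helper defined, a minimal sketch of computing the embeddings is shown below; it simply runs embed_prompts on the prompt lists generated earlier, reusing the pipeline's own tokenizer and text encoder. The names source_embeds and target_embeds are illustrative.

```py
source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder)
target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder)
```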