Commit 5bca759 (verified) · 1 parent: 9069d05
diffusers-benchmarking-bot committed: Upload folder using huggingface_hub

Files changed (2):
1. main/README.md +194 -64
2. main/mixture_tiling_sdxl.py +1185 -0
main/README.md CHANGED
@@ -24,8 +24,8 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
24
  | Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) |[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/speech_to_image.ipynb) | [Mikail Duzenli](https://github.com/MikailINTech)
25
  | Wild Card Stable Diffusion | Stable Diffusion Pipeline that supports prompts that contain wildcard terms (indicated by surrounding double underscores), with values instantiated randomly from a corresponding txt file or a dictionary of possible values | [Wildcard Stable Diffusion](#wildcard-stable-diffusion) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/wildcard_stable_diffusion.ipynb) | [Shyam Sudhakaran](https://github.com/shyamsn97) |
26
  | [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) | Stable Diffusion Pipeline that supports prompts that contain "|" in prompts (as an AND condition) and weights (separated by "|" as well) to positively / negatively weight prompts. | [Composable Stable Diffusion](#composable-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
27
- | Seed Resizing Stable Diffusion | Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | - | [Mark Rich](https://github.com/MarkRich) |
28
- | Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image | [Imagic Stable Diffusion](#imagic-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
29
  | Multilingual Stable Diffusion | Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/multilingual_stable_diffusion.ipynb) | [Juan Carlos Piñeros](https://github.com/juancopi81) |
30
  | GlueGen Stable Diffusion | Stable Diffusion Pipeline that supports prompts in different languages using GlueGen adapter. | [GlueGen Stable Diffusion](#gluegen-stable-diffusion-pipeline) | - | [Phạm Hồng Vinh](https://github.com/rootonchair) |
31
  | Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
@@ -37,7 +37,7 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
37
  | MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | - | [Partho Das](https://github.com/daspartho) |
38
  | Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_unclip.ipynb) | [Ray Wang](https://wrong.wang) |
39
  | UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/unclip_text_interpolation.ipynb)| [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
40
- | UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
41
  | DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/ddim_noise_comparative_analysis.ipynb)| [Aengus (Duc-Anh)](https://github.com/aengusng8) |
42
  | CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
43
  | TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
@@ -50,6 +50,8 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
50
  | IADB Pipeline | Implementation of [Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) | [IADB Pipeline](#iadb-pipeline) | - | [Thomas Chambon](https://github.com/tchambon)
51
  | Zero1to3 Pipeline | Implementation of [Zero-1-to-3: Zero-shot One Image to 3D Object](https://arxiv.org/abs/2303.11328) | [Zero1to3 Pipeline](#zero1to3-pipeline) | - | [Xin Kong](https://github.com/kxhit) |
52
  | Stable Diffusion XL Long Weighted Prompt Pipeline | A pipeline support unlimited length of prompt and negative prompt, use A1111 style of prompt weighting | [Stable Diffusion XL Long Weighted Prompt Pipeline](#stable-diffusion-xl-long-weighted-prompt-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LsqilswLR40XLLcp6XFOl5nKb_wOe26W?usp=sharing) | [Andrew Zhu](https://xhinker.medium.com/) |
 
 
53
  | FABRIC - Stable Diffusion with feedback Pipeline | pipeline supports feedback from liked and disliked images | [Stable Diffusion Fabric Pipeline](#stable-diffusion-fabric-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_diffusion_fabric.ipynb)| [Shauray Singh](https://shauray8.github.io/about_shauray/) |
54
  | sketch inpaint - Inpainting with non-inpaint Stable Diffusion | sketch inpaint much like in automatic1111 | [Masked Im2Im Stable Diffusion Pipeline](#stable-diffusion-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
55
  | sketch inpaint xl - Inpainting with non-inpaint Stable Diffusion | sketch inpaint much like in automatic1111 | [Masked Im2Im Stable Diffusion XL Pipeline](#stable-diffusion-xl-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
@@ -57,7 +59,7 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
57
  | Latent Consistency Pipeline | Implementation of [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) | [Latent Consistency Pipeline](#latent-consistency-pipeline) | - | [Simian Luo](https://github.com/luosiallen) |
58
  | Latent Consistency Img2img Pipeline | Img2img pipeline for Latent Consistency Models | [Latent Consistency Img2Img Pipeline](#latent-consistency-img2img-pipeline) | - | [Logan Zoellner](https://github.com/nagolinc) |
59
  | Latent Consistency Interpolation Pipeline | Interpolate the latent space of Latent Consistency Models with multiple prompts | [Latent Consistency Interpolation Pipeline](#latent-consistency-interpolation-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pK3NrLWJSiJsBynLns1K1-IDTW9zbPvl?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
60
- | SDE Drag Pipeline | The pipeline supports drag editing of images using stochastic differential equations | [SDE Drag Pipeline](#sde-drag-pipeline) | - | [NieShen](https://github.com/NieShenRuc) [Fengqi Zhu](https://github.com/Monohydroxides) |
61
  | Regional Prompting Pipeline | Assign multiple prompts for different regions | [Regional Prompting Pipeline](#regional-prompting-pipeline) | - | [hako-mikan](https://github.com/hako-mikan) |
62
  | LDM3D-sr (LDM3D upscaler) | Upscale low resolution RGB and depth inputs to high resolution | [StableDiffusionUpscaleLDM3D Pipeline](https://github.com/estelleafl/diffusers/tree/ldm3d_upscaler_community/examples/community#stablediffusionupscaleldm3d-pipeline) | - | [Estelle Aflalo](https://github.com/estelleafl) |
63
  | AnimateDiff ControlNet Pipeline | Combines AnimateDiff with precise motion control using ControlNets | [AnimateDiff ControlNet Pipeline](#animatediff-controlnet-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SKboYeGjEQmQPWoFC0aLYpBlYdHXkvAu?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) and [Edoardo Botta](https://github.com/EdoardoBotta) |
@@ -948,10 +950,15 @@ image.save('./imagic/imagic_image_alpha_2.png')
948
  Test seed resizing. Originally generate an image in 512 by 512, then generate image with same seed at 512 by 592 using seed resizing. Finally, generate 512 by 592 using original stable diffusion pipeline.
949
 
950
  ```python
 
951
  import torch as th
952
  import numpy as np
953
  from diffusers import DiffusionPipeline
954
 
 
 
 
 
955
  has_cuda = th.cuda.is_available()
956
  device = th.device('cpu' if not has_cuda else 'cuda')
957
 
@@ -965,7 +972,6 @@ def dummy(images, **kwargs):
965
 
966
  pipe.safety_checker = dummy
967
 
968
-
969
  images = []
970
  th.manual_seed(0)
971
  generator = th.Generator("cuda").manual_seed(0)
@@ -984,15 +990,14 @@ res = pipe(
984
  width=width,
985
  generator=generator)
986
  image = res.images[0]
987
- image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
988
-
989
 
990
  th.manual_seed(0)
991
  generator = th.Generator("cuda").manual_seed(0)
992
 
993
  pipe = DiffusionPipeline.from_pretrained(
994
  "CompVis/stable-diffusion-v1-4",
995
- custom_pipeline="/home/mark/open_source/diffusers/examples/community/"
996
  ).to(device)
997
 
998
  width = 512
@@ -1006,11 +1011,11 @@ res = pipe(
1006
  width=width,
1007
  generator=generator)
1008
  image = res.images[0]
1009
- image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
1010
 
1011
  pipe_compare = DiffusionPipeline.from_pretrained(
1012
  "CompVis/stable-diffusion-v1-4",
1013
- custom_pipeline="/home/mark/open_source/diffusers/examples/community/"
1014
  ).to(device)
1015
 
1016
  res = pipe_compare(
@@ -1023,7 +1028,7 @@ res = pipe_compare(
1023
  )
1024
 
1025
  image = res.images[0]
1026
- image.save('./seed_resize/seed_resize_{w}_{h}_image_compare.png'.format(w=width, h=height))
1027
  ```
1028
 
1029
  ### Multilingual Stable Diffusion Pipeline
@@ -1543,6 +1548,8 @@ This Diffusion Pipeline takes two images or an image_embeddings tensor of size 2
1543
  import torch
1544
  from diffusers import DiffusionPipeline
1545
  from PIL import Image
 
 
1546
 
1547
  device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
1548
  dtype = torch.float16 if torch.cuda.is_available() else torch.bfloat16
@@ -1554,13 +1561,25 @@ pipe = DiffusionPipeline.from_pretrained(
1554
  )
1555
  pipe.to(device)
1556
 
1557
- images = [Image.open('./starry_night.jpg'), Image.open('./flowers.jpg')]
1558
  # For best results keep the prompts close in length to each other. Of course, feel free to try out with differing lengths.
1559
  generator = torch.Generator(device=device).manual_seed(42)
1560
 
1561
  output = pipe(image=images, steps=6, generator=generator)
1562
 
1563
- for i,image in enumerate(output.images):
1564
  image.save('starry_to_flowers_%s.jpg' % i)
1565
  ```
1566
 
@@ -2385,7 +2404,7 @@ pipe_images = mixing_pipeline(
2385
 
2386
  ![image_mixing_result](https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/boromir_gigachad.png)
2387
 
2388
- ### Stable Diffusion Mixture Tiling
2389
 
2390
  This pipeline uses the Mixture. Refer to the [Mixture](https://arxiv.org/abs/2302.02412) paper for more details.
2391
 
@@ -2416,6 +2435,96 @@ image = pipeline(
2416
 
2417
  ![mixture_tiling_results](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/mixture_tiling.png)
 
 
2419
  ### TensorRT Inpainting Stable Diffusion Pipeline
2420
 
2421
  The TensorRT Pipeline can be used to accelerate the Inpainting Stable Diffusion Inference run.
@@ -2458,41 +2567,6 @@ image = pipe(prompt, image=input_image, mask_image=mask_image, strength=0.75,).i
2458
  image.save('tensorrt_inpaint_mecha_robot.png')
2459
  ```
2460
 
2461
- ### Stable Diffusion Mixture Canvas
2462
-
2463
- This pipeline uses the Mixture. Refer to the [Mixture](https://arxiv.org/abs/2302.02412) paper for more details.
2464
-
2465
- ```python
2466
- from PIL import Image
2467
- from diffusers import LMSDiscreteScheduler, DiffusionPipeline
2468
- from diffusers.pipelines.pipeline_utils import Image2ImageRegion, Text2ImageRegion, preprocess_image
2469
-
2470
-
2471
- # Load and preprocess guide image
2472
- iic_image = preprocess_image(Image.open("input_image.png").convert("RGB"))
2473
-
2474
- # Create scheduler and model (similar to StableDiffusionPipeline)
2475
- scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
2476
- pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler).to("cuda:0", custom_pipeline="mixture_canvas")
2477
- pipeline.to("cuda")
2478
-
2479
- # Mixture of Diffusers generation
2480
- output = pipeline(
2481
- canvas_height=800,
2482
- canvas_width=352,
2483
- regions=[
2484
- Text2ImageRegion(0, 800, 0, 352, guidance_scale=8,
2485
- prompt=f"best quality, masterpiece, WLOP, sakimichan, art contest winner on pixiv, 8K, intricate details, wet effects, rain drops, ethereal, mysterious, futuristic, UHD, HDR, cinematic lighting, in a beautiful forest, rainy day, award winning, trending on artstation, beautiful confident cheerful young woman, wearing a futuristic sleeveless dress, ultra beautiful detailed eyes, hyper-detailed face, complex, perfect, model, textured, chiaroscuro, professional make-up, realistic, figure in frame, "),
2486
- Image2ImageRegion(352-800, 352, 0, 352, reference_image=iic_image, strength=1.0),
2487
- ],
2488
- num_inference_steps=100,
2489
- seed=5525475061,
2490
- )["images"][0]
2491
- ```
2492
-
2493
- ![Input_Image](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/input_image.png)
2494
- ![mixture_canvas_results](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/canvas.png)
2495
-
2496
  ### IADB pipeline
2497
 
2498
  This pipeline is the implementation of the [α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) paper.
@@ -3909,33 +3983,89 @@ This pipeline provides drag-and-drop image editing using stochastic differential
3909
  See [paper](https://arxiv.org/abs/2311.01410), [paper page](https://ml-gsai.github.io/SDE-Drag-demo/), [original repo](https://github.com/ML-GSAI/SDE-Drag) for more information.
3910
 
3911
  ```py
3912
- import PIL
3913
  import torch
3914
  from diffusers import DDIMScheduler, DiffusionPipeline
 
 
 
 
3915
 
3916
  # Load the pipeline
3917
  model_path = "stable-diffusion-v1-5/stable-diffusion-v1-5"
3918
  scheduler = DDIMScheduler.from_pretrained(model_path, subfolder="scheduler")
3919
  pipe = DiffusionPipeline.from_pretrained(model_path, scheduler=scheduler, custom_pipeline="sde_drag")
3920
- pipe.to('cuda')
3921
 
3922
- # To save GPU memory, torch.float16 can be used, but it may compromise image quality.
3923
- # If not training LoRA, please avoid using torch.float16
3924
- # pipe.to(torch.float16)
 
3925
 
3926
- # Provide prompt, image, mask image, and the starting and target points for drag editing.
3927
- prompt = "prompt of the image"
3928
- image = PIL.Image.open('/path/to/image')
3929
- mask_image = PIL.Image.open('/path/to/mask_image')
3930
- source_points = [[123, 456]]
3931
- target_points = [[234, 567]]
3932
 
3933
- # train_lora is optional, and in most cases, using train_lora can better preserve consistency with the original image.
3934
- pipe.train_lora(prompt, image)
 
3935
 
3936
- output = pipe(prompt, image, mask_image, source_points, target_points)
3937
- output_image = PIL.Image.fromarray(output)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
3938
  output_image.save("./output.png")
 
 
3939
  ```
3940
 
3941
  ### Instaflow Pipeline
 
24
  | Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) |[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/speech_to_image.ipynb) | [Mikail Duzenli](https://github.com/MikailINTech)
25
  | Wild Card Stable Diffusion | Stable Diffusion Pipeline that supports prompts that contain wildcard terms (indicated by surrounding double underscores), with values instantiated randomly from a corresponding txt file or a dictionary of possible values | [Wildcard Stable Diffusion](#wildcard-stable-diffusion) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/wildcard_stable_diffusion.ipynb) | [Shyam Sudhakaran](https://github.com/shyamsn97) |
26
  | [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) | Stable Diffusion Pipeline that supports prompts that contain "|" in prompts (as an AND condition) and weights (separated by "|" as well) to positively / negatively weight prompts. | [Composable Stable Diffusion](#composable-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
27
+ | Seed Resizing Stable Diffusion | Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/seed_resizing.ipynb) | [Mark Rich](https://github.com/MarkRich) |
28
+ | Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image | [Imagic Stable Diffusion](#imagic-stable-diffusion) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/imagic_stable_diffusion.ipynb) | [Mark Rich](https://github.com/MarkRich) |
29
  | Multilingual Stable Diffusion | Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/multilingual_stable_diffusion.ipynb) | [Juan Carlos Piñeros](https://github.com/juancopi81) |
30
  | GlueGen Stable Diffusion | Stable Diffusion Pipeline that supports prompts in different languages using GlueGen adapter. | [GlueGen Stable Diffusion](#gluegen-stable-diffusion-pipeline) | - | [Phạm Hồng Vinh](https://github.com/rootonchair) |
31
  | Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
 
37
  | MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | - | [Partho Das](https://github.com/daspartho) |
38
  | Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_unclip.ipynb) | [Ray Wang](https://wrong.wang) |
39
  | UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/unclip_text_interpolation.ipynb)| [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
40
+ | UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/unclip_image_interpolation.ipynb)| [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
41
  | DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/ddim_noise_comparative_analysis.ipynb)| [Aengus (Duc-Anh)](https://github.com/aengusng8) |
42
  | CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
43
  | TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
 
50
  | IADB Pipeline | Implementation of [Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) | [IADB Pipeline](#iadb-pipeline) | - | [Thomas Chambon](https://github.com/tchambon)
51
  | Zero1to3 Pipeline | Implementation of [Zero-1-to-3: Zero-shot One Image to 3D Object](https://arxiv.org/abs/2303.11328) | [Zero1to3 Pipeline](#zero1to3-pipeline) | - | [Xin Kong](https://github.com/kxhit) |
52
  | Stable Diffusion XL Long Weighted Prompt Pipeline | A pipeline support unlimited length of prompt and negative prompt, use A1111 style of prompt weighting | [Stable Diffusion XL Long Weighted Prompt Pipeline](#stable-diffusion-xl-long-weighted-prompt-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LsqilswLR40XLLcp6XFOl5nKb_wOe26W?usp=sharing) | [Andrew Zhu](https://xhinker.medium.com/) |
53
+ | Stable Diffusion Mixture Tiling Pipeline SD 1.5 | A pipeline that generates cohesive images by integrating multiple diffusion processes, each focused on a specific image region, while accounting for boundary effects to blend the regions smoothly | [Stable Diffusion Mixture Tiling Pipeline SD 1.5](#stable-diffusion-mixture-tiling-sd-15) | [![Hugging Face Space](https://img.shields.io/badge/🤗%20Hugging%20Face-Space-yellow)](https://huggingface.co/spaces/albarji/mixture-of-diffusers) | [Álvaro B Jiménez](https://github.com/albarji/) |
+ | Stable Diffusion Mixture Tiling Pipeline SDXL | A pipeline that generates cohesive images by integrating multiple diffusion processes, each focused on a specific image region, while accounting for boundary effects to blend the regions smoothly | [Stable Diffusion Mixture Tiling Pipeline SDXL](#stable-diffusion-mixture-tiling-sdxl) | [![Hugging Face Space](https://img.shields.io/badge/🤗%20Hugging%20Face-Space-yellow)](https://huggingface.co/spaces/elismasilva/mixture-of-diffusers-sdxl-tiling) | [Eliseu Silva](https://github.com/DEVAIEXP/) |
55
  | FABRIC - Stable Diffusion with feedback Pipeline | pipeline supports feedback from liked and disliked images | [Stable Diffusion Fabric Pipeline](#stable-diffusion-fabric-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_diffusion_fabric.ipynb)| [Shauray Singh](https://shauray8.github.io/about_shauray/) |
56
  | sketch inpaint - Inpainting with non-inpaint Stable Diffusion | sketch inpaint much like in automatic1111 | [Masked Im2Im Stable Diffusion Pipeline](#stable-diffusion-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
57
  | sketch inpaint xl - Inpainting with non-inpaint Stable Diffusion | sketch inpaint much like in automatic1111 | [Masked Im2Im Stable Diffusion XL Pipeline](#stable-diffusion-xl-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
 
59
  | Latent Consistency Pipeline | Implementation of [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) | [Latent Consistency Pipeline](#latent-consistency-pipeline) | - | [Simian Luo](https://github.com/luosiallen) |
60
  | Latent Consistency Img2img Pipeline | Img2img pipeline for Latent Consistency Models | [Latent Consistency Img2Img Pipeline](#latent-consistency-img2img-pipeline) | - | [Logan Zoellner](https://github.com/nagolinc) |
61
  | Latent Consistency Interpolation Pipeline | Interpolate the latent space of Latent Consistency Models with multiple prompts | [Latent Consistency Interpolation Pipeline](#latent-consistency-interpolation-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pK3NrLWJSiJsBynLns1K1-IDTW9zbPvl?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
62
+ | SDE Drag Pipeline | The pipeline supports drag editing of images using stochastic differential equations | [SDE Drag Pipeline](#sde-drag-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/sde_drag.ipynb) | [NieShen](https://github.com/NieShenRuc) [Fengqi Zhu](https://github.com/Monohydroxides) |
63
  | Regional Prompting Pipeline | Assign multiple prompts for different regions | [Regional Prompting Pipeline](#regional-prompting-pipeline) | - | [hako-mikan](https://github.com/hako-mikan) |
64
  | LDM3D-sr (LDM3D upscaler) | Upscale low resolution RGB and depth inputs to high resolution | [StableDiffusionUpscaleLDM3D Pipeline](https://github.com/estelleafl/diffusers/tree/ldm3d_upscaler_community/examples/community#stablediffusionupscaleldm3d-pipeline) | - | [Estelle Aflalo](https://github.com/estelleafl) |
65
  | AnimateDiff ControlNet Pipeline | Combines AnimateDiff with precise motion control using ControlNets | [AnimateDiff ControlNet Pipeline](#animatediff-controlnet-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SKboYeGjEQmQPWoFC0aLYpBlYdHXkvAu?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) and [Edoardo Botta](https://github.com/EdoardoBotta) |
 
950
  Test seed resizing. Originally generate an image in 512 by 512, then generate image with same seed at 512 by 592 using seed resizing. Finally, generate 512 by 592 using original stable diffusion pipeline.
951
 
952
  ```python
953
+ import os
954
  import torch as th
955
  import numpy as np
956
  from diffusers import DiffusionPipeline
957
 
958
+ # Ensure the save directory exists or create it
959
+ save_dir = './seed_resize/'
960
+ os.makedirs(save_dir, exist_ok=True)
961
+
962
  has_cuda = th.cuda.is_available()
963
  device = th.device('cpu' if not has_cuda else 'cuda')
964
 
 
972
 
973
  pipe.safety_checker = dummy
974
 
 
975
  images = []
976
  th.manual_seed(0)
977
  generator = th.Generator("cuda").manual_seed(0)
 
990
  width=width,
991
  generator=generator)
992
  image = res.images[0]
993
+ image.save(os.path.join(save_dir, 'seed_resize_{w}_{h}_image.png'.format(w=width, h=height)))
 
994
 
995
  th.manual_seed(0)
996
  generator = th.Generator("cuda").manual_seed(0)
997
 
998
  pipe = DiffusionPipeline.from_pretrained(
999
  "CompVis/stable-diffusion-v1-4",
1000
+ custom_pipeline="seed_resize_stable_diffusion"
1001
  ).to(device)
1002
 
1003
  width = 512
 
1011
  width=width,
1012
  generator=generator)
1013
  image = res.images[0]
1014
+ image.save(os.path.join(save_dir, 'seed_resize_{w}_{h}_image.png'.format(w=width, h=height)))
1015
 
1016
  pipe_compare = DiffusionPipeline.from_pretrained(
1017
  "CompVis/stable-diffusion-v1-4",
1018
+ custom_pipeline="seed_resize_stable_diffusion"
1019
  ).to(device)
1020
 
1021
  res = pipe_compare(
 
1028
  )
1029
 
1030
  image = res.images[0]
1031
+ image.save(os.path.join(save_dir, 'seed_resize_{w}_{h}_image_compare.png'.format(w=width, h=height)))
1032
  ```
1033
 
1034
  ### Multilingual Stable Diffusion Pipeline
 
1548
  import torch
1549
  from diffusers import DiffusionPipeline
1550
  from PIL import Image
1551
+ import requests
1552
+ from io import BytesIO
1553
 
1554
  device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
1555
  dtype = torch.float16 if torch.cuda.is_available() else torch.bfloat16
 
1561
  )
1562
  pipe.to(device)
1563
 
1564
+ # List of image URLs
1565
+ image_urls = [
1566
+ 'https://camo.githubusercontent.com/ef13c8059b12947c0d5e8d3ea88900de6bf1cd76bbf61ace3928e824c491290e/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f4e616761536169416268696e61792f556e434c4950496d616765496e746572706f6c6174696f6e53616d706c65732f7265736f6c76652f6d61696e2f7374617272795f6e696768742e6a7067',
1567
+ 'https://camo.githubusercontent.com/d1947ab7c49ae3f550c28409d5e8b120df48e456559cf4557306c0848337702c/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f4e616761536169416268696e61792f556e434c4950496d616765496e746572706f6c6174696f6e53616d706c65732f7265736f6c76652f6d61696e2f666c6f776572732e6a7067'
1568
+ ]
1569
+
1570
+ # Open images from URLs
1571
+ images = []
1572
+ for url in image_urls:
1573
+ response = requests.get(url)
1574
+ img = Image.open(BytesIO(response.content))
1575
+ images.append(img)
1576
+
1577
  # For best results keep the prompts close in length to each other. Of course, feel free to try out with differing lengths.
1578
  generator = torch.Generator(device=device).manual_seed(42)
1579
 
1580
  output = pipe(image=images, steps=6, generator=generator)
1581
 
1582
+ for i, image in enumerate(output.images):
1583
  image.save('starry_to_flowers_%s.jpg' % i)
1584
  ```
1585
 
 
2404
 
2405
  ![image_mixing_result](https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/boromir_gigachad.png)
2406
 
2407
+ ### Stable Diffusion Mixture Tiling SD 1.5
2408
 
2409
  This pipeline uses the Mixture. Refer to the [Mixture](https://arxiv.org/abs/2302.02412) paper for more details.
2410
 
 
2435
 
2436
  ![mixture_tiling_results](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/mixture_tiling.png)
2437
 
2438
+ ### Stable Diffusion Mixture Canvas
2439
+
2440
+ This pipeline follows the Mixture of Diffusers approach. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.
2441
+
2442
+ ```python
2443
+ from PIL import Image
2444
+ from diffusers import LMSDiscreteScheduler, DiffusionPipeline
2445
+ from diffusers.pipelines.pipeline_utils import Image2ImageRegion, Text2ImageRegion, preprocess_image
2446
+
2447
+
2448
+ # Load and preprocess guide image
2449
+ iic_image = preprocess_image(Image.open("input_image.png").convert("RGB"))
2450
+
2451
+ # Create scheduler and model (similar to StableDiffusionPipeline)
2452
+ scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
2453
+ pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_canvas").to("cuda:0")
2454
+ pipeline.to("cuda")
2455
+
2456
+ # Mixture of Diffusers generation
2457
+ output = pipeline(
2458
+ canvas_height=800,
2459
+ canvas_width=352,
2460
+ regions=[
2461
+ Text2ImageRegion(0, 800, 0, 352, guidance_scale=8,
2462
+ prompt=f"best quality, masterpiece, WLOP, sakimichan, art contest winner on pixiv, 8K, intricate details, wet effects, rain drops, ethereal, mysterious, futuristic, UHD, HDR, cinematic lighting, in a beautiful forest, rainy day, award winning, trending on artstation, beautiful confident cheerful young woman, wearing a futuristic sleeveless dress, ultra beautiful detailed eyes, hyper-detailed face, complex, perfect, model, textured, chiaroscuro, professional make-up, realistic, figure in frame, "),
2463
+ Image2ImageRegion(800-352, 800, 0, 352, reference_image=iic_image, strength=1.0),
2464
+ ],
2465
+ num_inference_steps=100,
2466
+ seed=5525475061,
2467
+ )["images"][0]
2468
+ ```
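+
+ Region bounds are given in pixel space in the order `(row_init, row_end, col_init, col_end)`; the `Text2ImageRegion` above covers the full 800x352 canvas, while the `Image2ImageRegion` constrains part of it to the preprocessed guide image.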
2469
+
2470
+ ![Input_Image](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/input_image.png)
2471
+ ![mixture_canvas_results](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/canvas.png)
2472
+
2473
+ ### Stable Diffusion Mixture Tiling SDXL
2474
+
2475
+ This pipeline follows the Mixture of Diffusers approach. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.
2476
+
2477
+ ```python
2478
+ import torch
2479
+ from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler, AutoencoderKL
2480
+
2481
+ device="cuda"
2482
+
2483
+ # Load fixed vae (optional)
2484
+ vae = AutoencoderKL.from_pretrained(
2485
+ "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
2486
+ ).to(device)
2487
+
2488
+ # Create scheduler and model (similar to StableDiffusionPipeline)
2489
+ model_id="stablediffusionapi/yamermix-v8-vae"
2490
+ scheduler = DPMSolverMultistepScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
2491
+ pipe = DiffusionPipeline.from_pretrained(
2492
+ model_id,
2493
+ torch_dtype=torch.float16,
2494
+ vae=vae,
2495
+ custom_pipeline="mixture_tiling_sdxl",
2496
+ scheduler=scheduler,
2497
+ use_safetensors=False
2498
+ ).to(device)
2499
+
2500
+ pipe.enable_model_cpu_offload()
2501
+ pipe.enable_vae_tiling()
2502
+ pipe.enable_vae_slicing()
2503
+
2504
+ generator = torch.Generator(device).manual_seed(297984183)
2505
+
2506
+ # Mixture of Diffusers generation
2507
+ image = pipe(
2508
+ prompt=[[
2509
+ "A charming house in the countryside, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
2510
+ "A dirt road in the countryside crossing pastures, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
2511
+ "An old and rusty giant robot lying on a dirt road, by jakub rozalski, dark sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece"
2512
+ ]],
2513
+ tile_height=1024,
2514
+ tile_width=1280,
2515
+ tile_row_overlap=0,
2516
+ tile_col_overlap=256,
2517
+ guidance_scale_tiles=[[7, 7, 7]], # or pass guidance_scale=7 if it is the same for all prompts
2518
+ height=1024,
2519
+ width=3840,
2520
+ target_size=(1024, 3840),
2521
+ generator=generator,
2522
+ num_inference_steps=30,
2523
+ )["images"][0]
2524
+ ```
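+
+ The nested `prompt` list defines the tile grid: each inner list is one row of tiles, so the example above renders a single row of three 1024x1280 tiles that are blended horizontally across the 256-pixel column overlap.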
2525
+
2526
+ ![mixture_tiling_results](https://huggingface.co/datasets/elismasilva/results/resolve/main/mixture_sdxl.png)
2527
+
2528
  ### TensorRT Inpainting Stable Diffusion Pipeline
2529
 
2530
  The TensorRT Pipeline can be used to accelerate the Inpainting Stable Diffusion Inference run.
 
2567
  image.save('tensorrt_inpaint_mecha_robot.png')
2568
  ```
 
2570
  ### IADB pipeline
2571
 
2572
  This pipeline is the implementation of the [α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) paper.
 
3983
  See [paper](https://arxiv.org/abs/2311.01410), [paper page](https://ml-gsai.github.io/SDE-Drag-demo/), [original repo](https://github.com/ML-GSAI/SDE-Drag) for more information.
3984
 
3985
  ```py
 
3986
  import torch
3987
  from diffusers import DDIMScheduler, DiffusionPipeline
3988
+ from PIL import Image
3989
+ import requests
3990
+ from io import BytesIO
3991
+ import numpy as np
3992
 
3993
  # Load the pipeline
3994
  model_path = "stable-diffusion-v1-5/stable-diffusion-v1-5"
3995
  scheduler = DDIMScheduler.from_pretrained(model_path, subfolder="scheduler")
3996
  pipe = DiffusionPipeline.from_pretrained(model_path, scheduler=scheduler, custom_pipeline="sde_drag")
 
3997
 
3998
+ # Move the pipeline to the GPU if one is available
3999
+ device = "cuda" if torch.cuda.is_available() else "cpu"
4000
+ pipe.to(device)
4001
+
4002
+ # Function to load image from URL
4003
+ def load_image_from_url(url):
4004
+ response = requests.get(url)
4005
+ return Image.open(BytesIO(response.content)).convert("RGB")
4006
+
4007
+ # Function to prepare mask
4008
+ def prepare_mask(mask_image):
4009
+ # Convert to grayscale
4010
+ mask = mask_image.convert("L")
4011
+ return mask
4012
+
4013
+ # Function to convert numpy array to PIL Image
4014
+ def array_to_pil(array):
4015
+ # Ensure the array is in uint8 format
4016
+ if array.dtype != np.uint8:
4017
+ if array.max() <= 1.0:
4018
+ array = (array * 255).astype(np.uint8)
4019
+ else:
4020
+ array = array.astype(np.uint8)
4021
+
4022
+ # Handle different array shapes
4023
+ if len(array.shape) == 3:
4024
+ if array.shape[0] == 3: # If channels first
4025
+ array = array.transpose(1, 2, 0)
4026
+ return Image.fromarray(array)
4027
+ elif len(array.shape) == 4: # If batch dimension
4028
+ array = array[0]
4029
+ if array.shape[0] == 3: # If channels first
4030
+ array = array.transpose(1, 2, 0)
4031
+ return Image.fromarray(array)
4032
+ else:
4033
+ raise ValueError(f"Unexpected array shape: {array.shape}")
4034
 
4035
+ # Image and mask URLs
4036
+ image_url = 'https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png'
4037
+ mask_url = 'https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png'
 
 
 
4038
 
4039
+ # Load the images
4040
+ image = load_image_from_url(image_url)
4041
+ mask_image = load_image_from_url(mask_url)
4042
 
4043
+ # Resize images to a size that's compatible with the model's latent space
4044
+ image = image.resize((512, 512))
4045
+ mask_image = mask_image.resize((512, 512))
4046
+
4047
+ # Prepare the mask (keep as PIL Image)
4048
+ mask = prepare_mask(mask_image)
4049
+
4050
+ # Provide the prompt and points for drag editing
4051
+ prompt = "A cute dog"
4052
+ source_points = [[32, 32]] # Adjusted for 512x512 image
4053
+ target_points = [[64, 64]] # Adjusted for 512x512 image
4054
+
4055
+ # Generate the output image
4056
+ output_array = pipe(
4057
+ prompt=prompt,
4058
+ image=image,
4059
+ mask_image=mask,
4060
+ source_points=source_points,
4061
+ target_points=target_points
4062
+ )
4063
+
4064
+ # Convert output array to PIL Image and save
4065
+ output_image = array_to_pil(output_array)
4066
  output_image.save("./output.png")
4067
+ print("Output image saved as './output.png'")
4068
+
4069
  ```
4070
 
4071
  ### Instaflow Pipeline
main/mixture_tiling_sdxl.py ADDED
@@ -0,0 +1,1185 @@
 
 
1
+ # Copyright 2024 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import inspect
16
+ from enum import Enum
17
+ from typing import Any, Dict, List, Optional, Tuple, Union
18
+
19
+ import torch
20
+ from transformers import (
21
+ CLIPTextModel,
22
+ CLIPTextModelWithProjection,
23
+ CLIPTokenizer,
24
+ )
25
+
26
+ from diffusers.image_processor import VaeImageProcessor
27
+ from diffusers.loaders import (
28
+ FromSingleFileMixin,
29
+ StableDiffusionXLLoraLoaderMixin,
30
+ TextualInversionLoaderMixin,
31
+ )
32
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
33
+ from diffusers.models.attention_processor import (
34
+ AttnProcessor2_0,
35
+ FusedAttnProcessor2_0,
36
+ XFormersAttnProcessor,
37
+ )
38
+ from diffusers.models.lora import adjust_lora_scale_text_encoder
39
+ from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin
40
+ from diffusers.pipelines.stable_diffusion_xl.pipeline_output import StableDiffusionXLPipelineOutput
41
+ from diffusers.schedulers import KarrasDiffusionSchedulers, LMSDiscreteScheduler
42
+ from diffusers.utils import (
43
+ USE_PEFT_BACKEND,
44
+ is_invisible_watermark_available,
45
+ is_torch_xla_available,
46
+ logging,
47
+ replace_example_docstring,
48
+ scale_lora_layers,
49
+ unscale_lora_layers,
50
+ )
51
+ from diffusers.utils.torch_utils import randn_tensor
52
+
53
+
54
+ try:
55
+ from ligo.segments import segment
56
+ except ImportError:
57
+ raise ImportError("Please install transformers and ligo-segments to use the mixture pipeline")
58
+
59
+ if is_invisible_watermark_available():
60
+ from diffusers.pipelines.stable_diffusion_xl.watermark import StableDiffusionXLWatermarker
61
+
62
+ if is_torch_xla_available():
63
+ import torch_xla.core.xla_model as xm
64
+
65
+ XLA_AVAILABLE = True
66
+ else:
67
+ XLA_AVAILABLE = False
68
+
69
+
70
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
71
+
72
+ EXAMPLE_DOC_STRING = """
73
+ Examples:
74
+ ```py
75
+ >>> import torch
76
+ >>> from diffusers import StableDiffusionXLPipeline
77
+
78
+ >>> pipe = StableDiffusionXLPipeline.from_pretrained(
79
+ ... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
80
+ ... )
81
+ >>> pipe = pipe.to("cuda")
82
+
83
+ >>> prompt = "a photo of an astronaut riding a horse on mars"
84
+ >>> image = pipe(prompt).images[0]
85
+ ```
86
+ """
87
+
88
+
89
+ def _tile2pixel_indices(tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap):
90
+ """Given a tile row and column numbers returns the range of pixels affected by that tiles in the overall image
91
+
92
+ Returns a tuple with:
93
+ - Starting coordinates of rows in pixel space
94
+ - Ending coordinates of rows in pixel space
95
+ - Starting coordinates of columns in pixel space
96
+ - Ending coordinates of columns in pixel space
97
+ """
98
+ px_row_init = 0 if tile_row == 0 else tile_row * (tile_height - tile_row_overlap)
99
+ px_row_end = px_row_init + tile_height
100
+ px_col_init = 0 if tile_col == 0 else tile_col * (tile_width - tile_col_overlap)
101
+ px_col_end = px_col_init + tile_width
102
+ return px_row_init, px_row_end, px_col_init, px_col_end
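+
+ # Illustrative check (hypothetical values): with tile_width=1280, tile_height=1024,
+ # tile_row_overlap=0 and tile_col_overlap=256, tile (row=0, col=1) spans pixel rows
+ # 0..1024 and pixel columns 1024..2304:
+ # _tile2pixel_indices(0, 1, 1280, 1024, 0, 256) == (0, 1024, 1024, 2304)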
103
+
104
+
105
+ def _pixel2latent_indices(px_row_init, px_row_end, px_col_init, px_col_end):
106
+ """Translates coordinates in pixel space to coordinates in latent space"""
107
+ return px_row_init // 8, px_row_end // 8, px_col_init // 8, px_col_end // 8
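+ # e.g. the pixel box (0, 1024, 1024, 2304) maps to the latent box (0, 128, 128, 288),
+ # assuming the usual 8x spatial downsampling of the VAE.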
108
+
109
+
110
+ def _tile2latent_indices(tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap):
111
+ """Given a tile row and column numbers returns the range of latents affected by that tiles in the overall image
112
+
113
+ Returns a tuple with:
114
+ - Starting coordinates of rows in latent space
115
+ - Ending coordinates of rows in latent space
116
+ - Starting coordinates of columns in latent space
117
+ - Ending coordinates of columns in latent space
118
+ """
119
+ px_row_init, px_row_end, px_col_init, px_col_end = _tile2pixel_indices(
120
+ tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
121
+ )
122
+ return _pixel2latent_indices(px_row_init, px_row_end, px_col_init, px_col_end)
123
+
124
+
125
+ def _tile2latent_exclusive_indices(
126
+ tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap, rows, columns
127
+ ):
128
+ """Given a tile row and column numbers returns the range of latents affected only by that tile in the overall image
129
+
130
+ Returns a tuple with:
131
+ - Starting coordinates of rows in latent space
132
+ - Ending coordinates of rows in latent space
133
+ - Starting coordinates of columns in latent space
134
+ - Ending coordinates of columns in latent space
135
+ """
136
+ row_init, row_end, col_init, col_end = _tile2latent_indices(
137
+ tile_row, tile_col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
138
+ )
139
+ row_segment = segment(row_init, row_end)
140
+ col_segment = segment(col_init, col_end)
141
+ # Iterate over the rest of tiles, clipping the region for the current tile
142
+ for row in range(rows):
143
+ for column in range(columns):
144
+ if row != tile_row and column != tile_col:
145
+ clip_row_init, clip_row_end, clip_col_init, clip_col_end = _tile2latent_indices(
146
+ row, column, tile_width, tile_height, tile_row_overlap, tile_col_overlap
147
+ )
148
+ row_segment = row_segment - segment(clip_row_init, clip_row_end)
149
+ col_segment = col_segment - segment(clip_col_init, clip_col_end)
150
+ # return row_init, row_end, col_init, col_end
151
+ return row_segment[0], row_segment[1], col_segment[0], col_segment[1]
152
+
153
+
154
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
155
+ def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
156
+ r"""
157
+ Rescales `noise_cfg` tensor based on `guidance_rescale` to improve image quality and fix overexposure. Based on
158
+ Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
159
+ Flawed](https://arxiv.org/pdf/2305.08891.pdf).
160
+
161
+ Args:
162
+ noise_cfg (`torch.Tensor`):
163
+ The predicted noise tensor for the guided diffusion process.
164
+ noise_pred_text (`torch.Tensor`):
165
+ The predicted noise tensor for the text-guided diffusion process.
166
+ guidance_rescale (`float`, *optional*, defaults to 0.0):
167
+ A rescale factor applied to the noise predictions.
168
+
169
+ Returns:
170
+ noise_cfg (`torch.Tensor`): The rescaled noise prediction tensor.
171
+ """
172
+ std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
173
+ std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
174
+ # rescale the results from guidance (fixes overexposure)
175
+ noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
176
+ # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
177
+ noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
178
+ return noise_cfg
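+
+ # e.g. with guidance_rescale=0.7 the result is 0.7 * noise_pred_rescaled + 0.3 * noise_cfg,
+ # while guidance_rescale=0.0 returns the guided prediction unchanged.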
179
+
180
+
181
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps
182
+ def retrieve_timesteps(
183
+ scheduler,
184
+ num_inference_steps: Optional[int] = None,
185
+ device: Optional[Union[str, torch.device]] = None,
186
+ timesteps: Optional[List[int]] = None,
187
+ sigmas: Optional[List[float]] = None,
188
+ **kwargs,
189
+ ):
190
+ r"""
191
+ Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles
192
+ custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`.
193
+
194
+ Args:
195
+ scheduler (`SchedulerMixin`):
196
+ The scheduler to get timesteps from.
197
+ num_inference_steps (`int`):
198
+ The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps`
199
+ must be `None`.
200
+ device (`str` or `torch.device`, *optional*):
201
+ The device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
202
+ timesteps (`List[int]`, *optional*):
203
+ Custom timesteps used to override the timestep spacing strategy of the scheduler. If `timesteps` is passed,
204
+ `num_inference_steps` and `sigmas` must be `None`.
205
+ sigmas (`List[float]`, *optional*):
206
+ Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed,
207
+ `num_inference_steps` and `timesteps` must be `None`.
208
+
209
+ Returns:
210
+ `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the
211
+ second element is the number of inference steps.
212
+ """
213
+
214
+ if timesteps is not None and sigmas is not None:
215
+ raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values")
216
+ if timesteps is not None:
217
+ accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys())
218
+ if not accepts_timesteps:
219
+ raise ValueError(
220
+ f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom"
221
+ f" timestep schedules. Please check whether you are using the correct scheduler."
222
+ )
223
+ scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs)
224
+ timesteps = scheduler.timesteps
225
+ num_inference_steps = len(timesteps)
226
+ elif sigmas is not None:
227
+ accept_sigmas = "sigmas" in set(inspect.signature(scheduler.set_timesteps).parameters.keys())
228
+ if not accept_sigmas:
229
+ raise ValueError(
230
+ f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom"
231
+ f" sigmas schedules. Please check whether you are using the correct scheduler."
232
+ )
233
+ scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs)
234
+ timesteps = scheduler.timesteps
235
+ num_inference_steps = len(timesteps)
236
+ else:
237
+ scheduler.set_timesteps(num_inference_steps, device=device, **kwargs)
238
+ timesteps = scheduler.timesteps
239
+ return timesteps, num_inference_steps
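+
+ # Illustrative usage (assumed scheduler/device):
+ # timesteps, num_inference_steps = retrieve_timesteps(scheduler, num_inference_steps=30, device="cuda")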
240
+
241
+
242
+ class StableDiffusionXLTilingPipeline(
243
+ DiffusionPipeline,
244
+ StableDiffusionMixin,
245
+ FromSingleFileMixin,
246
+ StableDiffusionXLLoraLoaderMixin,
247
+ TextualInversionLoaderMixin,
248
+ ):
249
+ r"""
250
+ Pipeline for text-to-image generation using Stable Diffusion XL.
251
+
252
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
253
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
254
+
255
+ The pipeline also inherits the following loading methods:
256
+ - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings
257
+ - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files
258
+ - [`~loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights`] for loading LoRA weights
259
+ - [`~loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights`] for saving LoRA weights
260
+
261
+ Args:
262
+ vae ([`AutoencoderKL`]):
263
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
264
+ text_encoder ([`CLIPTextModel`]):
265
+ Frozen text-encoder. Stable Diffusion XL uses the text portion of
266
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
267
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
268
+ text_encoder_2 ([` CLIPTextModelWithProjection`]):
269
+ Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
270
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
271
+ specifically the
272
+ [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
273
+ variant.
274
+ tokenizer (`CLIPTokenizer`):
275
+ Tokenizer of class
276
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
277
+ tokenizer_2 (`CLIPTokenizer`):
278
+ Second Tokenizer of class
279
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
280
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
281
+ scheduler ([`SchedulerMixin`]):
282
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
283
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
284
+ force_zeros_for_empty_prompt (`bool`, *optional*, defaults to `"True"`):
285
+ Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
286
+ `stabilityai/stable-diffusion-xl-base-1-0`.
287
+ add_watermarker (`bool`, *optional*):
288
+ Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
289
+ watermark output images. If not defined, it will default to True if the package is installed, otherwise no
290
+ watermarker will be used.
291
+ """
292
+
293
+ model_cpu_offload_seq = "text_encoder->text_encoder_2->image_encoder->unet->vae"
294
+ _optional_components = [
295
+ "tokenizer",
296
+ "tokenizer_2",
297
+ "text_encoder",
298
+ "text_encoder_2",
299
+ ]
300
+
301
+ def __init__(
302
+ self,
303
+ vae: AutoencoderKL,
304
+ text_encoder: CLIPTextModel,
305
+ text_encoder_2: CLIPTextModelWithProjection,
306
+ tokenizer: CLIPTokenizer,
307
+ tokenizer_2: CLIPTokenizer,
308
+ unet: UNet2DConditionModel,
309
+ scheduler: KarrasDiffusionSchedulers,
310
+ force_zeros_for_empty_prompt: bool = True,
311
+ add_watermarker: Optional[bool] = None,
312
+ ):
313
+ super().__init__()
314
+
315
+ self.register_modules(
316
+ vae=vae,
317
+ text_encoder=text_encoder,
318
+ text_encoder_2=text_encoder_2,
319
+ tokenizer=tokenizer,
320
+ tokenizer_2=tokenizer_2,
321
+ unet=unet,
322
+ scheduler=scheduler,
323
+ )
324
+ self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt)
325
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8
326
+ self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
327
+
328
+ self.default_sample_size = (
329
+ self.unet.config.sample_size
330
+ if hasattr(self, "unet") and self.unet is not None and hasattr(self.unet.config, "sample_size")
331
+ else 128
332
+ )
333
+
334
+ add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available()
335
+
336
+ if add_watermarker:
337
+ self.watermark = StableDiffusionXLWatermarker()
338
+ else:
339
+ self.watermark = None
340
+
341
+ class SeedTilesMode(Enum):
342
+ """Modes in which the latents of a particular tile can be re-seeded"""
343
+
344
+ FULL = "full"
345
+ EXCLUSIVE = "exclusive"
346
+
347
+ def encode_prompt(
348
+ self,
349
+ prompt: str,
350
+ prompt_2: Optional[str] = None,
351
+ device: Optional[torch.device] = None,
352
+ num_images_per_prompt: int = 1,
353
+ do_classifier_free_guidance: bool = True,
354
+ negative_prompt: Optional[str] = None,
355
+ negative_prompt_2: Optional[str] = None,
356
+ prompt_embeds: Optional[torch.Tensor] = None,
357
+ negative_prompt_embeds: Optional[torch.Tensor] = None,
358
+ pooled_prompt_embeds: Optional[torch.Tensor] = None,
359
+ negative_pooled_prompt_embeds: Optional[torch.Tensor] = None,
360
+ lora_scale: Optional[float] = None,
361
+ clip_skip: Optional[int] = None,
362
+ ):
363
+ r"""
364
+ Encodes the prompt into text encoder hidden states.
365
+
366
+ Args:
367
+ prompt (`str` or `List[str]`, *optional*):
368
+ prompt to be encoded
369
+ prompt_2 (`str` or `List[str]`, *optional*):
370
+ The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
371
+ used in both text-encoders
372
+ device: (`torch.device`):
373
+ torch device
374
+ num_images_per_prompt (`int`):
375
+ number of images that should be generated per prompt
376
+ do_classifier_free_guidance (`bool`):
377
+ whether to use classifier free guidance or not
378
+ negative_prompt (`str` or `List[str]`, *optional*):
379
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
380
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
381
+ less than `1`).
382
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
383
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
384
+ `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
385
+ prompt_embeds (`torch.Tensor`, *optional*):
386
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
387
+ provided, text embeddings will be generated from `prompt` input argument.
388
+ negative_prompt_embeds (`torch.Tensor`, *optional*):
389
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
390
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
391
+ argument.
392
+ pooled_prompt_embeds (`torch.Tensor`, *optional*):
393
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
394
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
395
+ negative_pooled_prompt_embeds (`torch.Tensor`, *optional*):
396
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
397
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
398
+ input argument.
399
+ lora_scale (`float`, *optional*):
400
+ A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
401
+ clip_skip (`int`, *optional*):
402
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
403
+ the output of the pre-final layer will be used for computing the prompt embeddings.
404
+ """
405
+ device = device or self._execution_device
406
+
407
+ # set lora scale so that monkey patched LoRA
408
+ # function of text encoder can correctly access it
409
+ if lora_scale is not None and isinstance(self, StableDiffusionXLLoraLoaderMixin):
410
+ self._lora_scale = lora_scale
411
+
412
+ # dynamically adjust the LoRA scale
413
+ if self.text_encoder is not None:
414
+ if not USE_PEFT_BACKEND:
415
+ adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
416
+ else:
417
+ scale_lora_layers(self.text_encoder, lora_scale)
418
+
419
+ if self.text_encoder_2 is not None:
420
+ if not USE_PEFT_BACKEND:
421
+ adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale)
422
+ else:
423
+ scale_lora_layers(self.text_encoder_2, lora_scale)
424
+
425
+ prompt = [prompt] if isinstance(prompt, str) else prompt
426
+
427
+ if prompt is not None:
428
+ batch_size = len(prompt)
429
+ else:
430
+ batch_size = prompt_embeds.shape[0]
431
+
432
+ # Define tokenizers and text encoders
433
+ tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2]
434
+ text_encoders = (
435
+ [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
436
+ )
437
+
438
+ if prompt_embeds is None:
439
+ prompt_2 = prompt_2 or prompt
440
+ prompt_2 = [prompt_2] if isinstance(prompt_2, str) else prompt_2
441
+
442
+ # textual inversion: process multi-vector tokens if necessary
443
+ prompt_embeds_list = []
444
+ prompts = [prompt, prompt_2]
445
+ for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders):
446
+ if isinstance(self, TextualInversionLoaderMixin):
447
+ prompt = self.maybe_convert_prompt(prompt, tokenizer)
448
+
449
+ text_inputs = tokenizer(
450
+ prompt,
451
+ padding="max_length",
452
+ max_length=tokenizer.model_max_length,
453
+ truncation=True,
454
+ return_tensors="pt",
455
+ )
456
+
457
+ text_input_ids = text_inputs.input_ids
458
+ untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
459
+
460
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
461
+ text_input_ids, untruncated_ids
462
+ ):
463
+ removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1])
464
+ logger.warning(
465
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
466
+ f" {tokenizer.model_max_length} tokens: {removed_text}"
467
+ )
468
+
469
+ prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
470
+
471
+ # We are only ALWAYS interested in the pooled output of the final text encoder
472
+ if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2:
473
+ pooled_prompt_embeds = prompt_embeds[0]
474
+
475
+ if clip_skip is None:
476
+ prompt_embeds = prompt_embeds.hidden_states[-2]
477
+ else:
478
+ # "2" because SDXL always indexes from the penultimate layer.
479
+ prompt_embeds = prompt_embeds.hidden_states[-(clip_skip + 2)]
480
+
481
+ prompt_embeds_list.append(prompt_embeds)
482
+
483
+ prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
484
+
485
+ # get unconditional embeddings for classifier free guidance
486
+ zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt
487
+ if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt:
488
+ negative_prompt_embeds = torch.zeros_like(prompt_embeds)
489
+ negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
490
+ elif do_classifier_free_guidance and negative_prompt_embeds is None:
491
+ negative_prompt = negative_prompt or ""
492
+ negative_prompt_2 = negative_prompt_2 or negative_prompt
493
+
494
+ # normalize str to list
495
+ negative_prompt = batch_size * [negative_prompt] if isinstance(negative_prompt, str) else negative_prompt
496
+ negative_prompt_2 = (
497
+ batch_size * [negative_prompt_2] if isinstance(negative_prompt_2, str) else negative_prompt_2
498
+ )
499
+
500
+ uncond_tokens: List[str]
501
+ if prompt is not None and type(prompt) is not type(negative_prompt):
502
+ raise TypeError(
503
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
504
+ f" {type(prompt)}."
505
+ )
506
+ elif batch_size != len(negative_prompt):
507
+ raise ValueError(
508
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
509
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
510
+ " the batch size of `prompt`."
511
+ )
512
+ else:
513
+ uncond_tokens = [negative_prompt, negative_prompt_2]
514
+
515
+ negative_prompt_embeds_list = []
516
+ for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders):
517
+ if isinstance(self, TextualInversionLoaderMixin):
518
+ negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer)
519
+
520
+ max_length = prompt_embeds.shape[1]
521
+ uncond_input = tokenizer(
522
+ negative_prompt,
523
+ padding="max_length",
524
+ max_length=max_length,
525
+ truncation=True,
526
+ return_tensors="pt",
527
+ )
528
+
529
+ negative_prompt_embeds = text_encoder(
530
+ uncond_input.input_ids.to(device),
531
+ output_hidden_states=True,
532
+ )
533
+
534
+ # We are only ALWAYS interested in the pooled output of the final text encoder
535
+ if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2:
536
+ negative_pooled_prompt_embeds = negative_prompt_embeds[0]
537
+ negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
538
+
539
+ negative_prompt_embeds_list.append(negative_prompt_embeds)
540
+
541
+ negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1)
542
+
543
+ if self.text_encoder_2 is not None:
544
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
545
+ else:
546
+ prompt_embeds = prompt_embeds.to(dtype=self.unet.dtype, device=device)
547
+
548
+ bs_embed, seq_len, _ = prompt_embeds.shape
549
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
550
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
551
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
552
+
553
+ if do_classifier_free_guidance:
554
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
555
+ seq_len = negative_prompt_embeds.shape[1]
556
+
557
+ if self.text_encoder_2 is not None:
558
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
559
+ else:
560
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.unet.dtype, device=device)
561
+
562
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
563
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
564
+
565
+ pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
566
+ bs_embed * num_images_per_prompt, -1
567
+ )
568
+ if do_classifier_free_guidance:
569
+ negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
570
+ bs_embed * num_images_per_prompt, -1
571
+ )
572
+
573
+ if self.text_encoder is not None:
574
+ if isinstance(self, StableDiffusionXLLoraLoaderMixin) and USE_PEFT_BACKEND:
575
+ # Retrieve the original scale by scaling back the LoRA layers
576
+ unscale_lora_layers(self.text_encoder, lora_scale)
577
+
578
+ if self.text_encoder_2 is not None:
579
+ if isinstance(self, StableDiffusionXLLoraLoaderMixin) and USE_PEFT_BACKEND:
580
+ # Retrieve the original scale by scaling back the LoRA layers
581
+ unscale_lora_layers(self.text_encoder_2, lora_scale)
582
+
583
+ return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
584
+
585
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
586
+ def prepare_extra_step_kwargs(self, generator, eta):
587
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
588
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
589
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
590
+ # and should be between [0, 1]
591
+
592
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
593
+ extra_step_kwargs = {}
594
+ if accepts_eta:
595
+ extra_step_kwargs["eta"] = eta
596
+
597
+ # check if the scheduler accepts generator
598
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
599
+ if accepts_generator:
600
+ extra_step_kwargs["generator"] = generator
601
+ return extra_step_kwargs
602
+
603
+ def check_inputs(self, prompt, height, width, grid_cols, seed_tiles_mode, tiles_mode):
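+ # `prompt` is expected to be a grid of per-tile prompts: a list of rows, each row a list with `grid_cols` entries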
604
+ if height % 8 != 0 or width % 8 != 0:
605
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
606
+
607
+ if prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
608
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
609
+
610
+ if not isinstance(prompt, list) or not all(isinstance(row, list) for row in prompt):
611
+ raise ValueError(f"`prompt` has to be a list of lists but is {type(prompt)}")
612
+
613
+ if not all(len(row) == grid_cols for row in prompt):
614
+ raise ValueError("All prompt rows must have the same number of prompt columns")
615
+
616
+ if not isinstance(seed_tiles_mode, str) and (
617
+ not isinstance(seed_tiles_mode, list) or not all(isinstance(row, list) for row in seed_tiles_mode)
618
+ ):
619
+ raise ValueError(f"`seed_tiles_mode` has to be a string or list of lists but is {type(prompt)}")
620
+
621
+ if any(mode not in tiles_mode for row in seed_tiles_mode for mode in row):
622
+ raise ValueError(f"Seed tiles mode must be one of {tiles_mode}")
623
+
624
+ def _get_add_time_ids(
625
+ self, original_size, crops_coords_top_left, target_size, dtype, text_encoder_projection_dim=None
626
+ ):
627
+ add_time_ids = list(original_size + crops_coords_top_left + target_size)
628
+
629
+ passed_add_embed_dim = (
630
+ self.unet.config.addition_time_embed_dim * len(add_time_ids) + text_encoder_projection_dim
631
+ )
632
+ expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
633
+
634
+ if expected_add_embed_dim != passed_add_embed_dim:
635
+ raise ValueError(
636
+ f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
637
+ )
638
+
639
+ add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
640
+ return add_time_ids
641
+
642
+ def _gaussian_weights(self, tile_width, tile_height, nbatches, device, dtype):
643
+ """Generates a gaussian mask of weights for tile contributions"""
644
+ import numpy as np
645
+ from numpy import exp, pi, sqrt
646
+
647
+ latent_width = tile_width // 8
648
+ latent_height = tile_height // 8
649
+
650
+ var = 0.01
651
+ midpoint = (latent_width - 1) / 2 # -1 because index goes from 0 to latent_width - 1
652
+ x_probs = [
653
+ exp(-(x - midpoint) * (x - midpoint) / (latent_width * latent_width) / (2 * var)) / sqrt(2 * pi * var)
654
+ for x in range(latent_width)
655
+ ]
656
+ midpoint = latent_height / 2
657
+ y_probs = [
658
+ exp(-(y - midpoint) * (y - midpoint) / (latent_height * latent_height) / (2 * var)) / sqrt(2 * pi * var)
659
+ for y in range(latent_height)
660
+ ]
661
+
662
+ weights_np = np.outer(y_probs, x_probs)
663
+ weights_torch = torch.tensor(weights_np, device=device)
664
+ weights_torch = weights_torch.to(dtype)
665
+ return torch.tile(weights_torch, (nbatches, self.unet.config.in_channels, 1, 1))
666
+
667
+ def upcast_vae(self):
668
+ dtype = self.vae.dtype
669
+ self.vae.to(dtype=torch.float32)
670
+ use_torch_2_0_or_xformers = isinstance(
671
+ self.vae.decoder.mid_block.attentions[0].processor,
672
+ (
673
+ AttnProcessor2_0,
674
+ XFormersAttnProcessor,
675
+ FusedAttnProcessor2_0,
676
+ ),
677
+ )
678
+ # if xformers or torch_2_0 is used attention block does not need
679
+ # to be in float32 which can save lots of memory
680
+ if use_torch_2_0_or_xformers:
681
+ self.vae.post_quant_conv.to(dtype)
682
+ self.vae.decoder.conv_in.to(dtype)
683
+ self.vae.decoder.mid_block.to(dtype)
684
+
685
+ # Copied from diffusers.pipelines.latent_consistency_models.pipeline_latent_consistency_text2img.LatentConsistencyModelPipeline.get_guidance_scale_embedding
686
+ def get_guidance_scale_embedding(
687
+ self, w: torch.Tensor, embedding_dim: int = 512, dtype: torch.dtype = torch.float32
688
+ ) -> torch.Tensor:
689
+ """
690
+ See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
691
+
692
+ Args:
693
+ w (`torch.Tensor`):
694
+ Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
695
+ embedding_dim (`int`, *optional*, defaults to 512):
696
+ Dimension of the embeddings to generate.
697
+ dtype (`torch.dtype`, *optional*, defaults to `torch.float32`):
698
+ Data type of the generated embeddings.
699
+
700
+ Returns:
701
+ `torch.Tensor`: Embedding vectors with shape `(len(w), embedding_dim)`.
702
+ """
703
+ assert len(w.shape) == 1
704
+ w = w * 1000.0
705
+
706
+ half_dim = embedding_dim // 2
707
+ emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1)
708
+ emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb)
709
+ emb = w.to(dtype)[:, None] * emb[None, :]
710
+ emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
711
+ if embedding_dim % 2 == 1: # zero pad
712
+ emb = torch.nn.functional.pad(emb, (0, 1))
713
+ assert emb.shape == (w.shape[0], embedding_dim)
714
+ return emb
715
+
716
+ @property
717
+ def guidance_scale(self):
718
+ return self._guidance_scale
719
+
720
+ @property
721
+ def clip_skip(self):
722
+ return self._clip_skip
723
+
724
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
725
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
726
+ # corresponds to doing no classifier free guidance.
727
+ @property
728
+ def do_classifier_free_guidance(self):
729
+ return self._guidance_scale > 1 and self.unet.config.time_cond_proj_dim is None
730
+
731
+ @property
732
+ def cross_attention_kwargs(self):
733
+ return self._cross_attention_kwargs
734
+
735
+ @property
736
+ def num_timesteps(self):
737
+ return self._num_timesteps
738
+
739
+ @property
740
+ def interrupt(self):
741
+ return self._interrupt
742
+
743
+ @torch.no_grad()
744
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
745
+ def __call__(
746
+ self,
747
+ prompt: Union[str, List[str]] = None,
748
+ height: Optional[int] = None,
749
+ width: Optional[int] = None,
750
+ num_inference_steps: int = 50,
751
+ guidance_scale: float = 5.0,
752
+ negative_prompt: Optional[Union[str, List[str]]] = None,
753
+ num_images_per_prompt: Optional[int] = 1,
754
+ eta: float = 0.0,
755
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
756
+ output_type: Optional[str] = "pil",
757
+ return_dict: bool = True,
758
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
759
+ original_size: Optional[Tuple[int, int]] = None,
760
+ crops_coords_top_left: Tuple[int, int] = (0, 0),
761
+ target_size: Optional[Tuple[int, int]] = None,
762
+ negative_original_size: Optional[Tuple[int, int]] = None,
763
+ negative_crops_coords_top_left: Tuple[int, int] = (0, 0),
764
+ negative_target_size: Optional[Tuple[int, int]] = None,
765
+ clip_skip: Optional[int] = None,
766
+ tile_height: Optional[int] = 1024,
767
+ tile_width: Optional[int] = 1024,
768
+ tile_row_overlap: Optional[int] = 128,
769
+ tile_col_overlap: Optional[int] = 128,
770
+ guidance_scale_tiles: Optional[List[List[float]]] = None,
771
+ seed_tiles: Optional[List[List[int]]] = None,
772
+ seed_tiles_mode: Optional[Union[str, List[List[str]]]] = "full",
773
+ seed_reroll_regions: Optional[List[Tuple[int, int, int, int, int]]] = None,
774
+ **kwargs,
775
+ ):
776
+ r"""
777
+ Function invoked when calling the pipeline for generation.
778
+
779
+ Args:
780
+ prompt (`str` or `List[str]`, *optional*):
781
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
782
+ instead.
783
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
784
+ The height in pixels of the generated image. This is set to 1024 by default for the best results.
785
+ Anything below 512 pixels won't work well for
786
+ [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
787
+ and checkpoints that are not specifically fine-tuned on low resolutions.
788
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
789
+ The width in pixels of the generated image. This is set to 1024 by default for the best results.
790
+ Anything below 512 pixels won't work well for
791
+ [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
792
+ and checkpoints that are not specifically fine-tuned on low resolutions.
793
+ num_inference_steps (`int`, *optional*, defaults to 50):
794
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
795
+ expense of slower inference.
796
+ guidance_scale (`float`, *optional*, defaults to 5.0):
797
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
798
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
799
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
800
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
801
+ usually at the expense of lower image quality.
802
+ negative_prompt (`str` or `List[str]`, *optional*):
803
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
804
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
805
+ less than `1`).
806
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
807
+ The number of images to generate per prompt.
808
+ eta (`float`, *optional*, defaults to 0.0):
809
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
810
+ [`schedulers.DDIMScheduler`], will be ignored for others.
811
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
812
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
813
+ to make generation deterministic.
814
+ output_type (`str`, *optional*, defaults to `"pil"`):
815
+ The output format of the generated image. Choose between
816
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
817
+ return_dict (`bool`, *optional*, defaults to `True`):
818
+ Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead
819
+ of a plain tuple.
820
+ cross_attention_kwargs (`dict`, *optional*):
821
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
822
+ `self.processor` in
823
+ [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
824
+ original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
825
+ If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
826
+ `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
827
+ explained in section 2.2 of
828
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
829
+ crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
830
+ `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
831
+ `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
832
+ `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
833
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
834
+ target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
835
+ For most cases, `target_size` should be set to the desired height and width of the generated image. If
836
+ not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
837
+ section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
838
+ negative_original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
839
+ To negatively condition the generation process based on a specific image resolution. Part of SDXL's
840
+ micro-conditioning as explained in section 2.2 of
841
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
842
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
843
+ negative_crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
844
+ To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
845
+ micro-conditioning as explained in section 2.2 of
846
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
847
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
848
+ negative_target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
849
+ To negatively condition the generation process based on a target image resolution. It should be the same
850
+ as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
851
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
852
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
853
+ tile_height (`int`, *optional*, defaults to 1024):
854
+ Height of each grid tile in pixels.
855
+ tile_width (`int`, *optional*, defaults to 1024):
856
+ Width of each grid tile in pixels.
857
+ tile_row_overlap (`int`, *optional*, defaults to 128):
858
+ Number of overlapping pixels between tiles in consecutive rows.
859
+ tile_col_overlap (`int`, *optional*, defaults to 128):
860
+ Number of overlapping pixels between tiles in consecutive columns.
861
+ guidance_scale_tiles (`List[List[float]]`, *optional*):
862
+ Specific weights for classifier-free guidance in each tile. If `None`, the value provided in `guidance_scale` will be used.
863
+ seed_tiles (`List[List[int]]`, *optional*):
864
+ Specific seeds for the initialization latents in each tile. These will override the latents generated for the whole canvas using the standard `generator` parameter.
865
+ seed_tiles_mode (`Union[str, List[List[str]]]`, *optional*, defaults to `"full"`):
866
+ Mode for seeding tiles, can be `"full"` or `"exclusive"`. If `"full"`, all the latents affected by the tile will be overridden. If `"exclusive"`, only the latents that are exclusively affected by this tile (and no other tiles) will be overridden.
867
+ seed_reroll_regions (`List[Tuple[int, int, int, int, int]]`, *optional*):
868
+ A list of tuples in the form of `(start_row, end_row, start_column, end_column, seed)` defining regions in pixel space for which the latents will be overridden using the given seed. Takes priority over `seed_tiles`.
869
+ **kwargs (`Dict[str, Any]`, *optional*):
870
+ Additional optional keyword arguments to be passed to the `unet.__call__` and `scheduler.step` functions.
871
+
872
+ Examples:
873
+
874
+ Returns:
875
+ [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] or `tuple`:
876
+ [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] if `return_dict` is True, otherwise a
877
+ `tuple`. When returning a tuple, the first element is a list with the generated images.
878
+ """
879
+
880
+ # 0. Default height and width to unet
881
+ height = height or self.default_sample_size * self.vae_scale_factor
882
+ width = width or self.default_sample_size * self.vae_scale_factor
883
+
884
+ original_size = original_size or (height, width)
885
+ target_size = target_size or (height, width)
886
+
887
+ self._guidance_scale = guidance_scale
888
+ self._clip_skip = clip_skip
889
+ self._cross_attention_kwargs = cross_attention_kwargs
890
+ self._interrupt = False
891
+
892
+ grid_rows = len(prompt)
893
+ grid_cols = len(prompt[0])
894
+
895
+ tiles_mode = [mode.value for mode in self.SeedTilesMode]
896
+
897
+ if isinstance(seed_tiles_mode, str):
898
+ seed_tiles_mode = [[seed_tiles_mode for _ in range(len(row))] for row in prompt]
899
+
900
+ # 1. Check inputs. Raise error if not correct
901
+ self.check_inputs(
902
+ prompt,
903
+ height,
904
+ width,
905
+ grid_cols,
906
+ seed_tiles_mode,
907
+ tiles_mode,
908
+ )
909
+
910
+ if seed_reroll_regions is None:
911
+ seed_reroll_regions = []
912
+
913
+ batch_size = 1
914
+
915
+ device = self._execution_device
916
+
917
+ # recompute the canvas height and width from the tile grid, tile size and tile overlap
918
+ height = tile_height + (grid_rows - 1) * (tile_height - tile_row_overlap)
919
+ width = tile_width + (grid_cols - 1) * (tile_width - tile_col_overlap)
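+ # e.g. with the default 1024 px tiles and 128 px overlap, a 1x3 prompt grid yields a canvas of
+ # height 1024 and width 1024 + 2 * (1024 - 128) = 2816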
920
+
921
+ # 2. Encode input prompt
922
+ lora_scale = (
923
+ self.cross_attention_kwargs.get("scale", None) if self.cross_attention_kwargs is not None else None
924
+ )
925
+ text_embeddings = [
926
+ [
927
+ self.encode_prompt(
928
+ prompt=col,
929
+ device=device,
930
+ num_images_per_prompt=num_images_per_prompt,
931
+ do_classifier_free_guidance=self.do_classifier_free_guidance,
932
+ negative_prompt=negative_prompt,
933
+ prompt_embeds=None,
934
+ negative_prompt_embeds=None,
935
+ pooled_prompt_embeds=None,
936
+ negative_pooled_prompt_embeds=None,
937
+ lora_scale=lora_scale,
938
+ clip_skip=self.clip_skip,
939
+ )
940
+ for col in row
941
+ ]
942
+ for row in prompt
943
+ ]
944
+
945
+ # 3. Prepare latents
946
+ latents_shape = (batch_size, self.unet.config.in_channels, height // 8, width // 8)
947
+ dtype = text_embeddings[0][0][0].dtype
948
+ latents = randn_tensor(latents_shape, generator=generator, device=device, dtype=dtype)
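+ # these latents cover the full canvas; individual tiles/regions may be re-seeded below via
+ # `seed_tiles` and `seed_reroll_regions`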
949
+
950
+ # 3.1 overwrite latents for specific tiles if provided
951
+ if seed_tiles is not None:
952
+ for row in range(grid_rows):
953
+ for col in range(grid_cols):
954
+ if (seed_tile := seed_tiles[row][col]) is not None:
955
+ mode = seed_tiles_mode[row][col]
956
+ if mode == self.SeedTilesMode.FULL.value:
957
+ row_init, row_end, col_init, col_end = _tile2latent_indices(
958
+ row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
959
+ )
960
+ else:
961
+ row_init, row_end, col_init, col_end = _tile2latent_exclusive_indices(
962
+ row,
963
+ col,
964
+ tile_width,
965
+ tile_height,
966
+ tile_row_overlap,
967
+ tile_col_overlap,
968
+ grid_rows,
969
+ grid_cols,
970
+ )
971
+ tile_generator = torch.Generator(device).manual_seed(seed_tile)
972
+ tile_shape = (latents_shape[0], latents_shape[1], row_end - row_init, col_end - col_init)
973
+ latents[:, :, row_init:row_end, col_init:col_end] = torch.randn(
974
+ tile_shape, generator=tile_generator, device=device
975
+ )
976
+
977
+ # 3.2 overwrite again for seed reroll regions
978
+ for row_init, row_end, col_init, col_end, seed_reroll in seed_reroll_regions:
979
+ row_init, row_end, col_init, col_end = _pixel2latent_indices(
980
+ row_init, row_end, col_init, col_end
981
+ ) # to latent space coordinates
982
+ reroll_generator = torch.Generator(device).manual_seed(seed_reroll)
983
+ region_shape = (latents_shape[0], latents_shape[1], row_end - row_init, col_end - col_init)
984
+ latents[:, :, row_init:row_end, col_init:col_end] = torch.randn(
985
+ region_shape, generator=reroll_generator, device=device
986
+ )
987
+
988
+ # 4. Prepare timesteps
989
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
990
+ extra_set_kwargs = {}
991
+ if accepts_offset:
992
+ extra_set_kwargs["offset"] = 1
993
+ timesteps, num_inference_steps = retrieve_timesteps(
994
+ self.scheduler, num_inference_steps, device, None, None, **extra_set_kwargs
995
+ )
996
+
997
+ # if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
998
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
999
+ latents = latents * self.scheduler.sigmas[0]
1000
+
1001
+ # 5. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
1002
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
1003
+
1004
+ # 6. Prepare added time ids & embeddings
1005
+ # text_embeddings order: prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
1006
+ embeddings_and_added_time = []
1007
+ for row in range(grid_rows):
1008
+ addition_embed_type_row = []
1009
+ for col in range(grid_cols):
1010
+ # extract generated values
1011
+ prompt_embeds = text_embeddings[row][col][0]
1012
+ negative_prompt_embeds = text_embeddings[row][col][1]
1013
+ pooled_prompt_embeds = text_embeddings[row][col][2]
1014
+ negative_pooled_prompt_embeds = text_embeddings[row][col][3]
1015
+
1016
+ add_text_embeds = pooled_prompt_embeds
1017
+ if self.text_encoder_2 is None:
1018
+ text_encoder_projection_dim = int(pooled_prompt_embeds.shape[-1])
1019
+ else:
1020
+ text_encoder_projection_dim = self.text_encoder_2.config.projection_dim
1021
+ add_time_ids = self._get_add_time_ids(
1022
+ original_size,
1023
+ crops_coords_top_left,
1024
+ target_size,
1025
+ dtype=prompt_embeds.dtype,
1026
+ text_encoder_projection_dim=text_encoder_projection_dim,
1027
+ )
1028
+ if negative_original_size is not None and negative_target_size is not None:
1029
+ negative_add_time_ids = self._get_add_time_ids(
1030
+ negative_original_size,
1031
+ negative_crops_coords_top_left,
1032
+ negative_target_size,
1033
+ dtype=prompt_embeds.dtype,
1034
+ text_encoder_projection_dim=text_encoder_projection_dim,
1035
+ )
1036
+ else:
1037
+ negative_add_time_ids = add_time_ids
1038
+
1039
+ if self.do_classifier_free_guidance:
1040
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
1041
+ add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
1042
+ add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0)
1043
+
1044
+ prompt_embeds = prompt_embeds.to(device)
1045
+ add_text_embeds = add_text_embeds.to(device)
1046
+ add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)
1047
+ addition_embed_type_row.append((prompt_embeds, add_text_embeds, add_time_ids))
1048
+ embeddings_and_added_time.append(addition_embed_type_row)
1049
+
1050
+ num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
1051
+
1052
+ # 7. Gaussian weight mask used to blend overlapping tile predictions
1053
+ tile_weights = self._gaussian_weights(tile_width, tile_height, batch_size, device, torch.float32)
1054
+
1055
+ # 8. Denoising loop
1056
+ self._num_timesteps = len(timesteps)
1057
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
1058
+ for i, t in enumerate(timesteps):
1059
+ # Diffuse each tile
1060
+ noise_preds = []
1061
+ for row in range(grid_rows):
1062
+ noise_preds_row = []
1063
+ for col in range(grid_cols):
1064
+ if self.interrupt:
1065
+ continue
1066
+ px_row_init, px_row_end, px_col_init, px_col_end = _tile2latent_indices(
1067
+ row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
1068
+ )
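+ # despite the `px_` prefix, these are latent-space indices returned by `_tile2latent_indices`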
1069
+ tile_latents = latents[:, :, px_row_init:px_row_end, px_col_init:px_col_end]
1070
+ # expand the latents if we are doing classifier free guidance
1071
+ latent_model_input = (
1072
+ torch.cat([tile_latents] * 2) if self.do_classifier_free_guidance else tile_latents
1073
+ )
1074
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
1075
+
1076
+ # predict the noise residual
1077
+ added_cond_kwargs = {
1078
+ "text_embeds": embeddings_and_added_time[row][col][1],
1079
+ "time_ids": embeddings_and_added_time[row][col][2],
1080
+ }
1081
+ with torch.amp.autocast(device.type, dtype=dtype, enabled=dtype != self.unet.dtype):
1082
+ noise_pred = self.unet(
1083
+ latent_model_input,
1084
+ t,
1085
+ encoder_hidden_states=embeddings_and_added_time[row][col][0],
1086
+ cross_attention_kwargs=self.cross_attention_kwargs,
1087
+ added_cond_kwargs=added_cond_kwargs,
1088
+ return_dict=False,
1089
+ )[0]
1090
+
1091
+ # perform guidance
1092
+ if self.do_classifier_free_guidance:
1093
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
1094
+ guidance = (
1095
+ guidance_scale
1096
+ if guidance_scale_tiles is None or guidance_scale_tiles[row][col] is None
1097
+ else guidance_scale_tiles[row][col]
1098
+ )
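+ # a per-tile guidance scale from `guidance_scale_tiles`, when provided, overrides the global `guidance_scale`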
1099
+ noise_pred_tile = noise_pred_uncond + guidance * (noise_pred_text - noise_pred_uncond)
1100
+ noise_preds_row.append(noise_pred_tile)
1101
+ noise_preds.append(noise_preds_row)
1102
+
1103
+ # Stitch noise predictions for all tiles
1104
+ noise_pred = torch.zeros(latents.shape, device=device)
1105
+ contributors = torch.zeros(latents.shape, device=device)
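+ # `contributors` accumulates the summed Gaussian weights per latent position so the
+ # weighted tile predictions can be normalized into a proper weighted average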
1106
+
1107
+ # Add each tile contribution to overall latents
1108
+ for row in range(grid_rows):
1109
+ for col in range(grid_cols):
1110
+ px_row_init, px_row_end, px_col_init, px_col_end = _tile2latent_indices(
1111
+ row, col, tile_width, tile_height, tile_row_overlap, tile_col_overlap
1112
+ )
1113
+ noise_pred[:, :, px_row_init:px_row_end, px_col_init:px_col_end] += (
1114
+ noise_preds[row][col] * tile_weights
1115
+ )
1116
+ contributors[:, :, px_row_init:px_row_end, px_col_init:px_col_end] += tile_weights
1117
+
1118
+ # Average overlapping areas with more than 1 contributor
1119
+ noise_pred /= contributors
1120
+ noise_pred = noise_pred.to(dtype)
1121
+
1122
+ # compute the previous noisy sample x_t -> x_t-1
1123
+ latents_dtype = latents.dtype
1124
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
1125
+ if latents.dtype != latents_dtype:
1126
+ if torch.backends.mps.is_available():
1127
+ # some platforms (eg. apple mps) misbehave due to a pytorch bug: https://github.com/pytorch/pytorch/pull/99272
1128
+ latents = latents.to(latents_dtype)
1129
+
1130
+ # update progress bar
1131
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
1132
+ progress_bar.update()
1133
+
1134
+ if XLA_AVAILABLE:
1135
+ xm.mark_step()
1136
+
1137
+ if not output_type == "latent":
1138
+ # make sure the VAE is in float32 mode, as it overflows in float16
1139
+ needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast
1140
+
1141
+ if needs_upcasting:
1142
+ self.upcast_vae()
1143
+ latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
1144
+ elif latents.dtype != self.vae.dtype:
1145
+ if torch.backends.mps.is_available():
1146
+ # some platforms (eg. apple mps) misbehave due to a pytorch bug: https://github.com/pytorch/pytorch/pull/99272
1147
+ self.vae = self.vae.to(latents.dtype)
1148
+
1149
+ # unscale/denormalize the latents
1150
+ # denormalize with the mean and std if available and not None
1151
+ has_latents_mean = hasattr(self.vae.config, "latents_mean") and self.vae.config.latents_mean is not None
1152
+ has_latents_std = hasattr(self.vae.config, "latents_std") and self.vae.config.latents_std is not None
1153
+ if has_latents_mean and has_latents_std:
1154
+ latents_mean = (
1155
+ torch.tensor(self.vae.config.latents_mean).view(1, 4, 1, 1).to(latents.device, latents.dtype)
1156
+ )
1157
+ latents_std = (
1158
+ torch.tensor(self.vae.config.latents_std).view(1, 4, 1, 1).to(latents.device, latents.dtype)
1159
+ )
1160
+ latents = latents * latents_std / self.vae.config.scaling_factor + latents_mean
1161
+ else:
1162
+ latents = latents / self.vae.config.scaling_factor
1163
+
1164
+ image = self.vae.decode(latents, return_dict=False)[0]
1165
+
1166
+ # cast back to fp16 if needed
1167
+ if needs_upcasting:
1168
+ self.vae.to(dtype=torch.float16)
1169
+ else:
1170
+ image = latents
1171
+
1172
+ if not output_type == "latent":
1173
+ # apply watermark if available
1174
+ if self.watermark is not None:
1175
+ image = self.watermark.apply_watermark(image)
1176
+
1177
+ image = self.image_processor.postprocess(image, output_type=output_type)
1178
+
1179
+ # Offload all models
1180
+ self.maybe_free_model_hooks()
1181
+
1182
+ if not return_dict:
1183
+ return (image,)
1184
+
1185
+ return StableDiffusionXLPipelineOutput(images=image)