Diffusers Bot committed on
Commit ac10807 · verified · 1 Parent(s): d3bb672

Upload folder using huggingface_hub

v0.7.0/README.md ADDED
@@ -0,0 +1,503 @@
1
+ # Community Examples
2
+
3
+ > **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
4
+
5
+ **Community** examples consist of both inference and training examples that have been added by the community.
6
+ Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out.
7
 + If a community pipeline doesn't work as expected, please open an issue and ping its author.
8
+
9
+ | Example | Description | Code Example | Colab | Author |
10
+ |:---------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------:|
11
+ | CLIP Guided Stable Diffusion | Doing CLIP guidance for text to image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) |
12
+ | One Step U-Net (Dummy) | Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
13
+ | Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) |
14
+ | Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
15
+ | Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) |
16
+ | Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech)
17
+ | Wild Card Stable Diffusion | Stable Diffusion Pipeline that supports prompts that contain wildcard terms (indicated by surrounding double underscores), with values instantiated randomly from a corresponding txt file or a dictionary of possible values | [Wildcard Stable Diffusion](#wildcard-stable-diffusion) | - | [Shyam Sudhakaran](https://github.com/shyamsn97) |
18
+ | Composable Stable Diffusion| Stable Diffusion Pipeline that supports prompts that contain "|" in prompts (as an AND condition) and weights (separated by "|" as well) to positively / negatively weight prompts. | [Composable Stable Diffusion](#composable-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
19
+ | Seed Resizing Stable Diffusion| Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | - | [Mark Rich](https://github.com/MarkRich) |
20
 + | Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image | [Imagic Stable Diffusion](#imagic-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
22
+
23
+
24
 + To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline`, set to the filename of one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
25
+ ```py
26
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="filename_in_the_community_folder")
27
+ ```
28
+
29
+ ## Example usages
30
+
31
+ ### CLIP Guided Stable Diffusion
32
+
33
+ CLIP guided stable diffusion can help to generate more realistic images
34
+ by guiding stable diffusion at every denoising step with an additional CLIP model.
35
+
36
+ The following code requires roughly 12GB of GPU RAM.
37
+
38
+ ```python
39
+ from diffusers import DiffusionPipeline
40
+ from transformers import CLIPFeatureExtractor, CLIPModel
41
+ import torch
42
+
43
+
44
+ feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
45
+ clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)
46
+
47
+
48
+ guided_pipeline = DiffusionPipeline.from_pretrained(
49
+ "runwayml/stable-diffusion-v1-5",
50
+ custom_pipeline="clip_guided_stable_diffusion",
51
+ clip_model=clip_model,
52
+ feature_extractor=feature_extractor,
53
+ revision="fp16",
54
+ torch_dtype=torch.float16,
55
+ )
56
+ guided_pipeline.enable_attention_slicing()
57
+ guided_pipeline = guided_pipeline.to("cuda")
58
+
59
+ prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
60
+
61
+ generator = torch.Generator(device="cuda").manual_seed(0)
62
+ images = []
63
+ for i in range(4):
64
+ image = guided_pipeline(
65
+ prompt,
66
+ num_inference_steps=50,
67
+ guidance_scale=7.5,
68
+ clip_guidance_scale=100,
69
+ num_cutouts=4,
70
+ use_cutouts=False,
71
+ generator=generator,
72
+ ).images[0]
73
+ images.append(image)
74
+
75
+ # save images locally
76
+ for i, img in enumerate(images):
77
+ img.save(f"./clip_guided_sd/image_{i}.png")
78
+ ```
79
+
80
 + The `images` list contains PIL images that can be saved locally or displayed directly in a Google Colab.
81
 + Generated images tend to be of higher quality than those produced natively with Stable Diffusion. E.g. the above script generates the following images:
82
+
83
 + ![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg)
84
+
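 + As a minimal sketch (not part of the pipeline itself), the four returned PIL images can be tiled into a single grid for saving or display; the `image_grid` helper below is only an illustration:
 +
 + ```python
 + from PIL import Image
 +
 +
 + def image_grid(imgs, rows, cols):
 +     # paste the individual PIL images side by side onto one canvas
 +     w, h = imgs[0].size
 +     grid = Image.new("RGB", size=(cols * w, rows * h))
 +     for i, img in enumerate(imgs):
 +         grid.paste(img, box=(i % cols * w, i // cols * h))
 +     return grid
 +
 +
 + grid = image_grid(images, rows=1, cols=4)
 + grid.save("./clip_guided_sd/grid.png")
 + ```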
85
+ ### One Step Unet
86
+
87
+ The dummy "one-step-unet" can be run as follows:
88
+
89
+ ```python
90
+ from diffusers import DiffusionPipeline
91
+
92
+ pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
93
+ pipe()
94
+ ```
95
+
96
+ **Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841).
97
+
98
+ ### Stable Diffusion Interpolation
99
+
100
 + The following code requires a GPU with at least 8GB of VRAM and should take approximately 5 minutes to run.
101
+
102
+ ```python
103
+ from diffusers import DiffusionPipeline
104
+ import torch
105
+
106
+ pipe = DiffusionPipeline.from_pretrained(
107
+ "CompVis/stable-diffusion-v1-4",
108
+ revision='fp16',
109
+ torch_dtype=torch.float16,
110
+ safety_checker=None, # Very important for videos...lots of false positives while interpolating
111
+ custom_pipeline="interpolate_stable_diffusion",
112
+ ).to('cuda')
113
+ pipe.enable_attention_slicing()
114
+
115
+ frame_filepaths = pipe.walk(
116
+ prompts=['a dog', 'a cat', 'a horse'],
117
+ seeds=[42, 1337, 1234],
118
+ num_interpolation_steps=16,
119
+ output_dir='./dreams',
120
+ batch_size=4,
121
+ height=512,
122
+ width=512,
123
+ guidance_scale=8.5,
124
+ num_inference_steps=50,
125
+ )
126
+ ```
127
+
128
 + The `walk(...)` function returns a list of file paths for the images saved under the folder defined in `output_dir`. You can use these images to create videos of Stable Diffusion, for example by stitching them together as sketched below.
129
+
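 + As a minimal sketch (not part of the pipeline itself), the saved frames can be stitched into a video. This assumes `imageio` with its ffmpeg plugin (`imageio-ffmpeg`) is installed and that `frame_filepaths` is the list returned by `walk(...)` above:
 +
 + ```python
 + import imageio
 +
 + # read the interpolation frames back in the order they were generated
 + frames = [imageio.imread(path) for path in frame_filepaths]
 +
 + # write them out as a short video; fps is a free parameter
 + imageio.mimsave("./dreams/interpolation.mp4", frames, fps=8)
 + ```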
130
+ > **Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.**
131
+
132
+ ### Stable Diffusion Mega
133
+
134
+ The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class.
135
+
136
+ ```python
137
+ #!/usr/bin/env python3
138
+ from diffusers import DiffusionPipeline
139
+ import PIL
140
+ import requests
141
+ from io import BytesIO
142
+ import torch
143
+
144
+
145
+ def download_image(url):
146
+ response = requests.get(url)
147
+ return PIL.Image.open(BytesIO(response.content)).convert("RGB")
148
+
149
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_mega", torch_dtype=torch.float16, revision="fp16")
150
+ pipe.to("cuda")
151
+ pipe.enable_attention_slicing()
152
+
153
+
154
+ ### Text-to-Image
155
+
156
+ images = pipe.text2img("An astronaut riding a horse").images
157
+
158
+ ### Image-to-Image
159
+
160
+ init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
161
+
162
+ prompt = "A fantasy landscape, trending on artstation"
163
+
164
+ images = pipe.img2img(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5).images
165
+
166
+ ### Inpainting
167
+
168
+ img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
169
+ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
170
+ init_image = download_image(img_url).resize((512, 512))
171
+ mask_image = download_image(mask_url).resize((512, 512))
172
+
173
+ prompt = "a cat sitting on a bench"
174
+ images = pipe.inpaint(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images
175
+ ```
176
+
177
 + As shown above, this single pipeline can run "text-to-image", "image-to-image", and "inpainting" all in one class.
178
+
179
+ ### Long Prompt Weighting Stable Diffusion
180
+
181
 + The pipeline lets you input a prompt without the 77-token length limit. You can increase a word's weighting by using "()" or decrease it by using "[]".
182
 + The pipeline also lets you use the main use cases of the Stable Diffusion pipeline in a single class.
183
+
184
+ #### pytorch
185
+
186
+ ```python
187
+ from diffusers import DiffusionPipeline
188
+ import torch
189
+
190
+ pipe = DiffusionPipeline.from_pretrained(
191
+ 'hakurei/waifu-diffusion',
192
+ custom_pipeline="lpw_stable_diffusion",
193
+ revision="fp16",
194
+ torch_dtype=torch.float16
195
+ )
196
+ pipe=pipe.to("cuda")
197
+
198
+ prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
199
+ neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
200
+
201
+ pipe.text2img(prompt, negative_prompt=neg_prompt, width=512,height=512,max_embeddings_multiples=3).images[0]
202
+
203
+ ```
204
+
205
+ #### onnxruntime
206
+
207
+ ```python
208
+ from diffusers import DiffusionPipeline
209
+ import torch
210
+
211
+ pipe = DiffusionPipeline.from_pretrained(
212
+ 'CompVis/stable-diffusion-v1-4',
213
+ custom_pipeline="lpw_stable_diffusion_onnx",
214
+ revision="onnx",
215
+ provider="CUDAExecutionProvider"
216
+ )
217
+
218
+ prompt = "a photo of an astronaut riding a horse on mars, best quality"
219
+ neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
220
+
221
+ pipe.text2img(prompt,negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
222
+
223
+ ```
224
+
225
 + If you see the warning `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`, do not worry, this is expected behavior.
226
+
227
+ ### Speech to Image
228
+
229
 + The following code generates an image from an audio sample using the pre-trained OpenAI `whisper-small` model and Stable Diffusion.
230
+
231
+ ```Python
232
+ import torch
233
+
234
+ import matplotlib.pyplot as plt
235
+ from datasets import load_dataset
236
+ from diffusers import DiffusionPipeline
237
+ from transformers import (
238
+ WhisperForConditionalGeneration,
239
+ WhisperProcessor,
240
+ )
241
+
242
+
243
+ device = "cuda" if torch.cuda.is_available() else "cpu"
244
+
245
+ ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
246
+
247
+ audio_sample = ds[3]
248
+
249
+ text = audio_sample["text"].lower()
250
+ speech_data = audio_sample["audio"]["array"]
251
+
252
+ model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
253
+ processor = WhisperProcessor.from_pretrained("openai/whisper-small")
254
+
255
+ diffuser_pipeline = DiffusionPipeline.from_pretrained(
256
+ "CompVis/stable-diffusion-v1-4",
257
+ custom_pipeline="speech_to_image_diffusion",
258
+ speech_model=model,
259
+ speech_processor=processor,
260
+ revision="fp16",
261
+ torch_dtype=torch.float16,
262
+ )
263
+
264
+ diffuser_pipeline.enable_attention_slicing()
265
+ diffuser_pipeline = diffuser_pipeline.to(device)
266
+
267
+ output = diffuser_pipeline(speech_data)
268
+ plt.imshow(output.images[0])
269
+ ```
270
+ This example produces the following image:
271
+
272
+ ![image](https://user-images.githubusercontent.com/45072645/196901736-77d9c6fc-63ee-4072-90b0-dc8b903d63e3.png)
273
+
274
+ ### Wildcard Stable Diffusion
275
 + Following the great examples from https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py and https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts#wildcards, here's a minimal implementation that allows users to add "wildcards", denoted by `__wildcard__`, to prompts. Wildcards act as placeholders for randomly sampled values given by either a dictionary or a corresponding `.txt` file. For example:
276
+
277
+ Say we have a prompt:
278
+
279
+ ```
280
+ prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
281
+ ```
282
+
283
 + We can then define possible values to be sampled for `animal`, `object`, and `clothing`. These can be read from a `.txt` file with the same name as the category.
284
+
285
 + The possible values can also be defined / combined by using a dictionary like: `{"animal": ["dog", "cat", "mouse"]}`.
286
+
287
+ The actual pipeline works just like `StableDiffusionPipeline`, except the `__call__` method takes in:
288
+
289
 + - `wildcard_files`: list of file paths for wildcard replacement
290
 + - `wildcard_option_dict`: dict with key as `wildcard` and values as a list of possible replacements
291
 + - `num_prompt_samples`: number of prompts to sample, uniformly sampling wildcards
292
+
293
+ A full example:
294
+
295
+ create `animal.txt`, with contents like:
296
+
297
+ ```
298
+ dog
299
+ cat
300
+ mouse
301
+ ```
302
+
303
+ create `object.txt`, with contents like:
304
+
305
+ ```
306
+ chair
307
+ sofa
308
+ bench
309
+ ```
310
+
311
+ ```python
312
+ from diffusers import DiffusionPipeline
313
+ import torch
314
+
315
+ pipe = DiffusionPipeline.from_pretrained(
316
+ "CompVis/stable-diffusion-v1-4",
317
+ custom_pipeline="wildcard_stable_diffusion",
318
+ revision="fp16",
319
+ torch_dtype=torch.float16,
320
+ )
321
+ prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
322
+ out = pipe(
323
+ prompt,
324
+ wildcard_option_dict={
325
+ "clothing":["hat", "shirt", "scarf", "beret"]
326
+ },
327
+ wildcard_files=["object.txt", "animal.txt"],
328
+ num_prompt_samples=1
329
+ )
330
+ ```
331
+
332
+
333
 + ### Composable Stable Diffusion
334
 + Composable Stable Diffusion lets you combine multiple prompts joined by "|" (treated as an AND condition), optionally with per-prompt weights (also separated by "|") to positively or negatively weight each prompt:
335
+ ```python
336
+ import torch as th
337
+ import numpy as np
338
+ import torchvision.utils as tvu
339
+ from diffusers import DiffusionPipeline
340
+
341
+ has_cuda = th.cuda.is_available()
342
+ device = th.device('cpu' if not has_cuda else 'cuda')
343
+
344
+ pipe = DiffusionPipeline.from_pretrained(
345
+ "CompVis/stable-diffusion-v1-4",
346
+ use_auth_token=True,
347
+ custom_pipeline="composable_stable_diffusion",
348
+ ).to(device)
349
+
350
+
351
+ def dummy(images, **kwargs):
352
+ return images, False
353
+
354
+ pipe.safety_checker = dummy
355
+
356
+ images = []
357
+ generator = th.Generator("cuda").manual_seed(0)
358
+
359
+ seed = 0
360
+ prompt = "a forest | a camel"
361
+ weights = " 1 | 1" # Equal weight to each prompt. Can be negative
362
+
363
+ images = []
364
+ for i in range(4):
365
+ res = pipe(
366
+ prompt,
367
+ guidance_scale=7.5,
368
+ num_inference_steps=50,
369
+ weights=weights,
370
+ generator=generator)
371
+ image = res.images[0]
372
+ images.append(image)
373
+
374
+ for i, img in enumerate(images):
375
+ img.save(f"./composable_diffusion/image_{i}.png")
376
+ ```
377
+
378
+ ### Imagic Stable Diffusion
379
 + Allows you to edit an image using Stable Diffusion: first call `pipe.train(...)` with a target prompt and the input image, then generate edited variants by calling the pipeline with different `alpha` values.
380
+
381
+ ```python
382
+ import requests
383
+ from PIL import Image
384
+ from io import BytesIO
385
+ import torch
386
+ from diffusers import DiffusionPipeline, DDIMScheduler
387
+ has_cuda = torch.cuda.is_available()
388
+ device = torch.device('cpu' if not has_cuda else 'cuda')
389
+ pipe = DiffusionPipeline.from_pretrained(
390
+ "CompVis/stable-diffusion-v1-4",
391
+ safety_checker=None,
392
+ use_auth_token=True,
393
+ custom_pipeline="imagic_stable_diffusion",
394
+ scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
395
+ ).to(device)
396
 + generator = torch.Generator("cuda").manual_seed(0)
397
+ seed = 0
398
+ prompt = "A photo of Barack Obama smiling with a big grin"
399
+ url = 'https://www.dropbox.com/s/6tlwzr73jd1r9yk/obama.png?dl=1'
400
+ response = requests.get(url)
401
+ init_image = Image.open(BytesIO(response.content)).convert("RGB")
402
+ init_image = init_image.resize((512, 512))
403
+ res = pipe.train(
404
+ prompt,
405
+ init_image,
406
+ guidance_scale=7.5,
407
+ num_inference_steps=50,
408
+ generator=generator)
409
+ res = pipe(alpha=1)
410
+ image = res.images[0]
411
+ image.save('./imagic/imagic_image_alpha_1.png')
412
+ res = pipe(alpha=1.5)
413
+ image = res.images[0]
414
+ image.save('./imagic/imagic_image_alpha_1_5.png')
415
+ res = pipe(alpha=2)
416
+ image = res.images[0]
417
+ image.save('./imagic/imagic_image_alpha_2.png')
418
+ ```
419
+
420
+ ### Seed Resizing
421
 + Test seed resizing. First generate an image at 512 by 512, then generate an image with the same seed at 512 by 592 using seed resizing, and finally generate a 512 by 592 image with the original Stable Diffusion pipeline.
422
+
423
+ ```python
424
+ import torch as th
425
+ import numpy as np
426
+ from diffusers import DiffusionPipeline
427
+
428
+ has_cuda = th.cuda.is_available()
429
+ device = th.device('cpu' if not has_cuda else 'cuda')
430
+
431
+ pipe = DiffusionPipeline.from_pretrained(
432
+ "CompVis/stable-diffusion-v1-4",
433
+ use_auth_token=True,
434
+ custom_pipeline="seed_resize_stable_diffusion"
435
+ ).to(device)
436
+
437
+ def dummy(images, **kwargs):
438
+ return images, False
439
+
440
+ pipe.safety_checker = dummy
441
+
442
+
443
+ images = []
444
+ th.manual_seed(0)
445
+ generator = th.Generator("cuda").manual_seed(0)
446
+
447
+ seed = 0
448
+ prompt = "A painting of a futuristic cop"
449
+
450
+ width = 512
451
+ height = 512
452
+
453
+ res = pipe(
454
+ prompt,
455
+ guidance_scale=7.5,
456
+ num_inference_steps=50,
457
+ height=height,
458
+ width=width,
459
+ generator=generator)
460
+ image = res.images[0]
461
+ image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
462
+
463
+
464
+ th.manual_seed(0)
465
+ generator = th.Generator("cuda").manual_seed(0)
466
+
467
+ pipe = DiffusionPipeline.from_pretrained(
468
+ "CompVis/stable-diffusion-v1-4",
469
+ use_auth_token=True,
470
 + custom_pipeline="seed_resize_stable_diffusion"
471
+ ).to(device)
472
+
473
+ width = 512
474
+ height = 592
475
+
476
+ res = pipe(
477
+ prompt,
478
+ guidance_scale=7.5,
479
+ num_inference_steps=50,
480
+ height=height,
481
+ width=width,
482
+ generator=generator)
483
+ image = res.images[0]
484
+ image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
485
+
486
+ pipe_compare = DiffusionPipeline.from_pretrained(
487
+ "CompVis/stable-diffusion-v1-4",
488
+ use_auth_token=True,
489
 + custom_pipeline="seed_resize_stable_diffusion"
490
+ ).to(device)
491
+
492
+ res = pipe_compare(
493
+ prompt,
494
+ guidance_scale=7.5,
495
+ num_inference_steps=50,
496
+ height=height,
497
+ width=width,
498
+ generator=generator
499
+ )
500
+
501
+ image = res.images[0]
502
+ image.save('./seed_resize/seed_resize_{w}_{h}_image_compare.png'.format(w=width, h=height))
503
+ ```
v0.7.0/clip_guided_stable_diffusion.py ADDED
@@ -0,0 +1,324 @@
1
+ import inspect
2
+ from typing import List, Optional, Union
3
+
4
+ import torch
5
+ from torch import nn
6
+ from torch.nn import functional as F
7
+
8
+ from diffusers import AutoencoderKL, DiffusionPipeline, LMSDiscreteScheduler, PNDMScheduler, UNet2DConditionModel
9
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
10
+ from torchvision import transforms
11
+ from transformers import CLIPFeatureExtractor, CLIPModel, CLIPTextModel, CLIPTokenizer
12
+
13
+
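 + # Randomly crops `num_cutouts` square patches of varying size from an image batch and
 + # resizes each one to `cut_size`, so that CLIP can score several views of the decoded image.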
14
+ class MakeCutouts(nn.Module):
15
+ def __init__(self, cut_size, cut_power=1.0):
16
+ super().__init__()
17
+
18
+ self.cut_size = cut_size
19
+ self.cut_power = cut_power
20
+
21
+ def forward(self, pixel_values, num_cutouts):
22
+ sideY, sideX = pixel_values.shape[2:4]
23
+ max_size = min(sideX, sideY)
24
+ min_size = min(sideX, sideY, self.cut_size)
25
+ cutouts = []
26
+ for _ in range(num_cutouts):
27
+ size = int(torch.rand([]) ** self.cut_power * (max_size - min_size) + min_size)
28
+ offsetx = torch.randint(0, sideX - size + 1, ())
29
+ offsety = torch.randint(0, sideY - size + 1, ())
30
+ cutout = pixel_values[:, :, offsety : offsety + size, offsetx : offsetx + size]
31
+ cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
32
+ return torch.cat(cutouts)
33
+
34
+
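 + # Spherical distance loss between L2-normalized embeddings (based on the great-circle distance);
 + # used below as the CLIP guidance objective between image and text embeddings.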
35
+ def spherical_dist_loss(x, y):
36
+ x = F.normalize(x, dim=-1)
37
+ y = F.normalize(y, dim=-1)
38
+ return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
39
+
40
+
41
+ def set_requires_grad(model, value):
42
+ for param in model.parameters():
43
+ param.requires_grad = value
44
+
45
+
46
+ class CLIPGuidedStableDiffusion(DiffusionPipeline):
47
+ """CLIP guided stable diffusion based on the amazing repo by @crowsonkb and @Jack000
48
+ - https://github.com/Jack000/glid-3-xl
49
+ - https://github.dev/crowsonkb/k-diffusion
50
+ """
51
+
52
+ def __init__(
53
+ self,
54
+ vae: AutoencoderKL,
55
+ text_encoder: CLIPTextModel,
56
+ clip_model: CLIPModel,
57
+ tokenizer: CLIPTokenizer,
58
+ unet: UNet2DConditionModel,
59
+ scheduler: Union[PNDMScheduler, LMSDiscreteScheduler],
60
+ feature_extractor: CLIPFeatureExtractor,
61
+ ):
62
+ super().__init__()
63
+ self.register_modules(
64
+ vae=vae,
65
+ text_encoder=text_encoder,
66
+ clip_model=clip_model,
67
+ tokenizer=tokenizer,
68
+ unet=unet,
69
+ scheduler=scheduler,
70
+ feature_extractor=feature_extractor,
71
+ )
72
+
73
+ self.normalize = transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std)
74
+ self.make_cutouts = MakeCutouts(feature_extractor.size)
75
+
76
+ set_requires_grad(self.text_encoder, False)
77
+ set_requires_grad(self.clip_model, False)
78
+
79
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
80
+ if slice_size == "auto":
81
+ # half the attention head size is usually a good trade-off between
82
+ # speed and memory
83
+ slice_size = self.unet.config.attention_head_dim // 2
84
+ self.unet.set_attention_slice(slice_size)
85
+
86
+ def disable_attention_slicing(self):
87
+ self.enable_attention_slicing(None)
88
+
89
+ def freeze_vae(self):
90
+ set_requires_grad(self.vae, False)
91
+
92
+ def unfreeze_vae(self):
93
+ set_requires_grad(self.vae, True)
94
+
95
+ def freeze_unet(self):
96
+ set_requires_grad(self.unet, False)
97
+
98
+ def unfreeze_unet(self):
99
+ set_requires_grad(self.unet, True)
100
+
101
+ @torch.enable_grad()
102
+ def cond_fn(
103
+ self,
104
+ latents,
105
+ timestep,
106
+ index,
107
+ text_embeddings,
108
+ noise_pred_original,
109
+ text_embeddings_clip,
110
+ clip_guidance_scale,
111
+ num_cutouts,
112
+ use_cutouts=True,
113
+ ):
114
+ latents = latents.detach().requires_grad_()
115
+
116
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
117
+ sigma = self.scheduler.sigmas[index]
118
+ # the model input needs to be scaled to match the continuous ODE formulation in K-LMS
119
+ latent_model_input = latents / ((sigma**2 + 1) ** 0.5)
120
+ else:
121
+ latent_model_input = latents
122
+
123
+ # predict the noise residual
124
+ noise_pred = self.unet(latent_model_input, timestep, encoder_hidden_states=text_embeddings).sample
125
+
126
+ if isinstance(self.scheduler, PNDMScheduler):
127
+ alpha_prod_t = self.scheduler.alphas_cumprod[timestep]
128
+ beta_prod_t = 1 - alpha_prod_t
129
+ # compute predicted original sample from predicted noise also called
130
+ # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
131
+ pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5)
132
+
133
+ fac = torch.sqrt(beta_prod_t)
134
+ sample = pred_original_sample * (fac) + latents * (1 - fac)
135
+ elif isinstance(self.scheduler, LMSDiscreteScheduler):
136
+ sigma = self.scheduler.sigmas[index]
137
+ sample = latents - sigma * noise_pred
138
+ else:
139
+ raise ValueError(f"scheduler type {type(self.scheduler)} not supported")
140
+
141
+ sample = 1 / 0.18215 * sample
142
+ image = self.vae.decode(sample).sample
143
+ image = (image / 2 + 0.5).clamp(0, 1)
144
+
145
+ if use_cutouts:
146
+ image = self.make_cutouts(image, num_cutouts)
147
+ else:
148
+ image = transforms.Resize(self.feature_extractor.size)(image)
149
+ image = self.normalize(image).to(latents.dtype)
150
+
151
+ image_embeddings_clip = self.clip_model.get_image_features(image)
152
+ image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
153
+
154
+ if use_cutouts:
155
+ dists = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip)
156
+ dists = dists.view([num_cutouts, sample.shape[0], -1])
157
+ loss = dists.sum(2).mean(0).sum() * clip_guidance_scale
158
+ else:
159
+ loss = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip).mean() * clip_guidance_scale
160
+
161
+ grads = -torch.autograd.grad(loss, latents)[0]
162
+
163
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
164
+ latents = latents.detach() + grads * (sigma**2)
165
+ noise_pred = noise_pred_original
166
+ else:
167
+ noise_pred = noise_pred_original - torch.sqrt(beta_prod_t) * grads
168
+ return noise_pred, latents
169
+
170
+ @torch.no_grad()
171
+ def __call__(
172
+ self,
173
+ prompt: Union[str, List[str]],
174
+ height: Optional[int] = 512,
175
+ width: Optional[int] = 512,
176
+ num_inference_steps: Optional[int] = 50,
177
+ guidance_scale: Optional[float] = 7.5,
178
+ num_images_per_prompt: Optional[int] = 1,
179
+ clip_guidance_scale: Optional[float] = 100,
180
+ clip_prompt: Optional[Union[str, List[str]]] = None,
181
+ num_cutouts: Optional[int] = 4,
182
+ use_cutouts: Optional[bool] = True,
183
+ generator: Optional[torch.Generator] = None,
184
+ latents: Optional[torch.FloatTensor] = None,
185
+ output_type: Optional[str] = "pil",
186
+ return_dict: bool = True,
187
+ ):
188
+ if isinstance(prompt, str):
189
+ batch_size = 1
190
+ elif isinstance(prompt, list):
191
+ batch_size = len(prompt)
192
+ else:
193
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
194
+
195
+ if height % 8 != 0 or width % 8 != 0:
196
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
197
+
198
+ # get prompt text embeddings
199
+ text_input = self.tokenizer(
200
+ prompt,
201
+ padding="max_length",
202
+ max_length=self.tokenizer.model_max_length,
203
+ truncation=True,
204
+ return_tensors="pt",
205
+ )
206
+ text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
207
+ # duplicate text embeddings for each generation per prompt
208
+ text_embeddings = text_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
209
+
210
+ if clip_guidance_scale > 0:
211
+ if clip_prompt is not None:
212
+ clip_text_input = self.tokenizer(
213
+ clip_prompt,
214
+ padding="max_length",
215
+ max_length=self.tokenizer.model_max_length,
216
+ truncation=True,
217
+ return_tensors="pt",
218
+ ).input_ids.to(self.device)
219
+ else:
220
+ clip_text_input = text_input.input_ids.to(self.device)
221
+ text_embeddings_clip = self.clip_model.get_text_features(clip_text_input)
222
+ text_embeddings_clip = text_embeddings_clip / text_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
223
+ # duplicate text embeddings clip for each generation per prompt
224
+ text_embeddings_clip = text_embeddings_clip.repeat_interleave(num_images_per_prompt, dim=0)
225
+
226
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
227
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
228
+ # corresponds to doing no classifier free guidance.
229
+ do_classifier_free_guidance = guidance_scale > 1.0
230
+ # get unconditional embeddings for classifier free guidance
231
+ if do_classifier_free_guidance:
232
+ max_length = text_input.input_ids.shape[-1]
233
+ uncond_input = self.tokenizer([""], padding="max_length", max_length=max_length, return_tensors="pt")
234
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
235
+ # duplicate unconditional embeddings for each generation per prompt
236
+ uncond_embeddings = uncond_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
237
+
238
+ # For classifier free guidance, we need to do two forward passes.
239
+ # Here we concatenate the unconditional and text embeddings into a single batch
240
+ # to avoid doing two forward passes
241
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
242
+
243
+ # get the initial random noise unless the user supplied it
244
+
245
+ # Unlike in other pipelines, latents need to be generated in the target device
246
+ # for 1-to-1 results reproducibility with the CompVis implementation.
247
+ # However this currently doesn't work in `mps`.
248
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
249
+ latents_dtype = text_embeddings.dtype
250
+ if latents is None:
251
+ if self.device.type == "mps":
252
+ # randn does not work reproducibly on mps
253
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
254
+ self.device
255
+ )
256
+ else:
257
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
258
+ else:
259
+ if latents.shape != latents_shape:
260
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
261
+ latents = latents.to(self.device)
262
+
263
+ # set timesteps
264
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
265
+ extra_set_kwargs = {}
266
+ if accepts_offset:
267
+ extra_set_kwargs["offset"] = 1
268
+
269
+ self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
270
+
271
+ # Some schedulers like PNDM have timesteps as arrays
272
+ # It's more optimized to move all timesteps to correct device beforehand
273
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
274
+
275
+ # scale the initial noise by the standard deviation required by the scheduler
276
+ latents = latents * self.scheduler.init_noise_sigma
277
+
278
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
279
+ # expand the latents if we are doing classifier free guidance
280
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
281
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
282
+
283
+ # predict the noise residual
284
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
285
+
286
+ # perform classifier free guidance
287
+ if do_classifier_free_guidance:
288
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
289
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
290
+
291
+ # perform clip guidance
292
+ if clip_guidance_scale > 0:
293
+ text_embeddings_for_guidance = (
294
+ text_embeddings.chunk(2)[1] if do_classifier_free_guidance else text_embeddings
295
+ )
296
+ noise_pred, latents = self.cond_fn(
297
+ latents,
298
+ t,
299
+ i,
300
+ text_embeddings_for_guidance,
301
+ noise_pred,
302
+ text_embeddings_clip,
303
+ clip_guidance_scale,
304
+ num_cutouts,
305
+ use_cutouts,
306
+ )
307
+
308
+ # compute the previous noisy sample x_t -> x_t-1
309
+ latents = self.scheduler.step(noise_pred, t, latents).prev_sample
310
+
311
+ # scale and decode the image latents with vae
312
+ latents = 1 / 0.18215 * latents
313
+ image = self.vae.decode(latents).sample
314
+
315
+ image = (image / 2 + 0.5).clamp(0, 1)
316
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
317
+
318
+ if output_type == "pil":
319
+ image = self.numpy_to_pil(image)
320
+
321
+ if not return_dict:
322
+ return (image, None)
323
+
324
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
v0.7.0/composable_stable_diffusion.py ADDED
@@ -0,0 +1,329 @@
1
+ """
2
+ modified based on diffusion library from Huggingface: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
3
+ """
4
+ import inspect
5
+ import warnings
6
+ from typing import List, Optional, Union
7
+
8
+ import torch
9
+
10
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
11
+ from diffusers.pipeline_utils import DiffusionPipeline
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
13
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
14
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
15
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
16
+
17
+
18
+ class ComposableStableDiffusionPipeline(DiffusionPipeline):
19
+ r"""
20
+ Pipeline for text-to-image generation using Stable Diffusion.
21
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
22
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
23
+ Args:
24
+ vae ([`AutoencoderKL`]):
25
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
26
+ text_encoder ([`CLIPTextModel`]):
27
+ Frozen text-encoder. Stable Diffusion uses the text portion of
28
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
29
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
30
+ tokenizer (`CLIPTokenizer`):
31
+ Tokenizer of class
32
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
33
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
34
+ scheduler ([`SchedulerMixin`]):
35
 + A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
36
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
37
+ safety_checker ([`StableDiffusionSafetyChecker`]):
38
 + Classification module that estimates whether generated images could be considered offensive or harmful.
39
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
40
+ feature_extractor ([`CLIPFeatureExtractor`]):
41
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
42
+ """
43
+
44
+ def __init__(
45
+ self,
46
+ vae: AutoencoderKL,
47
+ text_encoder: CLIPTextModel,
48
+ tokenizer: CLIPTokenizer,
49
+ unet: UNet2DConditionModel,
50
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
51
+ safety_checker: StableDiffusionSafetyChecker,
52
+ feature_extractor: CLIPFeatureExtractor,
53
+ ):
54
+ super().__init__()
55
+ self.register_modules(
56
+ vae=vae,
57
+ text_encoder=text_encoder,
58
+ tokenizer=tokenizer,
59
+ unet=unet,
60
+ scheduler=scheduler,
61
+ safety_checker=safety_checker,
62
+ feature_extractor=feature_extractor,
63
+ )
64
+
65
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
66
+ r"""
67
+ Enable sliced attention computation.
68
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
69
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
70
+ Args:
71
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
72
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
73
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
74
+ `attention_head_dim` must be a multiple of `slice_size`.
75
+ """
76
+ if slice_size == "auto":
77
+ # half the attention head size is usually a good trade-off between
78
+ # speed and memory
79
+ slice_size = self.unet.config.attention_head_dim // 2
80
+ self.unet.set_attention_slice(slice_size)
81
+
82
+ def disable_attention_slicing(self):
83
+ r"""
84
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
85
+ back to computing attention in one step.
86
+ """
87
+ # set slice_size = `None` to disable `attention slicing`
88
+ self.enable_attention_slicing(None)
89
+
90
+ @torch.no_grad()
91
+ def __call__(
92
+ self,
93
+ prompt: Union[str, List[str]],
94
+ height: Optional[int] = 512,
95
+ width: Optional[int] = 512,
96
+ num_inference_steps: Optional[int] = 50,
97
+ guidance_scale: Optional[float] = 7.5,
98
+ eta: Optional[float] = 0.0,
99
+ generator: Optional[torch.Generator] = None,
100
+ latents: Optional[torch.FloatTensor] = None,
101
+ output_type: Optional[str] = "pil",
102
+ return_dict: bool = True,
103
+ weights: Optional[str] = "",
104
+ **kwargs,
105
+ ):
106
+ r"""
107
+ Function invoked when calling the pipeline for generation.
108
+ Args:
109
+ prompt (`str` or `List[str]`):
110
+ The prompt or prompts to guide the image generation.
111
+ height (`int`, *optional*, defaults to 512):
112
+ The height in pixels of the generated image.
113
+ width (`int`, *optional*, defaults to 512):
114
+ The width in pixels of the generated image.
115
+ num_inference_steps (`int`, *optional*, defaults to 50):
116
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
117
+ expense of slower inference.
118
+ guidance_scale (`float`, *optional*, defaults to 7.5):
119
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
120
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
121
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
122
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
123
+ usually at the expense of lower image quality.
124
+ eta (`float`, *optional*, defaults to 0.0):
125
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
126
+ [`schedulers.DDIMScheduler`], will be ignored for others.
127
+ generator (`torch.Generator`, *optional*):
128
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
129
+ deterministic.
130
+ latents (`torch.FloatTensor`, *optional*):
131
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
132
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
133
+ tensor will ge generated by sampling using the supplied random `generator`.
134
+ output_type (`str`, *optional*, defaults to `"pil"`):
135
+ The output format of the generate image. Choose between
136
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
137
+ return_dict (`bool`, *optional*, defaults to `True`):
138
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
139
+ plain tuple.
140
+ Returns:
141
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
142
 + [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
143
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
144
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
145
+ (nsfw) content, according to the `safety_checker`.
146
+ """
147
+
148
+ if "torch_device" in kwargs:
149
+ device = kwargs.pop("torch_device")
150
+ warnings.warn(
151
+ "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
152
+ " Consider using `pipe.to(torch_device)` instead."
153
+ )
154
+
155
+ # Set device as before (to be removed in 0.3.0)
156
+ if device is None:
157
+ device = "cuda" if torch.cuda.is_available() else "cpu"
158
+ self.to(device)
159
+
160
+ if isinstance(prompt, str):
161
+ batch_size = 1
162
+ elif isinstance(prompt, list):
163
+ batch_size = len(prompt)
164
+ else:
165
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
166
+
167
+ if height % 8 != 0 or width % 8 != 0:
168
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
169
+
170
+ if "|" in prompt:
171
+ prompt = [x.strip() for x in prompt.split("|")]
172
+ print(f"composing {prompt}...")
173
+
174
+ # get prompt text embeddings
175
+ text_input = self.tokenizer(
176
+ prompt,
177
+ padding="max_length",
178
+ max_length=self.tokenizer.model_max_length,
179
+ truncation=True,
180
+ return_tensors="pt",
181
+ )
182
+ text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
183
+
184
+ if not weights:
185
+ # specify weights for prompts (excluding the unconditional score)
186
+ print("using equal weights for all prompts...")
187
+ pos_weights = torch.tensor(
188
+ [1 / (text_embeddings.shape[0] - 1)] * (text_embeddings.shape[0] - 1), device=self.device
189
+ ).reshape(-1, 1, 1, 1)
190
+ neg_weights = torch.tensor([1.0], device=self.device).reshape(-1, 1, 1, 1)
191
+ mask = torch.tensor([False] + [True] * pos_weights.shape[0], dtype=torch.bool)
192
+ else:
193
+ # set prompt weight for each
194
+ num_prompts = len(prompt) if isinstance(prompt, list) else 1
195
+ weights = [float(w.strip()) for w in weights.split("|")]
196
+ if len(weights) < num_prompts:
197
+ weights.append(1.0)
198
+ weights = torch.tensor(weights, device=self.device)
199
+ assert len(weights) == text_embeddings.shape[0], "weights specified are not equal to the number of prompts"
200
+ pos_weights = []
201
+ neg_weights = []
202
+ mask = [] # first one is unconditional score
203
+ for w in weights:
204
+ if w > 0:
205
+ pos_weights.append(w)
206
+ mask.append(True)
207
+ else:
208
+ neg_weights.append(abs(w))
209
+ mask.append(False)
210
+ # normalize the weights
211
+ pos_weights = torch.tensor(pos_weights, device=self.device).reshape(-1, 1, 1, 1)
212
+ pos_weights = pos_weights / pos_weights.sum()
213
+ neg_weights = torch.tensor(neg_weights, device=self.device).reshape(-1, 1, 1, 1)
214
+ neg_weights = neg_weights / neg_weights.sum()
215
+ mask = torch.tensor(mask, device=self.device, dtype=torch.bool)
216
+
217
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
218
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
219
+ # corresponds to doing no classifier free guidance.
220
+ do_classifier_free_guidance = guidance_scale > 1.0
221
+ # get unconditional embeddings for classifier free guidance
222
+ if do_classifier_free_guidance:
223
+ max_length = text_input.input_ids.shape[-1]
224
+
225
+ if torch.all(mask):
226
+ # no negative prompts, so we use empty string as the negative prompt
227
+ uncond_input = self.tokenizer(
228
+ [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
229
+ )
230
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
231
+
232
+ # For classifier free guidance, we need to do two forward passes.
233
+ # Here we concatenate the unconditional and text embeddings into a single batch
234
+ # to avoid doing two forward passes
235
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
236
+
237
+ # update negative weights
238
+ neg_weights = torch.tensor([1.0], device=self.device)
239
+ mask = torch.tensor([False] + mask.detach().tolist(), device=self.device, dtype=torch.bool)
240
+
241
+ # get the initial random noise unless the user supplied it
242
+
243
+ # Unlike in other pipelines, latents need to be generated in the target device
244
+ # for 1-to-1 results reproducibility with the CompVis implementation.
245
+ # However this currently doesn't work in `mps`.
246
+ latents_device = "cpu" if self.device.type == "mps" else self.device
247
+ latents_shape = (batch_size, self.unet.in_channels, height // 8, width // 8)
248
+ if latents is None:
249
+ latents = torch.randn(
250
+ latents_shape,
251
+ generator=generator,
252
+ device=latents_device,
253
+ )
254
+ else:
255
+ if latents.shape != latents_shape:
256
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
257
+ latents = latents.to(self.device)
258
+
259
+ # set timesteps
260
+ accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
261
+ extra_set_kwargs = {}
262
+ if accepts_offset:
263
+ extra_set_kwargs["offset"] = 1
264
+
265
+ self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
266
+
267
+ # if we use LMSDiscreteScheduler, let's make sure latents are multiplied by sigmas
268
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
269
+ latents = latents * self.scheduler.sigmas[0]
270
+
271
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
272
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
273
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
274
+ # and should be between [0, 1]
275
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
276
+ extra_step_kwargs = {}
277
+ if accepts_eta:
278
+ extra_step_kwargs["eta"] = eta
279
+
280
+ for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)):
281
+ # expand the latents if we are doing classifier free guidance
282
+ latent_model_input = (
283
+ torch.cat([latents] * text_embeddings.shape[0]) if do_classifier_free_guidance else latents
284
+ )
285
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
286
+ sigma = self.scheduler.sigmas[i]
287
+ # the model input needs to be scaled to match the continuous ODE formulation in K-LMS
288
+ latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)
289
+
290
+ # reduce memory by predicting each score sequentially
291
+ noise_preds = []
292
+ # predict the noise residual
293
+ for latent_in, text_embedding_in in zip(
294
+ torch.chunk(latent_model_input, chunks=latent_model_input.shape[0], dim=0),
295
+ torch.chunk(text_embeddings, chunks=text_embeddings.shape[0], dim=0),
296
+ ):
297
+ noise_preds.append(self.unet(latent_in, t, encoder_hidden_states=text_embedding_in).sample)
298
+ noise_preds = torch.cat(noise_preds, dim=0)
299
+
300
+ # perform guidance
301
+ if do_classifier_free_guidance:
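 + # compose the per-prompt scores: negatively-weighted predictions form the unconditional
 + # term, positively-weighted predictions form the conditional term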
302
+ noise_pred_uncond = (noise_preds[~mask] * neg_weights).sum(dim=0, keepdims=True)
303
+ noise_pred_text = (noise_preds[mask] * pos_weights).sum(dim=0, keepdims=True)
304
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
305
+
306
+ # compute the previous noisy sample x_t -> x_t-1
307
+ if isinstance(self.scheduler, LMSDiscreteScheduler):
308
+ latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs).prev_sample
309
+ else:
310
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
311
+
312
+ # scale and decode the image latents with vae
313
+ latents = 1 / 0.18215 * latents
314
+ image = self.vae.decode(latents).sample
315
+
316
+ image = (image / 2 + 0.5).clamp(0, 1)
317
+ image = image.cpu().permute(0, 2, 3, 1).numpy()
318
+
319
+ # run safety checker
320
+ safety_cheker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
321
+ image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_cheker_input.pixel_values)
322
+
323
+ if output_type == "pil":
324
+ image = self.numpy_to_pil(image)
325
+
326
+ if not return_dict:
327
+ return (image, has_nsfw_concept)
328
+
329
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
v0.7.0/imagic_stable_diffusion.py ADDED
@@ -0,0 +1,476 @@
1
+ """
2
+ modeled after the textual_inversion.py / train_dreambooth.py and the work
3
+ of justinpinkney here: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
4
+ """
5
+ import inspect
6
+ import warnings
7
+ from typing import List, Optional, Union
8
+
9
+ import numpy as np
10
+ import torch
11
+ import torch.nn.functional as F
12
+
13
+ import PIL
14
+ from accelerate import Accelerator
15
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
16
+ from diffusers.pipeline_utils import DiffusionPipeline
17
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
18
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
19
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
20
+ from diffusers.utils import logging
21
+ from tqdm.auto import tqdm
22
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
23
+
24
+
25
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
26
+
27
+
28
+ def preprocess(image):
29
+ w, h = image.size
30
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
31
+ image = image.resize((w, h), resample=PIL.Image.LANCZOS)
32
+ image = np.array(image).astype(np.float32) / 255.0
33
+ image = image[None].transpose(0, 3, 1, 2)
34
+ image = torch.from_numpy(image)
35
+ return 2.0 * image - 1.0
36
+
37
+
38
+ class ImagicStableDiffusionPipeline(DiffusionPipeline):
39
+ r"""
40
+ Pipeline for imagic image editing.
41
+ See paper here: https://arxiv.org/pdf/2210.09276.pdf
42
+
43
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
44
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
45
+ Args:
46
+ vae ([`AutoencoderKL`]):
47
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
48
+ text_encoder ([`CLIPTextModel`]):
49
+ Frozen text-encoder. Stable Diffusion uses the text portion of
50
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
51
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
52
+ tokenizer (`CLIPTokenizer`):
53
+ Tokenizer of class
54
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
55
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
56
+ scheduler ([`SchedulerMixin`]):
57
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
58
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
59
+ safety_checker ([`StableDiffusionSafetyChecker`]):
60
+ Classification module that estimates whether generated images could be considered offensive or harmful.
61
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
62
+ feature_extractor ([`CLIPFeatureExtractor`]):
63
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
64
+ """
65
+
66
+ def __init__(
67
+ self,
68
+ vae: AutoencoderKL,
69
+ text_encoder: CLIPTextModel,
70
+ tokenizer: CLIPTokenizer,
71
+ unet: UNet2DConditionModel,
72
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
73
+ safety_checker: StableDiffusionSafetyChecker,
74
+ feature_extractor: CLIPFeatureExtractor,
75
+ ):
76
+ super().__init__()
77
+ self.register_modules(
78
+ vae=vae,
79
+ text_encoder=text_encoder,
80
+ tokenizer=tokenizer,
81
+ unet=unet,
82
+ scheduler=scheduler,
83
+ safety_checker=safety_checker,
84
+ feature_extractor=feature_extractor,
85
+ )
86
+
87
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
88
+ r"""
89
+ Enable sliced attention computation.
90
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
91
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
92
+ Args:
93
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
94
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
95
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
96
+ `attention_head_dim` must be a multiple of `slice_size`.
97
+ """
98
+ if slice_size == "auto":
99
+ # half the attention head size is usually a good trade-off between
100
+ # speed and memory
101
+ slice_size = self.unet.config.attention_head_dim // 2
102
+ self.unet.set_attention_slice(slice_size)
103
+
104
+ def disable_attention_slicing(self):
105
+ r"""
106
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
107
+ back to computing attention in one step.
108
+ """
109
+ # set slice_size = `None` to disable `attention slicing`
110
+ self.enable_attention_slicing(None)
111
+
112
+ def train(
113
+ self,
114
+ prompt: Union[str, List[str]],
115
+ init_image: Union[torch.FloatTensor, PIL.Image.Image],
116
+ height: Optional[int] = 512,
117
+ width: Optional[int] = 512,
118
+ generator: Optional[torch.Generator] = None,
119
+ embedding_learning_rate: float = 0.001,
120
+ diffusion_model_learning_rate: float = 2e-6,
121
+ text_embedding_optimization_steps: int = 500,
122
+ model_fine_tuning_optimization_steps: int = 1000,
123
+ **kwargs,
124
+ ):
125
+ r"""
126
+ Fine-tunes the text embedding and then the diffusion model (UNet) to reconstruct `init_image` for the
+ given `prompt`, following the Imagic procedure. Must be called before `__call__`.
+ Args:
+ prompt (`str` or `List[str]`):
+ The target text prompt describing the desired edit.
+ init_image (`torch.FloatTensor` or `PIL.Image.Image`):
+ The image to be edited; it is encoded to latents and used as the reconstruction target.
+ height (`int`, *optional*, defaults to 512):
+ The height in pixels of the generated image.
+ width (`int`, *optional*, defaults to 512):
+ The width in pixels of the generated image.
+ generator (`torch.Generator`, *optional*):
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
+ deterministic.
+ embedding_learning_rate (`float`, *optional*, defaults to 0.001):
+ Learning rate used while optimizing the text embedding.
+ diffusion_model_learning_rate (`float`, *optional*, defaults to 2e-6):
+ Learning rate used while fine-tuning the UNet.
+ text_embedding_optimization_steps (`int`, *optional*, defaults to 500):
+ Number of optimization steps for the text embedding.
+ model_fine_tuning_optimization_steps (`int`, *optional*, defaults to 1000):
+ Number of fine-tuning steps for the UNet.
+ Returns:
+ `None`. The optimized and original text embeddings are stored on the pipeline as `self.text_embeddings`
+ and `self.text_embeddings_orig` for later use in `__call__`.
165
+ """
166
+ accelerator = Accelerator(
167
+ gradient_accumulation_steps=1,
168
+ mixed_precision="fp16",
169
+ )
170
+
171
+ if "torch_device" in kwargs:
172
+ device = kwargs.pop("torch_device")
173
+ warnings.warn(
174
+ "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
175
+ " Consider using `pipe.to(torch_device)` instead."
176
+ )
177
+
178
+ if device is None:
179
+ device = "cuda" if torch.cuda.is_available() else "cpu"
180
+ self.to(device)
181
+
182
+ if height % 8 != 0 or width % 8 != 0:
183
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
184
+
185
+ # Freeze vae and unet
186
+ self.vae.requires_grad_(False)
187
+ self.unet.requires_grad_(False)
188
+ self.text_encoder.requires_grad_(False)
189
+ self.unet.eval()
190
+ self.vae.eval()
191
+ self.text_encoder.eval()
192
+
193
+ if accelerator.is_main_process:
194
+ accelerator.init_trackers(
195
+ "imagic",
196
+ config={
197
+ "embedding_learning_rate": embedding_learning_rate,
198
+ "text_embedding_optimization_steps": text_embedding_optimization_steps,
199
+ },
200
+ )
201
+
202
+ # get text embeddings for prompt
203
+ text_input = self.tokenizer(
204
+ prompt,
205
+ padding="max_length",
206
+ max_length=self.tokenizer.model_max_length,
207
+ truncation=True,
208
+ return_tensors="pt",
209
+ )
210
+ text_embeddings = torch.nn.Parameter(
211
+ self.text_encoder(text_input.input_ids.to(self.device))[0], requires_grad=True
212
+ )
213
+ text_embeddings = text_embeddings.detach()
214
+ text_embeddings.requires_grad_()
215
+ text_embeddings_orig = text_embeddings.clone()
216
+
217
+ # Initialize the optimizer
218
+ optimizer = torch.optim.Adam(
219
+ [text_embeddings], # only optimize the embeddings
220
+ lr=embedding_learning_rate,
221
+ )
222
+
223
+ if isinstance(init_image, PIL.Image.Image):
224
+ init_image = preprocess(init_image)
225
+
226
+ latents_dtype = text_embeddings.dtype
227
+ init_image = init_image.to(device=self.device, dtype=latents_dtype)
228
+ init_latent_image_dist = self.vae.encode(init_image).latent_dist
229
+ init_image_latents = init_latent_image_dist.sample(generator=generator)
230
+ init_image_latents = 0.18215 * init_image_latents
231
+
232
+ progress_bar = tqdm(range(text_embedding_optimization_steps), disable=not accelerator.is_local_main_process)
233
+ progress_bar.set_description("Steps")
234
+
235
+ global_step = 0
236
+
237
+ logger.info("First optimizing the text embedding to better reconstruct the init image")
238
+ for _ in range(text_embedding_optimization_steps):
239
+ with accelerator.accumulate(text_embeddings):
240
+ # Sample noise that we'll add to the latents
241
+ noise = torch.randn(init_image_latents.shape).to(init_image_latents.device)
242
+ timesteps = torch.randint(1000, (1,), device=init_image_latents.device)
243
+
244
+ # Add noise to the latents according to the noise magnitude at each timestep
245
+ # (this is the forward diffusion process)
246
+ noisy_latents = self.scheduler.add_noise(init_image_latents, noise, timesteps)
247
+
248
+ # Predict the noise residual
249
+ noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
250
+
251
+ loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
252
+ accelerator.backward(loss)
253
+
254
+ optimizer.step()
255
+ optimizer.zero_grad()
256
+
257
+ # Checks if the accelerator has performed an optimization step behind the scenes
258
+ if accelerator.sync_gradients:
259
+ progress_bar.update(1)
260
+ global_step += 1
261
+
262
+ logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
263
+ progress_bar.set_postfix(**logs)
264
+ accelerator.log(logs, step=global_step)
265
+
266
+ accelerator.wait_for_everyone()
267
+
268
+ text_embeddings.requires_grad_(False)
269
+
270
+ # Now we fine tune the unet to better reconstruct the image
271
+ self.unet.requires_grad_(True)
272
+ self.unet.train()
273
+ optimizer = torch.optim.Adam(
274
+ self.unet.parameters(), # only optimize unet
275
+ lr=diffusion_model_learning_rate,
276
+ )
277
+ progress_bar = tqdm(range(model_fine_tuning_optimization_steps), disable=not accelerator.is_local_main_process)
278
+
279
+ logger.info("Next fine tuning the entire model to better reconstruct the init image")
280
+ for _ in range(model_fine_tuning_optimization_steps):
281
+ with accelerator.accumulate(self.unet.parameters()):
282
+ # Sample noise that we'll add to the latents
283
+ noise = torch.randn(init_image_latents.shape).to(init_image_latents.device)
284
+ timesteps = torch.randint(1000, (1,), device=init_image_latents.device)
285
+
286
+ # Add noise to the latents according to the noise magnitude at each timestep
287
+ # (this is the forward diffusion process)
288
+ noisy_latents = self.scheduler.add_noise(init_image_latents, noise, timesteps)
289
+
290
+ # Predict the noise residual
291
+ noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
292
+
293
+ loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
294
+ accelerator.backward(loss)
295
+
296
+ optimizer.step()
297
+ optimizer.zero_grad()
298
+
299
+ # Checks if the accelerator has performed an optimization step behind the scenes
300
+ if accelerator.sync_gradients:
301
+ progress_bar.update(1)
302
+ global_step += 1
303
+
304
+ logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
305
+ progress_bar.set_postfix(**logs)
306
+ accelerator.log(logs, step=global_step)
307
+
308
+ accelerator.wait_for_everyone()
309
+ self.text_embeddings_orig = text_embeddings_orig
310
+ self.text_embeddings = text_embeddings
311
+
312
+ @torch.no_grad()
313
+ def __call__(
314
+ self,
315
+ alpha: float = 1.2,
316
+ height: Optional[int] = 512,
317
+ width: Optional[int] = 512,
318
+ num_inference_steps: Optional[int] = 50,
319
+ generator: Optional[torch.Generator] = None,
320
+ output_type: Optional[str] = "pil",
321
+ return_dict: bool = True,
322
+ guidance_scale: float = 7.5,
323
+ eta: float = 0.0,
324
+ **kwargs,
325
+ ):
326
+ r"""
327
+ Function invoked when calling the pipeline for generation.
328
+ Args:
329
+ alpha (`float`, *optional*, defaults to 1.2):
330
+ The interpolation factor between the original and the optimized text embeddings; the conditioning used for generation is `alpha * text_embeddings_orig + (1 - alpha) * text_embeddings` from `train()`.
331
+ height (`int`, *optional*, defaults to 512):
332
+ The height in pixels of the generated image.
333
+ width (`int`, *optional*, defaults to 512):
334
+ The width in pixels of the generated image.
335
+ num_inference_steps (`int`, *optional*, defaults to 50):
336
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
337
+ expense of slower inference.
338
+ guidance_scale (`float`, *optional*, defaults to 7.5):
339
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
340
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
341
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
342
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
343
+ usually at the expense of lower image quality.
344
+ eta (`float`, *optional*, defaults to 0.0):
345
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
346
+ [`schedulers.DDIMScheduler`], will be ignored for others.
347
+ generator (`torch.Generator`, *optional*):
348
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
349
+ deterministic.
350
+ latents (`torch.FloatTensor`, *optional*):
351
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
352
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
353
+ tensor will be generated by sampling using the supplied random `generator`.
354
+ output_type (`str`, *optional*, defaults to `"pil"`):
355
+ The output format of the generate image. Choose between
356
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
357
+ return_dict (`bool`, *optional*, defaults to `True`):
358
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
359
+ plain tuple.
360
+ Returns:
361
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
362
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
363
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
364
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
365
+ (nsfw) content, according to the `safety_checker`.
366
+ """
367
+ if height % 8 != 0 or width % 8 != 0:
368
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
369
+ if getattr(self, "text_embeddings", None) is None:
370
+ raise ValueError("Please run the pipe.train() before trying to generate an image.")
371
+ if getattr(self, "text_embeddings_orig", None) is None:
372
+ raise ValueError("Please run the pipe.train() before trying to generate an image.")
373
+
374
+ text_embeddings = alpha * self.text_embeddings_orig + (1 - alpha) * self.text_embeddings
375
+
376
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
377
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
378
+ # corresponds to doing no classifier free guidance.
379
+ do_classifier_free_guidance = guidance_scale > 1.0
380
+ # get unconditional embeddings for classifier free guidance
381
+ if do_classifier_free_guidance:
382
+ uncond_tokens = [""]
383
+ max_length = self.tokenizer.model_max_length
384
+ uncond_input = self.tokenizer(
385
+ uncond_tokens,
386
+ padding="max_length",
387
+ max_length=max_length,
388
+ truncation=True,
389
+ return_tensors="pt",
390
+ )
391
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
392
+
393
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
394
+ seq_len = uncond_embeddings.shape[1]
395
+ uncond_embeddings = uncond_embeddings.view(1, seq_len, -1)
396
+
397
+ # For classifier free guidance, we need to do two forward passes.
398
+ # Here we concatenate the unconditional and text embeddings into a single batch
399
+ # to avoid doing two forward passes
400
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
401
+
402
+ # get the initial random noise unless the user supplied it
403
+
404
+ # Unlike in other pipelines, latents need to be generated in the target device
405
+ # for 1-to-1 results reproducibility with the CompVis implementation.
406
+ # However this currently doesn't work in `mps`.
407
+ latents_shape = (1, self.unet.in_channels, height // 8, width // 8)
408
+ latents_dtype = text_embeddings.dtype
409
+ if self.device.type == "mps":
410
+ # randn does not work reproducibly on mps
411
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
412
+ self.device
413
+ )
414
+ else:
415
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
416
+
417
+ # set timesteps
418
+ self.scheduler.set_timesteps(num_inference_steps)
419
+
420
+ # Some schedulers like PNDM have timesteps as arrays
421
+ # It's more optimized to move all timesteps to correct device beforehand
422
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
423
+
424
+ # scale the initial noise by the standard deviation required by the scheduler
425
+ latents = latents * self.scheduler.init_noise_sigma
426
+
427
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
428
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
429
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
430
+ # and should be between [0, 1]
431
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
432
+ extra_step_kwargs = {}
433
+ if accepts_eta:
434
+ extra_step_kwargs["eta"] = eta
435
+
436
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
437
+ # expand the latents if we are doing classifier free guidance
438
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
439
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
440
+
441
+ # predict the noise residual
442
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
443
+
444
+ # perform guidance
445
+ if do_classifier_free_guidance:
446
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
447
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
448
+
449
+ # compute the previous noisy sample x_t -> x_t-1
450
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
451
+
452
+ latents = 1 / 0.18215 * latents
453
+ image = self.vae.decode(latents).sample
454
+
455
+ image = (image / 2 + 0.5).clamp(0, 1)
456
+
457
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
458
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
459
+
460
+ if self.safety_checker is not None:
461
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
462
+ self.device
463
+ )
464
+ image, has_nsfw_concept = self.safety_checker(
465
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
466
+ )
467
+ else:
468
+ has_nsfw_concept = None
469
+
470
+ if output_type == "pil":
471
+ image = self.numpy_to_pil(image)
472
+
473
+ if not return_dict:
474
+ return (image, has_nsfw_concept)
475
+
476
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
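+ # Minimal usage sketch for this pipeline; the model id, prompt, and image path below are illustrative
+ # placeholders, not part of the upstream example:
+ #
+ #   from PIL import Image
+ #   from diffusers import DiffusionPipeline
+ #
+ #   pipe = DiffusionPipeline.from_pretrained(
+ #       "CompVis/stable-diffusion-v1-4",
+ #       custom_pipeline="imagic_stable_diffusion",
+ #   ).to("cuda")
+ #   init_image = Image.open("bird.png").convert("RGB")
+ #   pipe.train("A photo of a bird spreading its wings", init_image=init_image)
+ #   edited = pipe(alpha=1.2, guidance_scale=7.5, num_inference_steps=50).images[0]
+ #   edited.save("bird_edited.png")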
v0.7.0/interpolate_stable_diffusion.py ADDED
@@ -0,0 +1,524 @@
1
+ import inspect
2
+ import time
3
+ from pathlib import Path
4
+ from typing import Callable, List, Optional, Union
5
+
6
+ import numpy as np
7
+ import torch
8
+
9
+ from diffusers.configuration_utils import FrozenDict
10
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
11
+ from diffusers.pipeline_utils import DiffusionPipeline
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
13
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
14
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
15
+ from diffusers.utils import deprecate, logging
16
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
17
+
18
+
19
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
20
+
21
+
22
+ def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
23
+ """helper function to spherically interpolate two arrays v1 v2"""
24
+
25
+ if not isinstance(v0, np.ndarray):
26
+ inputs_are_torch = True
27
+ input_device = v0.device
28
+ v0 = v0.cpu().numpy()
29
+ v1 = v1.cpu().numpy()
30
+
31
+ dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
32
+ if np.abs(dot) > DOT_THRESHOLD:
33
+ v2 = (1 - t) * v0 + t * v1
34
+ else:
35
+ theta_0 = np.arccos(dot)
36
+ sin_theta_0 = np.sin(theta_0)
37
+ theta_t = theta_0 * t
38
+ sin_theta_t = np.sin(theta_t)
39
+ s0 = np.sin(theta_0 - theta_t) / sin_theta_0
40
+ s1 = sin_theta_t / sin_theta_0
41
+ v2 = s0 * v0 + s1 * v1
42
+
43
+ if inputs_are_torch:
44
+ v2 = torch.from_numpy(v2).to(input_device)
45
+
46
+ return v2
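+ # Quick numerical check of the helper above (illustrative, not part of the original file):
+ #
+ #   v0, v1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # orthogonal unit vectors, theta_0 = pi/2
+ #   slerp(0.5, v0, v1)                                      # -> array([0.7071..., 0.7071...])
+ #
+ # i.e. the midpoint along the unit circle rather than along the straight chord [0.5, 0.5].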
47
+
48
+
49
+ class StableDiffusionWalkPipeline(DiffusionPipeline):
50
+ r"""
51
+ Pipeline for text-to-image generation using Stable Diffusion.
52
+
53
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
54
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
55
+
56
+ Args:
57
+ vae ([`AutoencoderKL`]):
58
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
59
+ text_encoder ([`CLIPTextModel`]):
60
+ Frozen text-encoder. Stable Diffusion uses the text portion of
61
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
62
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
63
+ tokenizer (`CLIPTokenizer`):
64
+ Tokenizer of class
65
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
66
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
67
+ scheduler ([`SchedulerMixin`]):
68
+ A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of
69
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
70
+ safety_checker ([`StableDiffusionSafetyChecker`]):
71
+ Classification module that estimates whether generated images could be considered offensive or harmful.
72
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
73
+ feature_extractor ([`CLIPFeatureExtractor`]):
74
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
75
+ """
76
+
77
+ def __init__(
78
+ self,
79
+ vae: AutoencoderKL,
80
+ text_encoder: CLIPTextModel,
81
+ tokenizer: CLIPTokenizer,
82
+ unet: UNet2DConditionModel,
83
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
84
+ safety_checker: StableDiffusionSafetyChecker,
85
+ feature_extractor: CLIPFeatureExtractor,
86
+ ):
87
+ super().__init__()
88
+
89
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
90
+ deprecation_message = (
91
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
92
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
93
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
94
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
95
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
96
+ " file"
97
+ )
98
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
99
+ new_config = dict(scheduler.config)
100
+ new_config["steps_offset"] = 1
101
+ scheduler._internal_dict = FrozenDict(new_config)
102
+
103
+ if safety_checker is None:
104
+ logger.warn(
105
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
106
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
107
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
108
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
109
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
110
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
111
+ )
112
+
113
+ self.register_modules(
114
+ vae=vae,
115
+ text_encoder=text_encoder,
116
+ tokenizer=tokenizer,
117
+ unet=unet,
118
+ scheduler=scheduler,
119
+ safety_checker=safety_checker,
120
+ feature_extractor=feature_extractor,
121
+ )
122
+
123
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
124
+ r"""
125
+ Enable sliced attention computation.
126
+
127
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
128
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
129
+
130
+ Args:
131
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
132
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
133
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
134
+ `attention_head_dim` must be a multiple of `slice_size`.
135
+ """
136
+ if slice_size == "auto":
137
+ # half the attention head size is usually a good trade-off between
138
+ # speed and memory
139
+ slice_size = self.unet.config.attention_head_dim // 2
140
+ self.unet.set_attention_slice(slice_size)
141
+
142
+ def disable_attention_slicing(self):
143
+ r"""
144
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
145
+ back to computing attention in one step.
146
+ """
147
+ # set slice_size = `None` to disable `attention slicing`
148
+ self.enable_attention_slicing(None)
149
+
150
+ @torch.no_grad()
151
+ def __call__(
152
+ self,
153
+ prompt: Optional[Union[str, List[str]]] = None,
154
+ height: int = 512,
155
+ width: int = 512,
156
+ num_inference_steps: int = 50,
157
+ guidance_scale: float = 7.5,
158
+ negative_prompt: Optional[Union[str, List[str]]] = None,
159
+ num_images_per_prompt: Optional[int] = 1,
160
+ eta: float = 0.0,
161
+ generator: Optional[torch.Generator] = None,
162
+ latents: Optional[torch.FloatTensor] = None,
163
+ output_type: Optional[str] = "pil",
164
+ return_dict: bool = True,
165
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
166
+ callback_steps: Optional[int] = 1,
167
+ text_embeddings: Optional[torch.FloatTensor] = None,
168
+ **kwargs,
169
+ ):
170
+ r"""
171
+ Function invoked when calling the pipeline for generation.
172
+
173
+ Args:
174
+ prompt (`str` or `List[str]`, *optional*, defaults to `None`):
175
+ The prompt or prompts to guide the image generation. If not provided, `text_embeddings` is required.
176
+ height (`int`, *optional*, defaults to 512):
177
+ The height in pixels of the generated image.
178
+ width (`int`, *optional*, defaults to 512):
179
+ The width in pixels of the generated image.
180
+ num_inference_steps (`int`, *optional*, defaults to 50):
181
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
182
+ expense of slower inference.
183
+ guidance_scale (`float`, *optional*, defaults to 7.5):
184
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
185
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
186
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
187
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
188
+ usually at the expense of lower image quality.
189
+ negative_prompt (`str` or `List[str]`, *optional*):
190
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
191
+ if `guidance_scale` is less than `1`).
192
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
193
+ The number of images to generate per prompt.
194
+ eta (`float`, *optional*, defaults to 0.0):
195
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
196
+ [`schedulers.DDIMScheduler`], will be ignored for others.
197
+ generator (`torch.Generator`, *optional*):
198
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
199
+ deterministic.
200
+ latents (`torch.FloatTensor`, *optional*):
201
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
202
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
203
+ tensor will ge generated by sampling using the supplied random `generator`.
204
+ output_type (`str`, *optional*, defaults to `"pil"`):
205
+ The output format of the generate image. Choose between
206
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
207
+ return_dict (`bool`, *optional*, defaults to `True`):
208
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
209
+ plain tuple.
210
+ callback (`Callable`, *optional*):
211
+ A function that will be called every `callback_steps` steps during inference. The function will be
212
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
213
+ callback_steps (`int`, *optional*, defaults to 1):
214
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
215
+ called at every step.
216
+ text_embeddings (`torch.FloatTensor`, *optional*, defaults to `None`):
217
+ Pre-generated text embeddings to be used as inputs for image generation. Can be used in place of
218
+ `prompt` to avoid re-computing the embeddings. If not provided, the embeddings will be generated from
219
+ the supplied `prompt`.
220
+
221
+ Returns:
222
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
223
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
224
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
225
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
226
+ (nsfw) content, according to the `safety_checker`.
227
+ """
228
+
229
+ if height % 8 != 0 or width % 8 != 0:
230
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
231
+
232
+ if (callback_steps is None) or (
233
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
234
+ ):
235
+ raise ValueError(
236
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
237
+ f" {type(callback_steps)}."
238
+ )
239
+
240
+ if text_embeddings is None:
241
+ if isinstance(prompt, str):
242
+ batch_size = 1
243
+ elif isinstance(prompt, list):
244
+ batch_size = len(prompt)
245
+ else:
246
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
247
+
248
+ # get prompt text embeddings
249
+ text_inputs = self.tokenizer(
250
+ prompt,
251
+ padding="max_length",
252
+ max_length=self.tokenizer.model_max_length,
253
+ return_tensors="pt",
254
+ )
255
+ text_input_ids = text_inputs.input_ids
256
+
257
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
258
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
259
+ print(
260
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
261
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
262
+ )
263
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
264
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
265
+ else:
266
+ batch_size = text_embeddings.shape[0]
267
+
268
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
269
+ bs_embed, seq_len, _ = text_embeddings.shape
270
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
271
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
272
+
273
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
274
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
275
+ # corresponds to doing no classifier free guidance.
276
+ do_classifier_free_guidance = guidance_scale > 1.0
277
+ # get unconditional embeddings for classifier free guidance
278
+ if do_classifier_free_guidance:
279
+ uncond_tokens: List[str]
280
+ if negative_prompt is None:
281
+ uncond_tokens = [""] * batch_size
282
+ elif type(prompt) is not type(negative_prompt):
283
+ raise TypeError(
284
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
285
+ f" {type(prompt)}."
286
+ )
287
+ elif isinstance(negative_prompt, str):
288
+ uncond_tokens = [negative_prompt]
289
+ elif batch_size != len(negative_prompt):
290
+ raise ValueError(
291
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
292
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
293
+ " the batch size of `prompt`."
294
+ )
295
+ else:
296
+ uncond_tokens = negative_prompt
297
+
298
+ max_length = self.tokenizer.model_max_length
299
+ uncond_input = self.tokenizer(
300
+ uncond_tokens,
301
+ padding="max_length",
302
+ max_length=max_length,
303
+ truncation=True,
304
+ return_tensors="pt",
305
+ )
306
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
307
+
308
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
309
+ seq_len = uncond_embeddings.shape[1]
310
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
311
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
312
+
313
+ # For classifier free guidance, we need to do two forward passes.
314
+ # Here we concatenate the unconditional and text embeddings into a single batch
315
+ # to avoid doing two forward passes
316
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
317
+
318
+ # get the initial random noise unless the user supplied it
319
+
320
+ # Unlike in other pipelines, latents need to be generated in the target device
321
+ # for 1-to-1 results reproducibility with the CompVis implementation.
322
+ # However this currently doesn't work in `mps`.
323
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
324
+ latents_dtype = text_embeddings.dtype
325
+ if latents is None:
326
+ if self.device.type == "mps":
327
+ # randn does not work reproducibly on mps
328
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
329
+ self.device
330
+ )
331
+ else:
332
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
333
+ else:
334
+ if latents.shape != latents_shape:
335
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
336
+ latents = latents.to(self.device)
337
+
338
+ # set timesteps
339
+ self.scheduler.set_timesteps(num_inference_steps)
340
+
341
+ # Some schedulers like PNDM have timesteps as arrays
342
+ # It's more optimized to move all timesteps to correct device beforehand
343
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
344
+
345
+ # scale the initial noise by the standard deviation required by the scheduler
346
+ latents = latents * self.scheduler.init_noise_sigma
347
+
348
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
349
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
350
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
351
+ # and should be between [0, 1]
352
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
353
+ extra_step_kwargs = {}
354
+ if accepts_eta:
355
+ extra_step_kwargs["eta"] = eta
356
+
357
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
358
+ # expand the latents if we are doing classifier free guidance
359
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
360
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
361
+
362
+ # predict the noise residual
363
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
364
+
365
+ # perform guidance
366
+ if do_classifier_free_guidance:
367
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
368
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
369
+
370
+ # compute the previous noisy sample x_t -> x_t-1
371
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
372
+
373
+ # call the callback, if provided
374
+ if callback is not None and i % callback_steps == 0:
375
+ callback(i, t, latents)
376
+
377
+ latents = 1 / 0.18215 * latents
378
+ image = self.vae.decode(latents).sample
379
+
380
+ image = (image / 2 + 0.5).clamp(0, 1)
381
+
382
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16
383
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
384
+
385
+ if self.safety_checker is not None:
386
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
387
+ self.device
388
+ )
389
+ image, has_nsfw_concept = self.safety_checker(
390
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
391
+ )
392
+ else:
393
+ has_nsfw_concept = None
394
+
395
+ if output_type == "pil":
396
+ image = self.numpy_to_pil(image)
397
+
398
+ if not return_dict:
399
+ return (image, has_nsfw_concept)
400
+
401
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
402
+
403
+ def embed_text(self, text):
404
+ """takes in text and turns it into text embeddings"""
405
+ text_input = self.tokenizer(
406
+ text,
407
+ padding="max_length",
408
+ max_length=self.tokenizer.model_max_length,
409
+ truncation=True,
410
+ return_tensors="pt",
411
+ )
412
+ with torch.no_grad():
413
+ embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
414
+ return embed
415
+
416
+ def get_noise(self, seed, dtype=torch.float32, height=512, width=512):
417
+ """Takes in random seed and returns corresponding noise vector"""
418
+ return torch.randn(
419
+ (1, self.unet.in_channels, height // 8, width // 8),
420
+ generator=torch.Generator(device=self.device).manual_seed(seed),
421
+ device=self.device,
422
+ dtype=dtype,
423
+ )
424
+
425
+ def walk(
426
+ self,
427
+ prompts: List[str],
428
+ seeds: List[int],
429
+ num_interpolation_steps: Optional[int] = 6,
430
+ output_dir: Optional[str] = "./dreams",
431
+ name: Optional[str] = None,
432
+ batch_size: Optional[int] = 1,
433
+ height: Optional[int] = 512,
434
+ width: Optional[int] = 512,
435
+ guidance_scale: Optional[float] = 7.5,
436
+ num_inference_steps: Optional[int] = 50,
437
+ eta: Optional[float] = 0.0,
438
+ ) -> List[str]:
439
+ """
440
+ Walks through a series of prompts and seeds, interpolating between them and saving the results to disk.
441
+
442
+ Args:
443
+ prompts (`List[str]`):
444
+ List of prompts to generate images for.
445
+ seeds (`List[int]`):
446
+ List of seeds corresponding to provided prompts. Must be the same length as prompts.
447
+ num_interpolation_steps (`int`, *optional*, defaults to 6):
448
+ Number of interpolation steps to take between prompts.
449
+ output_dir (`str`, *optional*, defaults to `./dreams`):
450
+ Directory to save the generated images to.
451
+ name (`str`, *optional*, defaults to `None`):
452
+ Subdirectory of `output_dir` to save the generated images to. If `None`, the name will
453
+ be the current time.
454
+ batch_size (`int`, *optional*, defaults to 1):
455
+ Number of images to generate at once.
456
+ height (`int`, *optional*, defaults to 512):
457
+ Height of the generated images.
458
+ width (`int`, *optional*, defaults to 512):
459
+ Width of the generated images.
460
+ guidance_scale (`float`, *optional*, defaults to 7.5):
461
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
462
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
463
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
464
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
465
+ usually at the expense of lower image quality.
466
+ num_inference_steps (`int`, *optional*, defaults to 50):
467
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
468
+ expense of slower inference.
469
+ eta (`float`, *optional*, defaults to 0.0):
470
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
471
+ [`schedulers.DDIMScheduler`], will be ignored for others.
472
+
473
+ Returns:
474
+ `List[str]`: List of paths to the generated images.
475
+ """
476
+ if not len(prompts) == len(seeds):
477
+ raise ValueError(
478
+ f"Number of prompts and seeds must be equalGot {len(prompts)} prompts and {len(seeds)} seeds"
479
+ )
480
+
481
+ name = name or time.strftime("%Y%m%d-%H%M%S")
482
+ save_path = Path(output_dir) / name
483
+ save_path.mkdir(exist_ok=True, parents=True)
484
+
485
+ frame_idx = 0
486
+ frame_filepaths = []
487
+ for prompt_a, prompt_b, seed_a, seed_b in zip(prompts, prompts[1:], seeds, seeds[1:]):
488
+ # Embed Text
489
+ embed_a = self.embed_text(prompt_a)
490
+ embed_b = self.embed_text(prompt_b)
491
+
492
+ # Get Noise
493
+ noise_dtype = embed_a.dtype
494
+ noise_a = self.get_noise(seed_a, noise_dtype, height, width)
495
+ noise_b = self.get_noise(seed_b, noise_dtype, height, width)
496
+
497
+ noise_batch, embeds_batch = None, None
498
+ T = np.linspace(0.0, 1.0, num_interpolation_steps)
499
+ for i, t in enumerate(T):
500
+ noise = slerp(float(t), noise_a, noise_b)
501
+ embed = torch.lerp(embed_a, embed_b, t)
502
+
503
+ noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise], dim=0)
504
+ embeds_batch = embed if embeds_batch is None else torch.cat([embeds_batch, embed], dim=0)
505
+
506
+ batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == T.shape[0]
507
+ if batch_is_ready:
508
+ outputs = self(
509
+ latents=noise_batch,
510
+ text_embeddings=embeds_batch,
511
+ height=height,
512
+ width=width,
513
+ guidance_scale=guidance_scale,
514
+ eta=eta,
515
+ num_inference_steps=num_inference_steps,
516
+ )
517
+ noise_batch, embeds_batch = None, None
518
+
519
+ for image in outputs["images"]:
520
+ frame_filepath = str(save_path / f"frame_{frame_idx:06d}.png")
521
+ image.save(frame_filepath)
522
+ frame_filepaths.append(frame_filepath)
523
+ frame_idx += 1
524
+ return frame_filepaths
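+ # Minimal usage sketch for the interpolation walk; the model id, prompts, and seeds below are
+ # illustrative placeholders:
+ #
+ #   from diffusers import DiffusionPipeline
+ #
+ #   pipe = DiffusionPipeline.from_pretrained(
+ #       "CompVis/stable-diffusion-v1-4",
+ #       custom_pipeline="interpolate_stable_diffusion",
+ #   ).to("cuda")
+ #   frames = pipe.walk(
+ #       prompts=["a photo of a forest in spring", "a photo of a forest in winter"],
+ #       seeds=[42, 1337],
+ #       num_interpolation_steps=16,
+ #       output_dir="./dreams",
+ #   )
+ #   # `frames` lists the saved PNG paths, e.g. ./dreams/<timestamp>/frame_000000.png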
v0.7.0/lpw_stable_diffusion.py ADDED
@@ -0,0 +1,1076 @@
1
+ import inspect
2
+ import re
3
+ from typing import Callable, List, Optional, Union
4
+
5
+ import numpy as np
6
+ import torch
7
+
8
+ import PIL
9
+ from diffusers.configuration_utils import FrozenDict
10
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
11
+ from diffusers.pipeline_utils import DiffusionPipeline
12
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
13
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
14
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
15
+ from diffusers.utils import deprecate, logging
16
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
17
+
18
+
19
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
20
+
21
+ re_attention = re.compile(
22
+ r"""
23
+ \\\(|
24
+ \\\)|
25
+ \\\[|
26
+ \\]|
27
+ \\\\|
28
+ \\|
29
+ \(|
30
+ \[|
31
+ :([+-]?[.\d]+)\)|
32
+ \)|
33
+ ]|
34
+ [^\\()\[\]:]+|
35
+ :
36
+ """,
37
+ re.X,
38
+ )
39
+
40
+
41
+ def parse_prompt_attention(text):
42
+ """
43
+ Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
44
+ Accepted tokens are:
45
+ (abc) - increases attention to abc by a multiplier of 1.1
46
+ (abc:3.12) - increases attention to abc by a multiplier of 3.12
47
+ [abc] - decreases attention to abc by a multiplier of 1.1
48
+ \( - literal character '('
49
+ \[ - literal character '['
50
+ \) - literal character ')'
51
+ \] - literal character ']'
52
+ \\ - literal character '\'
53
+ anything else - just text
54
+ >>> parse_prompt_attention('normal text')
55
+ [['normal text', 1.0]]
56
+ >>> parse_prompt_attention('an (important) word')
57
+ [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
58
+ >>> parse_prompt_attention('(unbalanced')
59
+ [['unbalanced', 1.1]]
60
+ >>> parse_prompt_attention('\(literal\]')
61
+ [['(literal]', 1.0]]
62
+ >>> parse_prompt_attention('(unnecessary)(parens)')
63
+ [['unnecessaryparens', 1.1]]
64
+ >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
65
+ [['a ', 1.0],
66
+ ['house', 1.5730000000000004],
67
+ [' ', 1.1],
68
+ ['on', 1.0],
69
+ [' a ', 1.1],
70
+ ['hill', 0.55],
71
+ [', sun, ', 1.1],
72
+ ['sky', 1.4641000000000006],
73
+ ['.', 1.1]]
74
+ """
75
+
76
+ res = []
77
+ round_brackets = []
78
+ square_brackets = []
79
+
80
+ round_bracket_multiplier = 1.1
81
+ square_bracket_multiplier = 1 / 1.1
82
+
83
+ def multiply_range(start_position, multiplier):
84
+ for p in range(start_position, len(res)):
85
+ res[p][1] *= multiplier
86
+
87
+ for m in re_attention.finditer(text):
88
+ text = m.group(0)
89
+ weight = m.group(1)
90
+
91
+ if text.startswith("\\"):
92
+ res.append([text[1:], 1.0])
93
+ elif text == "(":
94
+ round_brackets.append(len(res))
95
+ elif text == "[":
96
+ square_brackets.append(len(res))
97
+ elif weight is not None and len(round_brackets) > 0:
98
+ multiply_range(round_brackets.pop(), float(weight))
99
+ elif text == ")" and len(round_brackets) > 0:
100
+ multiply_range(round_brackets.pop(), round_bracket_multiplier)
101
+ elif text == "]" and len(square_brackets) > 0:
102
+ multiply_range(square_brackets.pop(), square_bracket_multiplier)
103
+ else:
104
+ res.append([text, 1.0])
105
+
106
+ for pos in round_brackets:
107
+ multiply_range(pos, round_bracket_multiplier)
108
+
109
+ for pos in square_brackets:
110
+ multiply_range(pos, square_bracket_multiplier)
111
+
112
+ if len(res) == 0:
113
+ res = [["", 1.0]]
114
+
115
+ # merge runs of identical weights
116
+ i = 0
117
+ while i + 1 < len(res):
118
+ if res[i][1] == res[i + 1][1]:
119
+ res[i][0] += res[i + 1][0]
120
+ res.pop(i + 1)
121
+ else:
122
+ i += 1
123
+
124
+ return res
125
+
126
+
127
+ def get_prompts_with_weights(pipe: DiffusionPipeline, prompt: List[str], max_length: int):
128
+ r"""
129
+ Tokenize a list of prompts and return its tokens with weights of each token.
130
+
131
+ No padding, starting or ending token is included.
132
+ """
133
+ tokens = []
134
+ weights = []
135
+ truncated = False
136
+ for text in prompt:
137
+ texts_and_weights = parse_prompt_attention(text)
138
+ text_token = []
139
+ text_weight = []
140
+ for word, weight in texts_and_weights:
141
+ # tokenize and discard the starting and the ending token
142
+ token = pipe.tokenizer(word).input_ids[1:-1]
143
+ text_token += token
144
+ # copy the weight by length of token
145
+ text_weight += [weight] * len(token)
146
+ # stop if the text is too long (longer than truncation limit)
147
+ if len(text_token) > max_length:
148
+ truncated = True
149
+ break
150
+ # truncate
151
+ if len(text_token) > max_length:
152
+ truncated = True
153
+ text_token = text_token[:max_length]
154
+ text_weight = text_weight[:max_length]
155
+ tokens.append(text_token)
156
+ weights.append(text_weight)
157
+ if truncated:
158
+ logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
159
+ return tokens, weights
160
+
161
+
162
+ def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, no_boseos_middle=True, chunk_length=77):
163
+ r"""
164
+ Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
165
+ """
166
+ max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
167
+ weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
168
+ for i in range(len(tokens)):
169
+ tokens[i] = [bos] + tokens[i] + [eos] * (max_length - 1 - len(tokens[i]))
170
+ if no_boseos_middle:
171
+ weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
172
+ else:
173
+ w = []
174
+ if len(weights[i]) == 0:
175
+ w = [1.0] * weights_length
176
+ else:
177
+ for j in range(max_embeddings_multiples):
178
+ w.append(1.0) # weight for starting token in this chunk
179
+ w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
180
+ w.append(1.0) # weight for ending token in this chunk
181
+ w += [1.0] * (weights_length - len(w))
182
+ weights[i] = w[:]
183
+
184
+ return tokens, weights
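+ # For instance (illustrative numbers): with max_length = 77 and a 5-token prompt, the padded ids become
+ # [bos] + tokens + [eos] * 71 (77 ids total) and the weights become [1.0] + weights + [1.0] * 71, so the
+ # special tokens and the padding always carry a neutral weight of 1.0.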
185
+
186
+
187
+ def get_unweighted_text_embeddings(
188
+ pipe: DiffusionPipeline,
189
+ text_input: torch.Tensor,
190
+ chunk_length: int,
191
+ no_boseos_middle: Optional[bool] = True,
192
+ ):
193
+ """
194
+ When the tokenized prompt is longer than the capacity of the text encoder,
195
+ it is split into chunks that are sent to the text encoder individually.
196
+ """
197
+ max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
198
+ if max_embeddings_multiples > 1:
199
+ text_embeddings = []
200
+ for i in range(max_embeddings_multiples):
201
+ # extract the i-th chunk
202
+ text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].clone()
203
+
204
+ # cover the head and the tail by the starting and the ending tokens
205
+ text_input_chunk[:, 0] = text_input[0, 0]
206
+ text_input_chunk[:, -1] = text_input[0, -1]
207
+ text_embedding = pipe.text_encoder(text_input_chunk)[0]
208
+
209
+ if no_boseos_middle:
210
+ if i == 0:
211
+ # discard the ending token
212
+ text_embedding = text_embedding[:, :-1]
213
+ elif i == max_embeddings_multiples - 1:
214
+ # discard the starting token
215
+ text_embedding = text_embedding[:, 1:]
216
+ else:
217
+ # discard both starting and ending tokens
218
+ text_embedding = text_embedding[:, 1:-1]
219
+
220
+ text_embeddings.append(text_embedding)
221
+ text_embeddings = torch.concat(text_embeddings, axis=1)
222
+ else:
223
+ text_embeddings = pipe.text_encoder(text_input)[0]
224
+ return text_embeddings
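+ # Worked example of the chunking above (illustrative): with chunk_length = 77 and a padded prompt of
+ # 2 + 3 * 75 = 227 token ids, max_embeddings_multiples is (227 - 2) // 75 = 3, and every slice
+ # text_input[:, i * 75 : (i + 1) * 75 + 2] is exactly 77 ids long; its first and last positions are then
+ # overwritten with the starting and ending ids of the full sequence before being encoded.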
225
+
226
+
227
+ def get_weighted_text_embeddings(
228
+ pipe: DiffusionPipeline,
229
+ prompt: Union[str, List[str]],
230
+ uncond_prompt: Optional[Union[str, List[str]]] = None,
231
+ max_embeddings_multiples: Optional[int] = 1,
232
+ no_boseos_middle: Optional[bool] = False,
233
+ skip_parsing: Optional[bool] = False,
234
+ skip_weighting: Optional[bool] = False,
235
+ **kwargs,
236
+ ):
237
+ r"""
238
+ Prompts can be assigned with local weights using brackets. For example,
239
+ prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
240
+ and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
241
+
242
+ Also, to regularize the embedding, the weighted embedding is scaled to preserve the original mean.
243
+
244
+ Args:
245
+ pipe (`DiffusionPipeline`):
246
+ Pipe to provide access to the tokenizer and the text encoder.
247
+ prompt (`str` or `List[str]`):
248
+ The prompt or prompts to guide the image generation.
249
+ uncond_prompt (`str` or `List[str]`):
250
+ The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
251
+ is provided, the embeddings of prompt and uncond_prompt are concatenated.
252
+ max_embeddings_multiples (`int`, *optional*, defaults to `1`):
253
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
254
+ no_boseos_middle (`bool`, *optional*, defaults to `False`):
255
+ If the length of the text tokens is a multiple of the capacity of the text encoder, whether to keep the starting and
256
+ ending tokens in each of the chunks in the middle.
257
+ skip_parsing (`bool`, *optional*, defaults to `False`):
258
+ Skip the parsing of brackets.
259
+ skip_weighting (`bool`, *optional*, defaults to `False`):
260
+ Skip the weighting. When the parsing is skipped, the weighting is skipped as well.
261
+ """
262
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
263
+ if isinstance(prompt, str):
264
+ prompt = [prompt]
265
+
266
+ if not skip_parsing:
267
+ prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
268
+ if uncond_prompt is not None:
269
+ if isinstance(uncond_prompt, str):
270
+ uncond_prompt = [uncond_prompt]
271
+ uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
272
+ else:
273
+ prompt_tokens = [
274
+ token[1:-1] for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True).input_ids
275
+ ]
276
+ prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
277
+ if uncond_prompt is not None:
278
+ if isinstance(uncond_prompt, str):
279
+ uncond_prompt = [uncond_prompt]
280
+ uncond_tokens = [
281
+ token[1:-1]
282
+ for token in pipe.tokenizer(uncond_prompt, max_length=max_length, truncation=True).input_ids
283
+ ]
284
+ uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
285
+
286
+ # round up the longest length of tokens to a multiple of (model_max_length - 2)
287
+ max_length = max([len(token) for token in prompt_tokens])
288
+ if uncond_prompt is not None:
289
+ max_length = max(max_length, max([len(token) for token in uncond_tokens]))
290
+
291
+ max_embeddings_multiples = min(
292
+ max_embeddings_multiples,
293
+ (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
294
+ )
295
+ max_embeddings_multiples = max(1, max_embeddings_multiples)
296
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
297
+
298
+ # pad the length of tokens and weights
299
+ bos = pipe.tokenizer.bos_token_id
300
+ eos = pipe.tokenizer.eos_token_id
301
+ prompt_tokens, prompt_weights = pad_tokens_and_weights(
302
+ prompt_tokens,
303
+ prompt_weights,
304
+ max_length,
305
+ bos,
306
+ eos,
307
+ no_boseos_middle=no_boseos_middle,
308
+ chunk_length=pipe.tokenizer.model_max_length,
309
+ )
310
+ prompt_tokens = torch.tensor(prompt_tokens, dtype=torch.long, device=pipe.device)
311
+ if uncond_prompt is not None:
312
+ uncond_tokens, uncond_weights = pad_tokens_and_weights(
313
+ uncond_tokens,
314
+ uncond_weights,
315
+ max_length,
316
+ bos,
317
+ eos,
318
+ no_boseos_middle=no_boseos_middle,
319
+ chunk_length=pipe.tokenizer.model_max_length,
320
+ )
321
+ uncond_tokens = torch.tensor(uncond_tokens, dtype=torch.long, device=pipe.device)
322
+
323
+ # get the embeddings
324
+ text_embeddings = get_unweighted_text_embeddings(
325
+ pipe,
326
+ prompt_tokens,
327
+ pipe.tokenizer.model_max_length,
328
+ no_boseos_middle=no_boseos_middle,
329
+ )
330
+ prompt_weights = torch.tensor(prompt_weights, dtype=text_embeddings.dtype, device=pipe.device)
331
+ if uncond_prompt is not None:
332
+ uncond_embeddings = get_unweighted_text_embeddings(
333
+ pipe,
334
+ uncond_tokens,
335
+ pipe.tokenizer.model_max_length,
336
+ no_boseos_middle=no_boseos_middle,
337
+ )
338
+ uncond_weights = torch.tensor(uncond_weights, dtype=uncond_embeddings.dtype, device=pipe.device)
339
+
340
+ # assign weights to the prompts and normalize in the sense of mean
341
+ # TODO: should we normalize by chunk or in a whole (current implementation)?
342
+ if (not skip_parsing) and (not skip_weighting):
343
+ previous_mean = text_embeddings.mean(axis=[-2, -1])
344
+ text_embeddings *= prompt_weights.unsqueeze(-1)
345
+ text_embeddings *= (previous_mean / text_embeddings.mean(axis=[-2, -1])).unsqueeze(-1).unsqueeze(-1)
346
+ if uncond_prompt is not None:
347
+ previous_mean = uncond_embeddings.mean(axis=[-2, -1])
348
+ uncond_embeddings *= uncond_weights.unsqueeze(-1)
349
+ uncond_embeddings *= (previous_mean / uncond_embeddings.mean(axis=[-2, -1])).unsqueeze(-1).unsqueeze(-1)
350
+
351
+ if uncond_prompt is not None:
352
+ return text_embeddings, uncond_embeddings
353
+ return text_embeddings, None
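+ # Worked example (illustrative): for the prompt "a (cat:1.5) on a mat", the embedding
+ # rows of the "cat" tokens are scaled by 1.5, then the whole embedding is rescaled by
+ # previous_mean / new_mean so its overall mean is unchanged, keeping the conditioning
+ # magnitude comparable to the unweighted embedding.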
354
+
355
+
356
+ def preprocess_image(image):
357
+ w, h = image.size
358
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
359
+ image = image.resize((w, h), resample=PIL.Image.LANCZOS)
360
+ image = np.array(image).astype(np.float32) / 255.0
361
+ image = image[None].transpose(0, 3, 1, 2)
362
+ image = torch.from_numpy(image)
363
+ return 2.0 * image - 1.0
364
+
365
+
366
+ def preprocess_mask(mask):
367
+ mask = mask.convert("L")
368
+ w, h = mask.size
369
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
370
+ mask = mask.resize((w // 8, h // 8), resample=PIL.Image.NEAREST)
371
+ mask = np.array(mask).astype(np.float32) / 255.0
372
+ mask = np.tile(mask, (4, 1, 1))
373
+ mask = mask[None].transpose(0, 1, 2, 3)  # add a batch dimension (this transpose is a no-op)
374
+ mask = 1 - mask # repaint white, keep black
375
+ mask = torch.from_numpy(mask)
376
+ return mask
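+ # Shape note (illustrative): a 512x512 PIL mask is resized to 64x64, tiled across the
+ # 4 latent channels and given a batch dimension, yielding a (1, 4, 64, 64) tensor that
+ # matches the latent shape; white areas (0 after the inversion above) are repainted.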
377
+
378
+
379
+ class StableDiffusionLongPromptWeightingPipeline(DiffusionPipeline):
380
+ r"""
381
+ Pipeline for text-to-image generation using Stable Diffusion without tokens length limit, and support parsing
382
+ weighting in prompt.
383
+
384
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
385
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
386
+
387
+ Args:
388
+ vae ([`AutoencoderKL`]):
389
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
390
+ text_encoder ([`CLIPTextModel`]):
391
+ Frozen text-encoder. Stable Diffusion uses the text portion of
392
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
393
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
394
+ tokenizer (`CLIPTokenizer`):
395
+ Tokenizer of class
396
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
397
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
398
+ scheduler ([`SchedulerMixin`]):
399
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
400
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
401
+ safety_checker ([`StableDiffusionSafetyChecker`]):
402
+ Classification module that estimates whether generated images could be considered offensive or harmful.
403
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
404
+ feature_extractor ([`CLIPFeatureExtractor`]):
405
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
406
+ """
407
+
408
+ def __init__(
409
+ self,
410
+ vae: AutoencoderKL,
411
+ text_encoder: CLIPTextModel,
412
+ tokenizer: CLIPTokenizer,
413
+ unet: UNet2DConditionModel,
414
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
415
+ safety_checker: StableDiffusionSafetyChecker,
416
+ feature_extractor: CLIPFeatureExtractor,
417
+ ):
418
+ super().__init__()
419
+
420
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
421
+ deprecation_message = (
422
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
423
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
424
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
425
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
426
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
427
+ " file"
428
+ )
429
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
430
+ new_config = dict(scheduler.config)
431
+ new_config["steps_offset"] = 1
432
+ scheduler._internal_dict = FrozenDict(new_config)
433
+
434
+ if safety_checker is None:
435
+ logger.warn(
436
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
437
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
438
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
439
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
440
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
441
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
442
+ )
443
+
444
+ self.register_modules(
445
+ vae=vae,
446
+ text_encoder=text_encoder,
447
+ tokenizer=tokenizer,
448
+ unet=unet,
449
+ scheduler=scheduler,
450
+ safety_checker=safety_checker,
451
+ feature_extractor=feature_extractor,
452
+ )
453
+
454
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
455
+ r"""
456
+ Enable sliced attention computation.
457
+
458
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
459
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
460
+
461
+ Args:
462
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
463
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
464
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
465
+ `attention_head_dim` must be a multiple of `slice_size`.
466
+ """
467
+ if slice_size == "auto":
468
+ # half the attention head size is usually a good trade-off between
469
+ # speed and memory
470
+ slice_size = self.unet.config.attention_head_dim // 2
471
+ self.unet.set_attention_slice(slice_size)
472
+
473
+ def disable_attention_slicing(self):
474
+ r"""
475
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
476
+ back to computing attention in one step.
477
+ """
478
+ # set slice_size = `None` to disable `attention slicing`
479
+ self.enable_attention_slicing(None)
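+ # Usage sketch (illustrative): pipe.enable_attention_slicing() lowers peak memory at a
+ # small speed cost, e.g. before generating at high resolution; call
+ # pipe.disable_attention_slicing() to return to single-step attention.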
480
+
481
+ @torch.no_grad()
482
+ def __call__(
483
+ self,
484
+ prompt: Union[str, List[str]],
485
+ negative_prompt: Optional[Union[str, List[str]]] = None,
486
+ init_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
487
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
488
+ height: int = 512,
489
+ width: int = 512,
490
+ num_inference_steps: int = 50,
491
+ guidance_scale: float = 7.5,
492
+ strength: float = 0.8,
493
+ num_images_per_prompt: Optional[int] = 1,
494
+ eta: float = 0.0,
495
+ generator: Optional[torch.Generator] = None,
496
+ latents: Optional[torch.FloatTensor] = None,
497
+ max_embeddings_multiples: Optional[int] = 3,
498
+ output_type: Optional[str] = "pil",
499
+ return_dict: bool = True,
500
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
501
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
502
+ callback_steps: Optional[int] = 1,
503
+ **kwargs,
504
+ ):
505
+ r"""
506
+ Function invoked when calling the pipeline for generation.
507
+
508
+ Args:
509
+ prompt (`str` or `List[str]`):
510
+ The prompt or prompts to guide the image generation.
511
+ negative_prompt (`str` or `List[str]`, *optional*):
512
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
513
+ if `guidance_scale` is less than `1`).
514
+ init_image (`torch.FloatTensor` or `PIL.Image.Image`):
515
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
516
+ process.
517
+ mask_image (`torch.FloatTensor` or `PIL.Image.Image`):
518
+ `Image`, or tensor representing an image batch, to mask `init_image`. White pixels in the mask will be
519
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
520
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
521
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
522
+ height (`int`, *optional*, defaults to 512):
523
+ The height in pixels of the generated image.
524
+ width (`int`, *optional*, defaults to 512):
525
+ The width in pixels of the generated image.
526
+ num_inference_steps (`int`, *optional*, defaults to 50):
527
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
528
+ expense of slower inference.
529
+ guidance_scale (`float`, *optional*, defaults to 7.5):
530
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
531
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
532
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
533
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
534
+ usually at the expense of lower image quality.
535
+ strength (`float`, *optional*, defaults to 0.8):
536
+ Conceptually, indicates how much to transform the reference `init_image`. Must be between 0 and 1.
537
+ `init_image` will be used as a starting point, adding more noise to it the larger the `strength`. The
538
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
539
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
540
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `init_image`.
541
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
542
+ The number of images to generate per prompt.
543
+ eta (`float`, *optional*, defaults to 0.0):
544
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
545
+ [`schedulers.DDIMScheduler`], will be ignored for others.
546
+ generator (`torch.Generator`, *optional*):
547
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
548
+ deterministic.
549
+ latents (`torch.FloatTensor`, *optional*):
550
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
551
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
552
+ tensor will be generated by sampling using the supplied random `generator`.
553
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
554
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
555
+ output_type (`str`, *optional*, defaults to `"pil"`):
556
+ The output format of the generated image. Choose between
557
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
558
+ return_dict (`bool`, *optional*, defaults to `True`):
559
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
560
+ plain tuple.
561
+ callback (`Callable`, *optional*):
562
+ A function that will be called every `callback_steps` steps during inference. The function will be
563
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
564
+ is_cancelled_callback (`Callable`, *optional*):
565
+ A function that will be called every `callback_steps` steps during inference. If the function returns
566
+ `True`, the inference will be cancelled.
567
+ callback_steps (`int`, *optional*, defaults to 1):
568
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
569
+ called at every step.
570
+
571
+ Returns:
572
+ `None` if cancelled by `is_cancelled_callback`,
573
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
574
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
575
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
576
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
577
+ (nsfw) content, according to the `safety_checker`.
578
+ """
579
+
580
+ if isinstance(prompt, str):
581
+ batch_size = 1
582
+ prompt = [prompt]
583
+ elif isinstance(prompt, list):
584
+ batch_size = len(prompt)
585
+ else:
586
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
587
+
588
+ if strength < 0 or strength > 1:
589
+ raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
590
+
591
+ if height % 8 != 0 or width % 8 != 0:
592
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
593
+
594
+ if (callback_steps is None) or (
595
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
596
+ ):
597
+ raise ValueError(
598
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
599
+ f" {type(callback_steps)}."
600
+ )
601
+
602
+ # get prompt text embeddings
603
+
604
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
605
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
606
+ # corresponds to doing no classifier free guidance.
607
+ do_classifier_free_guidance = guidance_scale > 1.0
608
+ # get unconditional embeddings for classifier free guidance
609
+ if negative_prompt is None:
610
+ negative_prompt = [""] * batch_size
611
+ elif isinstance(negative_prompt, str):
612
+ negative_prompt = [negative_prompt] * batch_size
613
+ if batch_size != len(negative_prompt):
614
+ raise ValueError(
615
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
616
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
617
+ " the batch size of `prompt`."
618
+ )
619
+
620
+ text_embeddings, uncond_embeddings = get_weighted_text_embeddings(
621
+ pipe=self,
622
+ prompt=prompt,
623
+ uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
624
+ max_embeddings_multiples=max_embeddings_multiples,
625
+ **kwargs,
626
+ )
627
+ bs_embed, seq_len, _ = text_embeddings.shape
628
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
629
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
630
+
631
+ if do_classifier_free_guidance:
632
+ bs_embed, seq_len, _ = uncond_embeddings.shape
633
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
634
+ uncond_embeddings = uncond_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
635
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
636
+
637
+ # set timesteps
638
+ self.scheduler.set_timesteps(num_inference_steps)
639
+
640
+ latents_dtype = text_embeddings.dtype
641
+ init_latents_orig = None
642
+ mask = None
643
+ noise = None
644
+
645
+ if init_image is None:
646
+ # get the initial random noise unless the user supplied it
647
+
648
+ # Unlike in other pipelines, latents need to be generated in the target device
649
+ # for 1-to-1 results reproducibility with the CompVis implementation.
650
+ # However this currently doesn't work in `mps`.
651
+ latents_shape = (
652
+ batch_size * num_images_per_prompt,
653
+ self.unet.in_channels,
654
+ height // 8,
655
+ width // 8,
656
+ )
657
+
658
+ if latents is None:
659
+ if self.device.type == "mps":
660
+ # randn does not exist on mps
661
+ latents = torch.randn(
662
+ latents_shape,
663
+ generator=generator,
664
+ device="cpu",
665
+ dtype=latents_dtype,
666
+ ).to(self.device)
667
+ else:
668
+ latents = torch.randn(
669
+ latents_shape,
670
+ generator=generator,
671
+ device=self.device,
672
+ dtype=latents_dtype,
673
+ )
674
+ else:
675
+ if latents.shape != latents_shape:
676
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
677
+ latents = latents.to(self.device)
678
+
679
+ timesteps = self.scheduler.timesteps.to(self.device)
680
+
681
+ # scale the initial noise by the standard deviation required by the scheduler
682
+ latents = latents * self.scheduler.init_noise_sigma
683
+ else:
684
+ if isinstance(init_image, PIL.Image.Image):
685
+ init_image = preprocess_image(init_image)
686
+ # encode the init image into latents and scale the latents
687
+ init_image = init_image.to(device=self.device, dtype=latents_dtype)
688
+ init_latent_dist = self.vae.encode(init_image).latent_dist
689
+ init_latents = init_latent_dist.sample(generator=generator)
690
+ init_latents = 0.18215 * init_latents
691
+ init_latents = torch.cat([init_latents] * batch_size * num_images_per_prompt, dim=0)
692
+ init_latents_orig = init_latents
693
+
694
+ # preprocess mask
695
+ if mask_image is not None:
696
+ if isinstance(mask_image, PIL.Image.Image):
697
+ mask_image = preprocess_mask(mask_image)
698
+ mask_image = mask_image.to(device=self.device, dtype=latents_dtype)
699
+ mask = torch.cat([mask_image] * batch_size * num_images_per_prompt)
700
+
701
+ # check sizes
702
+ if not mask.shape == init_latents.shape:
703
+ raise ValueError("The mask and init_image should be the same size!")
704
+
705
+ # get the original timestep using init_timestep
706
+ offset = self.scheduler.config.get("steps_offset", 0)
707
+ init_timestep = int(num_inference_steps * strength) + offset
708
+ init_timestep = min(init_timestep, num_inference_steps)
709
+
710
+ timesteps = self.scheduler.timesteps[-init_timestep]
711
+ timesteps = torch.tensor([timesteps] * batch_size * num_images_per_prompt, device=self.device)
712
+
713
+ # add noise to latents using the timesteps
714
+ if self.device.type == "mps":
715
+ # randn does not exist on mps
716
+ noise = torch.randn(
717
+ init_latents.shape,
718
+ generator=generator,
719
+ device="cpu",
720
+ dtype=latents_dtype,
721
+ ).to(self.device)
722
+ else:
723
+ noise = torch.randn(
724
+ init_latents.shape,
725
+ generator=generator,
726
+ device=self.device,
727
+ dtype=latents_dtype,
728
+ )
729
+ latents = self.scheduler.add_noise(init_latents, noise, timesteps)
730
+
731
+ t_start = max(num_inference_steps - init_timestep + offset, 0)
732
+ timesteps = self.scheduler.timesteps[t_start:].to(self.device)
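+ # Illustrative arithmetic: with num_inference_steps=50, strength=0.8 and offset=1,
+ # init_timestep = int(50 * 0.8) + 1 = 41 and t_start = 50 - 41 + 1 = 10, so the
+ # img2img/inpaint branch runs only the last 40 of the 50 scheduled denoising steps.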
733
+
734
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
735
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
736
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
737
+ # and should be between [0, 1]
738
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
739
+ extra_step_kwargs = {}
740
+ if accepts_eta:
741
+ extra_step_kwargs["eta"] = eta
742
+
743
+ for i, t in enumerate(self.progress_bar(timesteps)):
744
+ # expand the latents if we are doing classifier free guidance
745
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
746
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
747
+
748
+ # predict the noise residual
749
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
750
+
751
+ # perform guidance
752
+ if do_classifier_free_guidance:
753
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
754
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
755
+
756
+ # compute the previous noisy sample x_t -> x_t-1
757
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
758
+
759
+ if mask is not None:
760
+ # masking
761
+ init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, torch.tensor([t]))
762
+ latents = (init_latents_proper * mask) + (latents * (1 - mask))
763
+
764
+ # call the callback, if provided
765
+ if i % callback_steps == 0:
766
+ if callback is not None:
767
+ callback(i, t, latents)
768
+ if is_cancelled_callback is not None and is_cancelled_callback():
769
+ return None
770
+
771
+ latents = 1 / 0.18215 * latents
772
+ image = self.vae.decode(latents).sample
773
+
774
+ image = (image / 2 + 0.5).clamp(0, 1)
775
+
776
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
777
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
778
+
779
+ if self.safety_checker is not None:
780
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
781
+ self.device
782
+ )
783
+ image, has_nsfw_concept = self.safety_checker(
784
+ images=image,
785
+ clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype),
786
+ )
787
+ else:
788
+ has_nsfw_concept = None
789
+
790
+ if output_type == "pil":
791
+ image = self.numpy_to_pil(image)
792
+
793
+ if not return_dict:
794
+ return (image, has_nsfw_concept)
795
+
796
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
797
+
798
+ def text2img(
799
+ self,
800
+ prompt: Union[str, List[str]],
801
+ negative_prompt: Optional[Union[str, List[str]]] = None,
802
+ height: int = 512,
803
+ width: int = 512,
804
+ num_inference_steps: int = 50,
805
+ guidance_scale: float = 7.5,
806
+ num_images_per_prompt: Optional[int] = 1,
807
+ eta: float = 0.0,
808
+ generator: Optional[torch.Generator] = None,
809
+ latents: Optional[torch.FloatTensor] = None,
810
+ max_embeddings_multiples: Optional[int] = 3,
811
+ output_type: Optional[str] = "pil",
812
+ return_dict: bool = True,
813
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
814
+ callback_steps: Optional[int] = 1,
815
+ **kwargs,
816
+ ):
817
+ r"""
818
+ Function for text-to-image generation.
819
+ Args:
820
+ prompt (`str` or `List[str]`):
821
+ The prompt or prompts to guide the image generation.
822
+ negative_prompt (`str` or `List[str]`, *optional*):
823
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
824
+ if `guidance_scale` is less than `1`).
825
+ height (`int`, *optional*, defaults to 512):
826
+ The height in pixels of the generated image.
827
+ width (`int`, *optional*, defaults to 512):
828
+ The width in pixels of the generated image.
829
+ num_inference_steps (`int`, *optional*, defaults to 50):
830
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
831
+ expense of slower inference.
832
+ guidance_scale (`float`, *optional*, defaults to 7.5):
833
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
834
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
835
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
836
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
837
+ usually at the expense of lower image quality.
838
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
839
+ The number of images to generate per prompt.
840
+ eta (`float`, *optional*, defaults to 0.0):
841
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
842
+ [`schedulers.DDIMScheduler`], will be ignored for others.
843
+ generator (`torch.Generator`, *optional*):
844
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
845
+ deterministic.
846
+ latents (`torch.FloatTensor`, *optional*):
847
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
848
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
849
+ tensor will be generated by sampling using the supplied random `generator`.
850
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
851
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
852
+ output_type (`str`, *optional*, defaults to `"pil"`):
853
+ The output format of the generated image. Choose between
854
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
855
+ return_dict (`bool`, *optional*, defaults to `True`):
856
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
857
+ plain tuple.
858
+ callback (`Callable`, *optional*):
859
+ A function that will be called every `callback_steps` steps during inference. The function will be
860
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
861
+ callback_steps (`int`, *optional*, defaults to 1):
862
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
863
+ called at every step.
864
+ Returns:
865
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
866
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
867
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
868
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
869
+ (nsfw) content, according to the `safety_checker`.
870
+ """
871
+ return self.__call__(
872
+ prompt=prompt,
873
+ negative_prompt=negative_prompt,
874
+ height=height,
875
+ width=width,
876
+ num_inference_steps=num_inference_steps,
877
+ guidance_scale=guidance_scale,
878
+ num_images_per_prompt=num_images_per_prompt,
879
+ eta=eta,
880
+ generator=generator,
881
+ latents=latents,
882
+ max_embeddings_multiples=max_embeddings_multiples,
883
+ output_type=output_type,
884
+ return_dict=return_dict,
885
+ callback=callback,
886
+ callback_steps=callback_steps,
887
+ **kwargs,
888
+ )
889
+
890
+ def img2img(
891
+ self,
892
+ init_image: Union[torch.FloatTensor, PIL.Image.Image],
893
+ prompt: Union[str, List[str]],
894
+ negative_prompt: Optional[Union[str, List[str]]] = None,
895
+ strength: float = 0.8,
896
+ num_inference_steps: Optional[int] = 50,
897
+ guidance_scale: Optional[float] = 7.5,
898
+ num_images_per_prompt: Optional[int] = 1,
899
+ eta: Optional[float] = 0.0,
900
+ generator: Optional[torch.Generator] = None,
901
+ max_embeddings_multiples: Optional[int] = 3,
902
+ output_type: Optional[str] = "pil",
903
+ return_dict: bool = True,
904
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
905
+ callback_steps: Optional[int] = 1,
906
+ **kwargs,
907
+ ):
908
+ r"""
909
+ Function for image-to-image generation.
910
+ Args:
911
+ init_image (`torch.FloatTensor` or `PIL.Image.Image`):
912
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
913
+ process.
914
+ prompt (`str` or `List[str]`):
915
+ The prompt or prompts to guide the image generation.
916
+ negative_prompt (`str` or `List[str]`, *optional*):
917
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
918
+ if `guidance_scale` is less than `1`).
919
+ strength (`float`, *optional*, defaults to 0.8):
920
+ Conceptually, indicates how much to transform the reference `init_image`. Must be between 0 and 1.
921
+ `init_image` will be used as a starting point, adding more noise to it the larger the `strength`. The
922
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
923
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
924
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `init_image`.
925
+ num_inference_steps (`int`, *optional*, defaults to 50):
926
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
927
+ expense of slower inference. This parameter will be modulated by `strength`.
928
+ guidance_scale (`float`, *optional*, defaults to 7.5):
929
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
930
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
931
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
932
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
933
+ usually at the expense of lower image quality.
934
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
935
+ The number of images to generate per prompt.
936
+ eta (`float`, *optional*, defaults to 0.0):
937
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
938
+ [`schedulers.DDIMScheduler`], will be ignored for others.
939
+ generator (`torch.Generator`, *optional*):
940
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
941
+ deterministic.
942
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
943
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
944
+ output_type (`str`, *optional*, defaults to `"pil"`):
945
+ The output format of the generated image. Choose between
946
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
947
+ return_dict (`bool`, *optional*, defaults to `True`):
948
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
949
+ plain tuple.
950
+ callback (`Callable`, *optional*):
951
+ A function that will be called every `callback_steps` steps during inference. The function will be
952
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
953
+ callback_steps (`int`, *optional*, defaults to 1):
954
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
955
+ called at every step.
956
+ Returns:
957
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
958
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
959
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
960
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
961
+ (nsfw) content, according to the `safety_checker`.
962
+ """
963
+ return self.__call__(
964
+ prompt=prompt,
965
+ negative_prompt=negative_prompt,
966
+ init_image=init_image,
967
+ num_inference_steps=num_inference_steps,
968
+ guidance_scale=guidance_scale,
969
+ strength=strength,
970
+ num_images_per_prompt=num_images_per_prompt,
971
+ eta=eta,
972
+ generator=generator,
973
+ max_embeddings_multiples=max_embeddings_multiples,
974
+ output_type=output_type,
975
+ return_dict=return_dict,
976
+ callback=callback,
977
+ callback_steps=callback_steps,
978
+ **kwargs,
979
+ )
980
+
981
+ def inpaint(
982
+ self,
983
+ init_image: Union[torch.FloatTensor, PIL.Image.Image],
984
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image],
985
+ prompt: Union[str, List[str]],
986
+ negative_prompt: Optional[Union[str, List[str]]] = None,
987
+ strength: float = 0.8,
988
+ num_inference_steps: Optional[int] = 50,
989
+ guidance_scale: Optional[float] = 7.5,
990
+ num_images_per_prompt: Optional[int] = 1,
991
+ eta: Optional[float] = 0.0,
992
+ generator: Optional[torch.Generator] = None,
993
+ max_embeddings_multiples: Optional[int] = 3,
994
+ output_type: Optional[str] = "pil",
995
+ return_dict: bool = True,
996
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
997
+ callback_steps: Optional[int] = 1,
998
+ **kwargs,
999
+ ):
1000
+ r"""
1001
+ Function for inpainting.
1002
+ Args:
1003
+ init_image (`torch.FloatTensor` or `PIL.Image.Image`):
1004
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
1005
+ process. This is the image whose masked region will be inpainted.
1006
+ mask_image (`torch.FloatTensor` or `PIL.Image.Image`):
1007
+ `Image`, or tensor representing an image batch, to mask `init_image`. White pixels in the mask will be
1008
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
1009
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
1010
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
1011
+ prompt (`str` or `List[str]`):
1012
+ The prompt or prompts to guide the image generation.
1013
+ negative_prompt (`str` or `List[str]`, *optional*):
1014
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1015
+ if `guidance_scale` is less than `1`).
1016
+ strength (`float`, *optional*, defaults to 0.8):
1017
+ Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
1018
+ is 1, the denoising process will be run on the masked area for the full number of iterations specified
1019
+ in `num_inference_steps`. `init_image` will be used as a reference for the masked area, adding more
1020
+ noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
1021
+ num_inference_steps (`int`, *optional*, defaults to 50):
1022
+ The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
1023
+ the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
1024
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1025
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1026
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1027
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1028
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
1029
+ usually at the expense of lower image quality.
1030
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1031
+ The number of images to generate per prompt.
1032
+ eta (`float`, *optional*, defaults to 0.0):
1033
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1034
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1035
+ generator (`torch.Generator`, *optional*):
1036
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1037
+ deterministic.
1038
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1039
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1040
+ output_type (`str`, *optional*, defaults to `"pil"`):
1041
+ The output format of the generated image. Choose between
1042
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1043
+ return_dict (`bool`, *optional*, defaults to `True`):
1044
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1045
+ plain tuple.
1046
+ callback (`Callable`, *optional*):
1047
+ A function that will be called every `callback_steps` steps during inference. The function will be
1048
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1049
+ callback_steps (`int`, *optional*, defaults to 1):
1050
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1051
+ called at every step.
1052
+ Returns:
1053
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1054
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1055
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1056
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1057
+ (nsfw) content, according to the `safety_checker`.
1058
+ """
1059
+ return self.__call__(
1060
+ prompt=prompt,
1061
+ negative_prompt=negative_prompt,
1062
+ init_image=init_image,
1063
+ mask_image=mask_image,
1064
+ num_inference_steps=num_inference_steps,
1065
+ guidance_scale=guidance_scale,
1066
+ strength=strength,
1067
+ num_images_per_prompt=num_images_per_prompt,
1068
+ eta=eta,
1069
+ generator=generator,
1070
+ max_embeddings_multiples=max_embeddings_multiples,
1071
+ output_type=output_type,
1072
+ return_dict=return_dict,
1073
+ callback=callback,
1074
+ callback_steps=callback_steps,
1075
+ **kwargs,
1076
+ )
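A minimal usage sketch for the pipeline above, loaded as the `lpw_stable_diffusion` community pipeline. The model id, prompt, dtype, and device below are placeholders, not part of the pipeline itself:

```python
import torch
from diffusers import DiffusionPipeline

# Load the long-prompt-weighting community pipeline; the model id is a placeholder.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="lpw_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")

# Brackets adjust token weights; prompts longer than 77 tokens are split into chunks.
prompt = "a (very beautiful:1.2) masterpiece painting of a [cluttered] harbor at sunset, " * 4
image = pipe.text2img(prompt, max_embeddings_multiples=3, num_inference_steps=50).images[0]
image.save("harbor.png")
```

The same pipeline object also exposes `img2img` and `inpaint`, which reuse the weighted-prompt handling shown above.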
v0.7.0/lpw_stable_diffusion_onnx.py ADDED
@@ -0,0 +1,992 @@
1
+ import inspect
2
+ import re
3
+ from typing import Callable, List, Optional, Union
4
+
5
+ import numpy as np
6
+ import torch
7
+
8
+ import PIL
9
+ from diffusers.onnx_utils import OnnxRuntimeModel
10
+ from diffusers.pipeline_utils import DiffusionPipeline
11
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
12
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
13
+ from diffusers.utils import logging
14
+ from transformers import CLIPFeatureExtractor, CLIPTokenizer
15
+
16
+
17
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
18
+
19
+ re_attention = re.compile(
20
+ r"""
21
+ \\\(|
22
+ \\\)|
23
+ \\\[|
24
+ \\]|
25
+ \\\\|
26
+ \\|
27
+ \(|
28
+ \[|
29
+ :([+-]?[.\d]+)\)|
30
+ \)|
31
+ ]|
32
+ [^\\()\[\]:]+|
33
+ :
34
+ """,
35
+ re.X,
36
+ )
37
+
38
+
39
+ def parse_prompt_attention(text):
40
+ """
41
+ Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
42
+ Accepted tokens are:
43
+ (abc) - increases attention to abc by a multiplier of 1.1
44
+ (abc:3.12) - increases attention to abc by a multiplier of 3.12
45
+ [abc] - decreases attention to abc by a multiplier of 1.1
46
+ \( - literal character '('
47
+ \[ - literal character '['
48
+ \) - literal character ')'
49
+ \] - literal character ']'
50
+ \\ - literal character '\'
51
+ anything else - just text
52
+ >>> parse_prompt_attention('normal text')
53
+ [['normal text', 1.0]]
54
+ >>> parse_prompt_attention('an (important) word')
55
+ [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
56
+ >>> parse_prompt_attention('(unbalanced')
57
+ [['unbalanced', 1.1]]
58
+ >>> parse_prompt_attention('\(literal\]')
59
+ [['(literal]', 1.0]]
60
+ >>> parse_prompt_attention('(unnecessary)(parens)')
61
+ [['unnecessaryparens', 1.1]]
62
+ >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
63
+ [['a ', 1.0],
64
+ ['house', 1.5730000000000004],
65
+ [' ', 1.1],
66
+ ['on', 1.0],
67
+ [' a ', 1.1],
68
+ ['hill', 0.55],
69
+ [', sun, ', 1.1],
70
+ ['sky', 1.4641000000000006],
71
+ ['.', 1.1]]
72
+ """
73
+
74
+ res = []
75
+ round_brackets = []
76
+ square_brackets = []
77
+
78
+ round_bracket_multiplier = 1.1
79
+ square_bracket_multiplier = 1 / 1.1
80
+
81
+ def multiply_range(start_position, multiplier):
82
+ for p in range(start_position, len(res)):
83
+ res[p][1] *= multiplier
84
+
85
+ for m in re_attention.finditer(text):
86
+ text = m.group(0)
87
+ weight = m.group(1)
88
+
89
+ if text.startswith("\\"):
90
+ res.append([text[1:], 1.0])
91
+ elif text == "(":
92
+ round_brackets.append(len(res))
93
+ elif text == "[":
94
+ square_brackets.append(len(res))
95
+ elif weight is not None and len(round_brackets) > 0:
96
+ multiply_range(round_brackets.pop(), float(weight))
97
+ elif text == ")" and len(round_brackets) > 0:
98
+ multiply_range(round_brackets.pop(), round_bracket_multiplier)
99
+ elif text == "]" and len(square_brackets) > 0:
100
+ multiply_range(square_brackets.pop(), square_bracket_multiplier)
101
+ else:
102
+ res.append([text, 1.0])
103
+
104
+ for pos in round_brackets:
105
+ multiply_range(pos, round_bracket_multiplier)
106
+
107
+ for pos in square_brackets:
108
+ multiply_range(pos, square_bracket_multiplier)
109
+
110
+ if len(res) == 0:
111
+ res = [["", 1.0]]
112
+
113
+ # merge runs of identical weights
114
+ i = 0
115
+ while i + 1 < len(res):
116
+ if res[i][1] == res[i + 1][1]:
117
+ res[i][0] += res[i + 1][0]
118
+ res.pop(i + 1)
119
+ else:
120
+ i += 1
121
+
122
+ return res
123
+
124
+
125
+ def get_prompts_with_weights(pipe, prompt: List[str], max_length: int):
126
+ r"""
127
+ Tokenize a list of prompts and return its tokens with weights of each token.
128
+
129
+ No padding, starting or ending token is included.
130
+ """
131
+ tokens = []
132
+ weights = []
133
+ truncated = False
134
+ for text in prompt:
135
+ texts_and_weights = parse_prompt_attention(text)
136
+ text_token = []
137
+ text_weight = []
138
+ for word, weight in texts_and_weights:
139
+ # tokenize and discard the starting and the ending token
140
+ token = pipe.tokenizer(word, return_tensors="np").input_ids[0, 1:-1]
141
+ text_token += list(token)
142
+ # copy the weight by length of token
143
+ text_weight += [weight] * len(token)
144
+ # stop if the text is too long (longer than truncation limit)
145
+ if len(text_token) > max_length:
146
+ truncated = True
147
+ break
148
+ # truncate
149
+ if len(text_token) > max_length:
150
+ truncated = True
151
+ text_token = text_token[:max_length]
152
+ text_weight = text_weight[:max_length]
153
+ tokens.append(text_token)
154
+ weights.append(text_weight)
155
+ if truncated:
156
+ logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
157
+ return tokens, weights
158
+
159
+
160
+ def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, no_boseos_middle=True, chunk_length=77):
161
+ r"""
162
+ Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
163
+ """
164
+ max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
165
+ weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
166
+ for i in range(len(tokens)):
167
+ tokens[i] = [bos] + tokens[i] + [eos] * (max_length - 1 - len(tokens[i]))
168
+ if no_boseos_middle:
169
+ weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
170
+ else:
171
+ w = []
172
+ if len(weights[i]) == 0:
173
+ w = [1.0] * weights_length
174
+ else:
175
+ for j in range(max_embeddings_multiples):
176
+ w.append(1.0) # weight for starting token in this chunk
177
+ w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
178
+ w.append(1.0) # weight for ending token in this chunk
179
+ w += [1.0] * (weights_length - len(w))
180
+ weights[i] = w[:]
181
+
182
+ return tokens, weights
183
+
184
+
185
+ def get_unweighted_text_embeddings(
186
+ pipe,
187
+ text_input: np.array,
188
+ chunk_length: int,
189
+ no_boseos_middle: Optional[bool] = True,
190
+ ):
191
+ """
192
+ When the length of tokens is a multiple of the capacity of the text encoder,
193
+ it should be split into chunks and sent to the text encoder individually.
194
+ """
195
+ max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
196
+ if max_embeddings_multiples > 1:
197
+ text_embeddings = []
198
+ for i in range(max_embeddings_multiples):
199
+ # extract the i-th chunk
200
+ text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].copy()
201
+
202
+ # cover the head and the tail by the starting and the ending tokens
203
+ text_input_chunk[:, 0] = text_input[0, 0]
204
+ text_input_chunk[:, -1] = text_input[0, -1]
205
+
206
+ text_embedding = pipe.text_encoder(input_ids=text_input_chunk)[0]
207
+
208
+ if no_boseos_middle:
209
+ if i == 0:
210
+ # discard the ending token
211
+ text_embedding = text_embedding[:, :-1]
212
+ elif i == max_embeddings_multiples - 1:
213
+ # discard the starting token
214
+ text_embedding = text_embedding[:, 1:]
215
+ else:
216
+ # discard both starting and ending tokens
217
+ text_embedding = text_embedding[:, 1:-1]
218
+
219
+ text_embeddings.append(text_embedding)
220
+ text_embeddings = np.concatenate(text_embeddings, axis=1)
221
+ else:
222
+ text_embeddings = pipe.text_encoder(input_ids=text_input)[0]
223
+ return text_embeddings
224
+
225
+
226
+ def get_weighted_text_embeddings(
227
+ pipe,
228
+ prompt: Union[str, List[str]],
229
+ uncond_prompt: Optional[Union[str, List[str]]] = None,
230
+ max_embeddings_multiples: Optional[int] = 4,
231
+ no_boseos_middle: Optional[bool] = False,
232
+ skip_parsing: Optional[bool] = False,
233
+ skip_weighting: Optional[bool] = False,
234
+ **kwargs,
235
+ ):
236
+ r"""
237
+ Prompts can be assigned with local weights using brackets. For example,
238
+ prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
239
+ and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
240
+
241
+ Also, to regularize of the embedding, the weighted embedding would be scaled to preserve the original mean.
242
+
243
+ Args:
244
+ pipe (`DiffusionPipeline`):
245
+ Pipe to provide access to the tokenizer and the text encoder.
246
+ prompt (`str` or `List[str]`):
247
+ The prompt or prompts to guide the image generation.
248
+ uncond_prompt (`str` or `List[str]`):
249
+ The unconditional prompt or prompts for guide the image generation. If unconditional prompt
250
+ is provided, the embeddings of prompt and uncond_prompt are concatenated.
251
+ max_embeddings_multiples (`int`, *optional*, defaults to `1`):
252
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
253
+ no_boseos_middle (`bool`, *optional*, defaults to `False`):
254
+ If the length of text token is multiples of the capacity of text encoder, whether reserve the starting and
255
+ ending token in each of the chunk in the middle.
256
+ skip_parsing (`bool`, *optional*, defaults to `False`):
257
+ Skip the parsing of brackets.
258
+ skip_weighting (`bool`, *optional*, defaults to `False`):
259
+ Skip the weighting. When the parsing is skipped, it is forced True.
260
+ """
261
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
262
+ if isinstance(prompt, str):
263
+ prompt = [prompt]
264
+
265
+ if not skip_parsing:
266
+ prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
267
+ if uncond_prompt is not None:
268
+ if isinstance(uncond_prompt, str):
269
+ uncond_prompt = [uncond_prompt]
270
+ uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
271
+ else:
272
+ prompt_tokens = [
273
+ token[1:-1]
274
+ for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True, return_tensors="np").input_ids
275
+ ]
276
+ prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
277
+ if uncond_prompt is not None:
278
+ if isinstance(uncond_prompt, str):
279
+ uncond_prompt = [uncond_prompt]
280
+ uncond_tokens = [
281
+ token[1:-1]
282
+ for token in pipe.tokenizer(
283
+ uncond_prompt,
284
+ max_length=max_length,
285
+ truncation=True,
286
+ return_tensors="np",
287
+ ).input_ids
288
+ ]
289
+ uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
290
+
291
+ # round up the longest length of tokens to a multiple of (model_max_length - 2)
292
+ max_length = max([len(token) for token in prompt_tokens])
293
+ if uncond_prompt is not None:
294
+ max_length = max(max_length, max([len(token) for token in uncond_tokens]))
295
+
296
+ max_embeddings_multiples = min(
297
+ max_embeddings_multiples,
298
+ (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
299
+ )
300
+ max_embeddings_multiples = max(1, max_embeddings_multiples)
301
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
302
+
303
+ # pad the length of tokens and weights
304
+ bos = pipe.tokenizer.bos_token_id
305
+ eos = pipe.tokenizer.eos_token_id
306
+ prompt_tokens, prompt_weights = pad_tokens_and_weights(
307
+ prompt_tokens,
308
+ prompt_weights,
309
+ max_length,
310
+ bos,
311
+ eos,
312
+ no_boseos_middle=no_boseos_middle,
313
+ chunk_length=pipe.tokenizer.model_max_length,
314
+ )
315
+ prompt_tokens = np.array(prompt_tokens, dtype=np.int32)
316
+ if uncond_prompt is not None:
317
+ uncond_tokens, uncond_weights = pad_tokens_and_weights(
318
+ uncond_tokens,
319
+ uncond_weights,
320
+ max_length,
321
+ bos,
322
+ eos,
323
+ no_boseos_middle=no_boseos_middle,
324
+ chunk_length=pipe.tokenizer.model_max_length,
325
+ )
326
+ uncond_tokens = np.array(uncond_tokens, dtype=np.int32)
327
+
328
+ # get the embeddings
329
+ text_embeddings = get_unweighted_text_embeddings(
330
+ pipe,
331
+ prompt_tokens,
332
+ pipe.tokenizer.model_max_length,
333
+ no_boseos_middle=no_boseos_middle,
334
+ )
335
+ prompt_weights = np.array(prompt_weights, dtype=text_embeddings.dtype)
336
+ if uncond_prompt is not None:
337
+ uncond_embeddings = get_unweighted_text_embeddings(
338
+ pipe,
339
+ uncond_tokens,
340
+ pipe.tokenizer.model_max_length,
341
+ no_boseos_middle=no_boseos_middle,
342
+ )
343
+ uncond_weights = np.array(uncond_weights, dtype=uncond_embeddings.dtype)
344
+
345
+ # assign weights to the prompts and normalize in the sense of mean
346
+ # TODO: should we normalize by chunk or in a whole (current implementation)?
347
+ if (not skip_parsing) and (not skip_weighting):
348
+ previous_mean = text_embeddings.mean(axis=(-2, -1))
349
+ text_embeddings *= prompt_weights[:, :, None]
350
+ text_embeddings *= (previous_mean / text_embeddings.mean(axis=(-2, -1)))[:, None, None]
351
+ if uncond_prompt is not None:
352
+ previous_mean = uncond_embeddings.mean(axis=(-2, -1))
353
+ uncond_embeddings *= uncond_weights[:, :, None]
354
+ uncond_embeddings *= (previous_mean / uncond_embeddings.mean(axis=(-2, -1)))[:, None, None]
355
+
356
+ # For classifier free guidance, we need to do two forward passes.
357
+ # Here we concatenate the unconditional and text embeddings into a single batch
358
+ # to avoid doing two forward passes
359
+ if uncond_prompt is not None:
360
+ return text_embeddings, uncond_embeddings
361
+
362
+ return text_embeddings
363
+
364
+
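+ # Illustrative sketch (comments only) of how the helper above could be called once a
+ # pipeline instance `pipe` with a CLIP tokenizer and text encoder exists; the prompt
+ # strings and weights are arbitrary examples, not values used elsewhere in this file.
+ #
+ #   text_emb, uncond_emb = get_weighted_text_embeddings(
+ #       pipe,
+ #       prompt="a photo of an (astronaut:1.3) riding a horse",
+ #       uncond_prompt="(low quality:1.4), blurry",
+ #       max_embeddings_multiples=3,
+ #   )
+ #   # `(word:1.3)` scales the corresponding token embeddings by 1.3 before the
+ #   # mean-preserving renormalization performed above.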
365
+ def preprocess_image(image):
366
+ w, h = image.size
367
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
368
+ image = image.resize((w, h), resample=PIL.Image.LANCZOS)
369
+ image = np.array(image).astype(np.float32) / 255.0
370
+ image = image[None].transpose(0, 3, 1, 2)
371
+ return 2.0 * image - 1.0
372
+
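+ # Example (comments only): `preprocess_image` maps a PIL image to a float32 NCHW array in
+ # [-1, 1]; e.g. a 514x770 input is first resized to 512x768 (the nearest multiples of 32).
+ #
+ #   arr = preprocess_image(PIL.Image.open("input.png").convert("RGB"))  # hypothetical path
+ #   # arr.shape == (1, 3, H, W); values lie in [-1.0, 1.0]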
373
+
374
+ def preprocess_mask(mask):
375
+ mask = mask.convert("L")
376
+ w, h = mask.size
377
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
378
+ mask = mask.resize((w // 8, h // 8), resample=PIL.Image.NEAREST)
379
+ mask = np.array(mask).astype(np.float32) / 255.0
380
+ mask = np.tile(mask, (4, 1, 1))
381
+ mask = mask[None].transpose(0, 1, 2, 3)  # add a batch dimension; the transpose itself is a no-op
382
+ mask = 1 - mask # repaint white, keep black
383
+ return mask
384
+
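+ # Example (comments only): `preprocess_mask` converts a PIL mask to latent resolution; a
+ # 512x512 black/white mask becomes an array of shape (1, 4, 64, 64) in which masked
+ # (white) pixels end up as 0 and preserved (black) pixels as 1.
+ #
+ #   m = preprocess_mask(PIL.Image.open("mask.png"))  # hypothetical path
+ #   # m.shape == (1, 4, 64, 64)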
385
+
386
+ class OnnxStableDiffusionLongPromptWeightingPipeline(DiffusionPipeline):
387
+ r"""
388
+ Pipeline for text-to-image generation using Stable Diffusion without a token length limit, with support for parsing
389
+ weighting in the prompt.
390
+
391
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
392
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
393
+ """
394
+
395
+ def __init__(
396
+ self,
397
+ vae_encoder: OnnxRuntimeModel,
398
+ vae_decoder: OnnxRuntimeModel,
399
+ text_encoder: OnnxRuntimeModel,
400
+ tokenizer: CLIPTokenizer,
401
+ unet: OnnxRuntimeModel,
402
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
403
+ safety_checker: OnnxRuntimeModel,
404
+ feature_extractor: CLIPFeatureExtractor,
405
+ ):
406
+ super().__init__()
407
+ self.register_modules(
408
+ vae_encoder=vae_encoder,
409
+ vae_decoder=vae_decoder,
410
+ text_encoder=text_encoder,
411
+ tokenizer=tokenizer,
412
+ unet=unet,
413
+ scheduler=scheduler,
414
+ safety_checker=safety_checker,
415
+ feature_extractor=feature_extractor,
416
+ )
417
+
418
+ @torch.no_grad()
419
+ def __call__(
420
+ self,
421
+ prompt: Union[str, List[str]],
422
+ negative_prompt: Optional[Union[str, List[str]]] = None,
423
+ init_image: Union[np.ndarray, PIL.Image.Image] = None,
424
+ mask_image: Union[np.ndarray, PIL.Image.Image] = None,
425
+ height: int = 512,
426
+ width: int = 512,
427
+ num_inference_steps: int = 50,
428
+ guidance_scale: float = 7.5,
429
+ strength: float = 0.8,
430
+ num_images_per_prompt: Optional[int] = 1,
431
+ eta: float = 0.0,
432
+ generator: Optional[np.random.RandomState] = None,
433
+ latents: Optional[np.ndarray] = None,
434
+ max_embeddings_multiples: Optional[int] = 3,
435
+ output_type: Optional[str] = "pil",
436
+ return_dict: bool = True,
437
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
438
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
439
+ callback_steps: Optional[int] = 1,
440
+ **kwargs,
441
+ ):
442
+ r"""
443
+ Function invoked when calling the pipeline for generation.
444
+
445
+ Args:
446
+ prompt (`str` or `List[str]`):
447
+ The prompt or prompts to guide the image generation.
448
+ negative_prompt (`str` or `List[str]`, *optional*):
449
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
450
+ if `guidance_scale` is less than `1`).
451
+ init_image (`np.ndarray` or `PIL.Image.Image`):
452
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
453
+ process.
454
+ mask_image (`np.ndarray` or `PIL.Image.Image`):
455
+ `Image`, or tensor representing an image batch, to mask `init_image`. White pixels in the mask will be
456
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
457
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
458
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
459
+ height (`int`, *optional*, defaults to 512):
460
+ The height in pixels of the generated image.
461
+ width (`int`, *optional*, defaults to 512):
462
+ The width in pixels of the generated image.
463
+ num_inference_steps (`int`, *optional*, defaults to 50):
464
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
465
+ expense of slower inference.
466
+ guidance_scale (`float`, *optional*, defaults to 7.5):
467
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
468
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
469
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
470
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
471
+ usually at the expense of lower image quality.
472
+ strength (`float`, *optional*, defaults to 0.8):
473
+ Conceptually, indicates how much to transform the reference `init_image`. Must be between 0 and 1.
474
+ `init_image` will be used as a starting point, adding more noise to it the larger the `strength`. The
475
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
476
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
477
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `init_image`.
478
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
479
+ The number of images to generate per prompt.
480
+ eta (`float`, *optional*, defaults to 0.0):
481
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
482
+ [`schedulers.DDIMScheduler`], will be ignored for others.
483
+ generator (`np.random.RandomState`, *optional*):
484
+ A np.random.RandomState to make generation deterministic.
485
+ latents (`np.ndarray`, *optional*):
486
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
487
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
488
+ tensor will be generated by sampling using the supplied random `generator`.
489
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
490
+ The maximum length of the prompt embeddings, expressed as a multiple of the text encoder's maximum output length.
491
+ output_type (`str`, *optional*, defaults to `"pil"`):
492
+ The output format of the generated image. Choose between
493
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
494
+ return_dict (`bool`, *optional*, defaults to `True`):
495
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
496
+ plain tuple.
497
+ callback (`Callable`, *optional*):
498
+ A function that will be called every `callback_steps` steps during inference. The function will be
499
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
500
+ is_cancelled_callback (`Callable`, *optional*):
501
+ A function that will be called every `callback_steps` steps during inference. If the function returns
502
+ `True`, the inference will be cancelled.
503
+ callback_steps (`int`, *optional*, defaults to 1):
504
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
505
+ called at every step.
506
+
507
+ Returns:
508
+ `None` if cancelled by `is_cancelled_callback`,
509
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
510
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
511
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
512
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
513
+ (nsfw) content, according to the `safety_checker`.
514
+ """
515
+
516
+ if isinstance(prompt, str):
517
+ batch_size = 1
518
+ prompt = [prompt]
519
+ elif isinstance(prompt, list):
520
+ batch_size = len(prompt)
521
+ else:
522
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
523
+
524
+ if strength < 0 or strength > 1:
525
+ raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
526
+
527
+ if height % 8 != 0 or width % 8 != 0:
528
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
529
+
530
+ if (callback_steps is None) or (
531
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
532
+ ):
533
+ raise ValueError(
534
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
535
+ f" {type(callback_steps)}."
536
+ )
537
+
538
+ # get prompt text embeddings
539
+
540
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
541
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
542
+ # corresponds to doing no classifier free guidance.
543
+ do_classifier_free_guidance = guidance_scale > 1.0
544
+ # get unconditional embeddings for classifier free guidance
545
+ if negative_prompt is None:
546
+ negative_prompt = [""] * batch_size
547
+ elif isinstance(negative_prompt, str):
548
+ negative_prompt = [negative_prompt] * batch_size
549
+ if batch_size != len(negative_prompt):
550
+ raise ValueError(
551
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
552
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
553
+ " the batch size of `prompt`."
554
+ )
555
+
556
+ if generator is None:
557
+ generator = np.random
558
+
559
+ text_embeddings, uncond_embeddings = get_weighted_text_embeddings(
560
+ pipe=self,
561
+ prompt=prompt,
562
+ uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
563
+ max_embeddings_multiples=max_embeddings_multiples,
564
+ **kwargs,
565
+ )
566
+
567
+ text_embeddings = text_embeddings.repeat(num_images_per_prompt, 0)
568
+ if do_classifier_free_guidance:
569
+ uncond_embeddings = uncond_embeddings.repeat(num_images_per_prompt, 0)
570
+ text_embeddings = np.concatenate([uncond_embeddings, text_embeddings])
571
+
572
+ # set timesteps
573
+ self.scheduler.set_timesteps(num_inference_steps)
574
+
575
+ latents_dtype = text_embeddings.dtype
576
+ init_latents_orig = None
577
+ mask = None
578
+ noise = None
579
+
580
+ if init_image is None:
581
+ latents_shape = (
582
+ batch_size * num_images_per_prompt,
583
+ 4,
584
+ height // 8,
585
+ width // 8,
586
+ )
587
+
588
+ if latents is None:
589
+ latents = generator.randn(*latents_shape).astype(latents_dtype)
590
+ elif latents.shape != latents_shape:
591
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
592
+
593
+ timesteps = self.scheduler.timesteps.to(self.device)
594
+
595
+ # scale the initial noise by the standard deviation required by the scheduler
596
+ latents = latents * self.scheduler.init_noise_sigma
597
+ else:
598
+ if isinstance(init_image, PIL.Image.Image):
599
+ init_image = preprocess_image(init_image)
600
+ # encode the init image into latents and scale the latents
601
+ init_image = init_image.astype(latents_dtype)
602
+ init_latents = self.vae_encoder(sample=init_image)[0]
603
+ init_latents = 0.18215 * init_latents
604
+ init_latents = np.concatenate([init_latents] * batch_size * num_images_per_prompt)
605
+ init_latents_orig = init_latents
606
+
607
+ # preprocess mask
608
+ if mask_image is not None:
609
+ if isinstance(mask_image, PIL.Image.Image):
610
+ mask_image = preprocess_mask(mask_image)
611
+ mask_image = mask_image.astype(latents_dtype)
612
+ mask = np.concatenate([mask_image] * batch_size * num_images_per_prompt)
613
+
614
+ # check sizes
615
+ if not mask.shape == init_latents.shape:
616
+ print(mask.shape, init_latents.shape)
617
+ raise ValueError("The mask and init_image should be the same size!")
618
+
619
+ # get the original timestep using init_timestep
620
+ offset = self.scheduler.config.get("steps_offset", 0)
621
+ init_timestep = int(num_inference_steps * strength) + offset
622
+ init_timestep = min(init_timestep, num_inference_steps)
623
+
624
+ timesteps = self.scheduler.timesteps[-init_timestep]
625
+ timesteps = torch.tensor([timesteps] * batch_size * num_images_per_prompt)
626
+
627
+ # add noise to latents using the timesteps
628
+ noise = generator.randn(*init_latents.shape).astype(latents_dtype)
629
+ latents = self.scheduler.add_noise(
630
+ torch.from_numpy(init_latents), torch.from_numpy(noise), timesteps
631
+ ).numpy()
632
+
633
+ t_start = max(num_inference_steps - init_timestep + offset, 0)
634
+ timesteps = self.scheduler.timesteps[t_start:]
635
+
636
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
637
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
638
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
639
+ # and should be between [0, 1]
640
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
641
+ extra_step_kwargs = {}
642
+ if accepts_eta:
643
+ extra_step_kwargs["eta"] = eta
644
+
645
+ for i, t in enumerate(self.progress_bar(timesteps)):
646
+ # expand the latents if we are doing classifier free guidance
647
+ latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
648
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
649
+
650
+ # predict the noise residual
651
+ noise_pred = self.unet(
652
+ sample=latent_model_input,
653
+ timestep=np.array([t]),
654
+ encoder_hidden_states=text_embeddings,
655
+ )
656
+ noise_pred = noise_pred[0]
657
+
658
+ # perform guidance
659
+ if do_classifier_free_guidance:
660
+ noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
661
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
662
+
663
+ # compute the previous noisy sample x_t -> x_t-1
664
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample.numpy()
665
+
666
+ if mask is not None:
667
+ # masking
668
+ init_latents_proper = self.scheduler.add_noise(
669
+ torch.from_numpy(init_latents_orig),
670
+ torch.from_numpy(noise),
671
+ torch.tensor([t]),
672
+ ).numpy()
673
+ latents = (init_latents_proper * mask) + (latents * (1 - mask))
674
+
675
+ # call the callback, if provided
676
+ if i % callback_steps == 0:
677
+ if callback is not None:
678
+ callback(i, t, latents)
679
+ if is_cancelled_callback is not None and is_cancelled_callback():
680
+ return None
681
+
682
+ latents = 1 / 0.18215 * latents
683
+ # image = self.vae_decoder(latent_sample=latents)[0]
684
+ # there seems to be a problem with the half-precision VAE decoder when batch size > 1, so decode one latent at a time
685
+ image = []
686
+ for i in range(latents.shape[0]):
687
+ image.append(self.vae_decoder(latent_sample=latents[i : i + 1])[0])
688
+ image = np.concatenate(image)
689
+
690
+ image = np.clip(image / 2 + 0.5, 0, 1)
691
+ image = image.transpose((0, 2, 3, 1))
692
+
693
+ if self.safety_checker is not None:
694
+ safety_checker_input = self.feature_extractor(
695
+ self.numpy_to_pil(image), return_tensors="np"
696
+ ).pixel_values.astype(image.dtype)
697
+ # calling the safety_checker directly with batch size > 1 raises an error, so run it image by image
698
+ images, has_nsfw_concept = [], []
699
+ for i in range(image.shape[0]):
700
+ image_i, has_nsfw_concept_i = self.safety_checker(
701
+ clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
702
+ )
703
+ images.append(image_i)
704
+ has_nsfw_concept.append(has_nsfw_concept_i)
705
+ image = np.concatenate(images)
706
+ else:
707
+ has_nsfw_concept = None
708
+
709
+ if output_type == "pil":
710
+ image = self.numpy_to_pil(image)
711
+
712
+ if not return_dict:
713
+ return (image, has_nsfw_concept)
714
+
715
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
716
+
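+ # Usage sketch (comments only): one way this ONNX long-prompt-weighting pipeline could be
+ # loaded and called; the checkpoint id, revision, provider and `custom_pipeline` name are
+ # assumptions rather than values defined in this file.
+ #
+ #   from diffusers import DiffusionPipeline
+ #   pipe = DiffusionPipeline.from_pretrained(
+ #       "CompVis/stable-diffusion-v1-4",
+ #       revision="onnx",
+ #       provider="CPUExecutionProvider",
+ #       custom_pipeline="lpw_stable_diffusion_onnx",
+ #   )
+ #   image = pipe(
+ #       prompt="a photo of an (astronaut:1.2) riding a horse",
+ #       negative_prompt="(low quality, blurry:1.4)",
+ #   ).images[0]
+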
717
+ def text2img(
718
+ self,
719
+ prompt: Union[str, List[str]],
720
+ negative_prompt: Optional[Union[str, List[str]]] = None,
721
+ height: int = 512,
722
+ width: int = 512,
723
+ num_inference_steps: int = 50,
724
+ guidance_scale: float = 7.5,
725
+ num_images_per_prompt: Optional[int] = 1,
726
+ eta: float = 0.0,
727
+ generator: Optional[np.random.RandomState] = None,
728
+ latents: Optional[np.ndarray] = None,
729
+ max_embeddings_multiples: Optional[int] = 3,
730
+ output_type: Optional[str] = "pil",
731
+ return_dict: bool = True,
732
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
733
+ callback_steps: Optional[int] = 1,
734
+ **kwargs,
735
+ ):
736
+ r"""
737
+ Function for text-to-image generation.
738
+ Args:
739
+ prompt (`str` or `List[str]`):
740
+ The prompt or prompts to guide the image generation.
741
+ negative_prompt (`str` or `List[str]`, *optional*):
742
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
743
+ if `guidance_scale` is less than `1`).
744
+ height (`int`, *optional*, defaults to 512):
745
+ The height in pixels of the generated image.
746
+ width (`int`, *optional*, defaults to 512):
747
+ The width in pixels of the generated image.
748
+ num_inference_steps (`int`, *optional*, defaults to 50):
749
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
750
+ expense of slower inference.
751
+ guidance_scale (`float`, *optional*, defaults to 7.5):
752
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
753
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
754
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
755
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
756
+ usually at the expense of lower image quality.
757
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
758
+ The number of images to generate per prompt.
759
+ eta (`float`, *optional*, defaults to 0.0):
760
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
761
+ [`schedulers.DDIMScheduler`], will be ignored for others.
762
+ generator (`np.random.RandomState`, *optional*):
763
+ A np.random.RandomState to make generation deterministic.
764
+ latents (`np.ndarray`, *optional*):
765
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
766
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
767
+ tensor will be generated by sampling using the supplied random `generator`.
768
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
769
+ The maximum length of the prompt embeddings, expressed as a multiple of the text encoder's maximum output length.
770
+ output_type (`str`, *optional*, defaults to `"pil"`):
771
+ The output format of the generated image. Choose between
772
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
773
+ return_dict (`bool`, *optional*, defaults to `True`):
774
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
775
+ plain tuple.
776
+ callback (`Callable`, *optional*):
777
+ A function that will be called every `callback_steps` steps during inference. The function will be
778
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
779
+ callback_steps (`int`, *optional*, defaults to 1):
780
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
781
+ called at every step.
782
+ Returns:
783
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
784
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
785
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
786
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
787
+ (nsfw) content, according to the `safety_checker`.
788
+ """
789
+ return self.__call__(
790
+ prompt=prompt,
791
+ negative_prompt=negative_prompt,
792
+ height=height,
793
+ width=width,
794
+ num_inference_steps=num_inference_steps,
795
+ guidance_scale=guidance_scale,
796
+ num_images_per_prompt=num_images_per_prompt,
797
+ eta=eta,
798
+ generator=generator,
799
+ latents=latents,
800
+ max_embeddings_multiples=max_embeddings_multiples,
801
+ output_type=output_type,
802
+ return_dict=return_dict,
803
+ callback=callback,
804
+ callback_steps=callback_steps,
805
+ **kwargs,
806
+ )
807
+
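+ # Usage sketch for `text2img` (comments only), assuming a pipeline instance `pipe` of this
+ # class has been loaded as sketched above; the output filename is a placeholder.
+ #
+ #   result = pipe.text2img(
+ #       "a photo of an (astronaut:1.2) riding a horse on mars",
+ #       negative_prompt="(low quality, blurry:1.4)",
+ #       num_inference_steps=30,
+ #   )
+ #   result.images[0].save("astronaut.png")
+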
808
+ def img2img(
809
+ self,
810
+ init_image: Union[np.ndarray, PIL.Image.Image],
811
+ prompt: Union[str, List[str]],
812
+ negative_prompt: Optional[Union[str, List[str]]] = None,
813
+ strength: float = 0.8,
814
+ num_inference_steps: Optional[int] = 50,
815
+ guidance_scale: Optional[float] = 7.5,
816
+ num_images_per_prompt: Optional[int] = 1,
817
+ eta: Optional[float] = 0.0,
818
+ generator: Optional[np.random.RandomState] = None,
819
+ max_embeddings_multiples: Optional[int] = 3,
820
+ output_type: Optional[str] = "pil",
821
+ return_dict: bool = True,
822
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
823
+ callback_steps: Optional[int] = 1,
824
+ **kwargs,
825
+ ):
826
+ r"""
827
+ Function for image-to-image generation.
828
+ Args:
829
+ init_image (`np.ndarray` or `PIL.Image.Image`):
830
+ `Image`, or ndarray representing an image batch, that will be used as the starting point for the
831
+ process.
832
+ prompt (`str` or `List[str]`):
833
+ The prompt or prompts to guide the image generation.
834
+ negative_prompt (`str` or `List[str]`, *optional*):
835
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
836
+ if `guidance_scale` is less than `1`).
837
+ strength (`float`, *optional*, defaults to 0.8):
838
+ Conceptually, indicates how much to transform the reference `init_image`. Must be between 0 and 1.
839
+ `init_image` will be used as a starting point, adding more noise to it the larger the `strength`. The
840
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
841
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
842
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `init_image`.
843
+ num_inference_steps (`int`, *optional*, defaults to 50):
844
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
845
+ expense of slower inference. This parameter will be modulated by `strength`.
846
+ guidance_scale (`float`, *optional*, defaults to 7.5):
847
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
848
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
849
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
850
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
851
+ usually at the expense of lower image quality.
852
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
853
+ The number of images to generate per prompt.
854
+ eta (`float`, *optional*, defaults to 0.0):
855
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
856
+ [`schedulers.DDIMScheduler`], will be ignored for others.
857
+ generator (`np.random.RandomState`, *optional*):
858
+ A np.random.RandomState to make generation deterministic.
859
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
860
+ The maximum length of the prompt embeddings, expressed as a multiple of the text encoder's maximum output length.
861
+ output_type (`str`, *optional*, defaults to `"pil"`):
862
+ The output format of the generated image. Choose between
863
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
864
+ return_dict (`bool`, *optional*, defaults to `True`):
865
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
866
+ plain tuple.
867
+ callback (`Callable`, *optional*):
868
+ A function that will be called every `callback_steps` steps during inference. The function will be
869
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
870
+ callback_steps (`int`, *optional*, defaults to 1):
871
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
872
+ called at every step.
873
+ Returns:
874
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
875
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
876
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
877
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
878
+ (nsfw) content, according to the `safety_checker`.
879
+ """
880
+ return self.__call__(
881
+ prompt=prompt,
882
+ negative_prompt=negative_prompt,
883
+ init_image=init_image,
884
+ num_inference_steps=num_inference_steps,
885
+ guidance_scale=guidance_scale,
886
+ strength=strength,
887
+ num_images_per_prompt=num_images_per_prompt,
888
+ eta=eta,
889
+ generator=generator,
890
+ max_embeddings_multiples=max_embeddings_multiples,
891
+ output_type=output_type,
892
+ return_dict=return_dict,
893
+ callback=callback,
894
+ callback_steps=callback_steps,
895
+ **kwargs,
896
+ )
897
+
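+ # Usage sketch for `img2img` (comments only); `init.png` and the output path are
+ # placeholders, and `pipe` is assumed to be an instance of this class.
+ #
+ #   init = PIL.Image.open("init.png").convert("RGB").resize((512, 512))
+ #   result = pipe.img2img(
+ #       init_image=init,
+ #       prompt="an (oil painting:1.2) in the style of the input sketch",
+ #       strength=0.6,
+ #   )
+ #   result.images[0].save("painted.png")
+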
898
+ def inpaint(
899
+ self,
900
+ init_image: Union[np.ndarray, PIL.Image.Image],
901
+ mask_image: Union[np.ndarray, PIL.Image.Image],
902
+ prompt: Union[str, List[str]],
903
+ negative_prompt: Optional[Union[str, List[str]]] = None,
904
+ strength: float = 0.8,
905
+ num_inference_steps: Optional[int] = 50,
906
+ guidance_scale: Optional[float] = 7.5,
907
+ num_images_per_prompt: Optional[int] = 1,
908
+ eta: Optional[float] = 0.0,
909
+ generator: Optional[np.random.RandomState] = None,
910
+ max_embeddings_multiples: Optional[int] = 3,
911
+ output_type: Optional[str] = "pil",
912
+ return_dict: bool = True,
913
+ callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
914
+ callback_steps: Optional[int] = 1,
915
+ **kwargs,
916
+ ):
917
+ r"""
918
+ Function for inpainting.
919
+ Args:
920
+ init_image (`np.ndarray` or `PIL.Image.Image`):
921
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
922
+ process. This is the image whose masked region will be inpainted.
923
+ mask_image (`np.ndarray` or `PIL.Image.Image`):
924
+ `Image`, or tensor representing an image batch, to mask `init_image`. White pixels in the mask will be
925
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
926
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
927
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
928
+ prompt (`str` or `List[str]`):
929
+ The prompt or prompts to guide the image generation.
930
+ negative_prompt (`str` or `List[str]`, *optional*):
931
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
932
+ if `guidance_scale` is less than `1`).
933
+ strength (`float`, *optional*, defaults to 0.8):
934
+ Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
935
+ is 1, the denoising process will be run on the masked area for the full number of iterations specified
936
+ in `num_inference_steps`. `init_image` will be used as a reference for the masked area, adding more
937
+ noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
938
+ num_inference_steps (`int`, *optional*, defaults to 50):
939
+ The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
940
+ the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
941
+ guidance_scale (`float`, *optional*, defaults to 7.5):
942
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
943
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
944
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
945
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
946
+ usually at the expense of lower image quality.
947
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
948
+ The number of images to generate per prompt.
949
+ eta (`float`, *optional*, defaults to 0.0):
950
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
951
+ [`schedulers.DDIMScheduler`], will be ignored for others.
952
+ generator (`np.random.RandomState`, *optional*):
953
+ A np.random.RandomState to make generation deterministic.
954
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
955
+ The maximum length of the prompt embeddings, expressed as a multiple of the text encoder's maximum output length.
956
+ output_type (`str`, *optional*, defaults to `"pil"`):
957
+ The output format of the generate image. Choose between
958
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
959
+ return_dict (`bool`, *optional*, defaults to `True`):
960
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
961
+ plain tuple.
962
+ callback (`Callable`, *optional*):
963
+ A function that will be called every `callback_steps` steps during inference. The function will be
964
+ called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
965
+ callback_steps (`int`, *optional*, defaults to 1):
966
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
967
+ called at every step.
968
+ Returns:
969
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
970
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
971
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
972
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
973
+ (nsfw) content, according to the `safety_checker`.
974
+ """
975
+ return self.__call__(
976
+ prompt=prompt,
977
+ negative_prompt=negative_prompt,
978
+ init_image=init_image,
979
+ mask_image=mask_image,
980
+ num_inference_steps=num_inference_steps,
981
+ guidance_scale=guidance_scale,
982
+ strength=strength,
983
+ num_images_per_prompt=num_images_per_prompt,
984
+ eta=eta,
985
+ generator=generator,
986
+ max_embeddings_multiples=max_embeddings_multiples,
987
+ output_type=output_type,
988
+ return_dict=return_dict,
989
+ callback=callback,
990
+ callback_steps=callback_steps,
991
+ **kwargs,
992
+ )
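+
+ # Usage sketch for `inpaint` (comments only); file paths are placeholders and `pipe` is an
+ # assumed instance of this class. White areas of the mask are repainted, black areas kept.
+ #
+ #   init = PIL.Image.open("init.png").convert("RGB").resize((512, 512))
+ #   mask = PIL.Image.open("mask.png").convert("L").resize((512, 512))
+ #   result = pipe.inpaint(
+ #       init_image=init, mask_image=mask, prompt="a (red:1.3) vintage car", strength=0.75
+ #   )
+ #   result.images[0].save("inpainted.png")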
v0.7.0/one_step_unet.py ADDED
@@ -0,0 +1,22 @@
1
+ #!/usr/bin/env python3
2
+ import torch
3
+
4
+ from diffusers import DiffusionPipeline
5
+
6
+
7
+ class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
8
+ def __init__(self, unet, scheduler):
9
+ super().__init__()
10
+
11
+ self.register_modules(unet=unet, scheduler=scheduler)
12
+
13
+ def __call__(self):
14
+ image = torch.randn(
15
+ (1, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size),
16
+ )
17
+ timestep = 1
18
+
19
+ model_output = self.unet(image, timestep).sample
20
+ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample
21
+
22
+ return scheduler_output
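+
+ # Usage sketch (comments only): this dummy pipeline runs a single UNet forward pass and one
+ # scheduler step on random noise, returning the scheduler's `prev_sample` tensor rather than
+ # a decoded image; the checkpoint id and `custom_pipeline` name are assumptions.
+ #
+ #   from diffusers import DiffusionPipeline
+ #   pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
+ #   output = pipe()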
v0.7.0/seed_resize_stable_diffusion.py ADDED
@@ -0,0 +1,366 @@
1
+ """
2
+ modified from the Hugging Face diffusers library: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
3
+ """
4
+ import inspect
5
+ from typing import Callable, List, Optional, Union
6
+
7
+ import torch
8
+
9
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
10
+ from diffusers.pipeline_utils import DiffusionPipeline
11
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
12
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
13
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
14
+ from diffusers.utils import logging
15
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
16
+
17
+
18
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
19
+
20
+
21
+ class SeedResizeStableDiffusionPipeline(DiffusionPipeline):
22
+ r"""
23
+ Pipeline for text-to-image generation using Stable Diffusion.
24
+
25
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
26
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
27
+
28
+ Args:
29
+ vae ([`AutoencoderKL`]):
30
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
31
+ text_encoder ([`CLIPTextModel`]):
32
+ Frozen text-encoder. Stable Diffusion uses the text portion of
33
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
34
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
35
+ tokenizer (`CLIPTokenizer`):
36
+ Tokenizer of class
37
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
38
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
39
+ scheduler ([`SchedulerMixin`]):
40
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
41
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
42
+ safety_checker ([`StableDiffusionSafetyChecker`]):
43
+ Classification module that estimates whether generated images could be considered offensive or harmful.
44
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
45
+ feature_extractor ([`CLIPFeatureExtractor`]):
46
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
47
+ """
48
+
49
+ def __init__(
50
+ self,
51
+ vae: AutoencoderKL,
52
+ text_encoder: CLIPTextModel,
53
+ tokenizer: CLIPTokenizer,
54
+ unet: UNet2DConditionModel,
55
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
56
+ safety_checker: StableDiffusionSafetyChecker,
57
+ feature_extractor: CLIPFeatureExtractor,
58
+ ):
59
+ super().__init__()
60
+ self.register_modules(
61
+ vae=vae,
62
+ text_encoder=text_encoder,
63
+ tokenizer=tokenizer,
64
+ unet=unet,
65
+ scheduler=scheduler,
66
+ safety_checker=safety_checker,
67
+ feature_extractor=feature_extractor,
68
+ )
69
+
70
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
71
+ r"""
72
+ Enable sliced attention computation.
73
+
74
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
75
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
76
+
77
+ Args:
78
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
79
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
80
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
81
+ `attention_head_dim` must be a multiple of `slice_size`.
82
+ """
83
+ if slice_size == "auto":
84
+ # half the attention head size is usually a good trade-off between
85
+ # speed and memory
86
+ slice_size = self.unet.config.attention_head_dim // 2
87
+ self.unet.set_attention_slice(slice_size)
88
+
89
+ def disable_attention_slicing(self):
90
+ r"""
91
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
92
+ back to computing attention in one step.
93
+ """
94
+ # set slice_size = `None` to disable `attention slicing`
95
+ self.enable_attention_slicing(None)
96
+
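+ # Example (comments only): attention slicing is toggled on an instantiated pipeline to trade
+ # a small amount of speed for lower memory usage.
+ #
+ #   pipe.enable_attention_slicing()    # "auto": slices of half the attention head dim
+ #   pipe.enable_attention_slicing(1)   # most memory-frugal setting
+ #   pipe.disable_attention_slicing()
+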
97
+ @torch.no_grad()
98
+ def __call__(
99
+ self,
100
+ prompt: Union[str, List[str]],
101
+ height: int = 512,
102
+ width: int = 512,
103
+ num_inference_steps: int = 50,
104
+ guidance_scale: float = 7.5,
105
+ negative_prompt: Optional[Union[str, List[str]]] = None,
106
+ num_images_per_prompt: Optional[int] = 1,
107
+ eta: float = 0.0,
108
+ generator: Optional[torch.Generator] = None,
109
+ latents: Optional[torch.FloatTensor] = None,
110
+ output_type: Optional[str] = "pil",
111
+ return_dict: bool = True,
112
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
113
+ callback_steps: Optional[int] = 1,
114
+ text_embeddings: Optional[torch.FloatTensor] = None,
115
+ **kwargs,
116
+ ):
117
+ r"""
118
+ Function invoked when calling the pipeline for generation.
119
+
120
+ Args:
121
+ prompt (`str` or `List[str]`):
122
+ The prompt or prompts to guide the image generation.
123
+ height (`int`, *optional*, defaults to 512):
124
+ The height in pixels of the generated image.
125
+ width (`int`, *optional*, defaults to 512):
126
+ The width in pixels of the generated image.
127
+ num_inference_steps (`int`, *optional*, defaults to 50):
128
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
129
+ expense of slower inference.
130
+ guidance_scale (`float`, *optional*, defaults to 7.5):
131
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
132
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
133
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
134
+ 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
135
+ usually at the expense of lower image quality.
136
+ negative_prompt (`str` or `List[str]`, *optional*):
137
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
138
+ if `guidance_scale` is less than `1`).
139
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
140
+ The number of images to generate per prompt.
141
+ eta (`float`, *optional*, defaults to 0.0):
142
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
143
+ [`schedulers.DDIMScheduler`], will be ignored for others.
144
+ generator (`torch.Generator`, *optional*):
145
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
146
+ deterministic.
147
+ latents (`torch.FloatTensor`, *optional*):
148
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
149
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
150
+ tensor will be generated by sampling using the supplied random `generator`.
151
+ output_type (`str`, *optional*, defaults to `"pil"`):
152
+ The output format of the generated image. Choose between
153
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
154
+ return_dict (`bool`, *optional*, defaults to `True`):
155
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
156
+ plain tuple.
157
+ callback (`Callable`, *optional*):
158
+ A function that will be called every `callback_steps` steps during inference. The function will be
159
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
160
+ callback_steps (`int`, *optional*, defaults to 1):
161
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
162
+ called at every step.
163
+
164
+ Returns:
165
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
166
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
167
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
168
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
169
+ (nsfw) content, according to the `safety_checker`.
170
+ """
171
+
172
+ if isinstance(prompt, str):
173
+ batch_size = 1
174
+ elif isinstance(prompt, list):
175
+ batch_size = len(prompt)
176
+ else:
177
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
178
+
179
+ if height % 8 != 0 or width % 8 != 0:
180
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
181
+
182
+ if (callback_steps is None) or (
183
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
184
+ ):
185
+ raise ValueError(
186
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
187
+ f" {type(callback_steps)}."
188
+ )
189
+
190
+ # get prompt text embeddings
191
+ text_inputs = self.tokenizer(
192
+ prompt,
193
+ padding="max_length",
194
+ max_length=self.tokenizer.model_max_length,
195
+ return_tensors="pt",
196
+ )
197
+ text_input_ids = text_inputs.input_ids
198
+
199
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
200
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
201
+ logger.warning(
202
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
203
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
204
+ )
205
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
206
+
207
+ if text_embeddings is None:
208
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
209
+
210
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
211
+ bs_embed, seq_len, _ = text_embeddings.shape
212
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
213
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
214
+
215
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
216
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
217
+ # corresponds to doing no classifier free guidance.
218
+ do_classifier_free_guidance = guidance_scale > 1.0
219
+ # get unconditional embeddings for classifier free guidance
220
+ if do_classifier_free_guidance:
221
+ uncond_tokens: List[str]
222
+ if negative_prompt is None:
223
+ uncond_tokens = [""]
224
+ elif type(prompt) is not type(negative_prompt):
225
+ raise TypeError(
226
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
227
+ f" {type(prompt)}."
228
+ )
229
+ elif isinstance(negative_prompt, str):
230
+ uncond_tokens = [negative_prompt]
231
+ elif batch_size != len(negative_prompt):
232
+ raise ValueError(
233
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
234
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
235
+ " the batch size of `prompt`."
236
+ )
237
+ else:
238
+ uncond_tokens = negative_prompt
239
+
240
+ max_length = text_input_ids.shape[-1]
241
+ uncond_input = self.tokenizer(
242
+ uncond_tokens,
243
+ padding="max_length",
244
+ max_length=max_length,
245
+ truncation=True,
246
+ return_tensors="pt",
247
+ )
248
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
249
+
250
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
251
+ seq_len = uncond_embeddings.shape[1]
252
+ uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
253
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
254
+
255
+ # For classifier free guidance, we need to do two forward passes.
256
+ # Here we concatenate the unconditional and text embeddings into a single batch
257
+ # to avoid doing two forward passes
258
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
259
+
260
+ # get the initial random noise unless the user supplied it
261
+
262
+ # Unlike in other pipelines, latents need to be generated in the target device
263
+ # for 1-to-1 results reproducibility with the CompVis implementation.
264
+ # However this currently doesn't work in `mps`.
265
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
266
+ latents_shape_reference = (batch_size * num_images_per_prompt, self.unet.in_channels, 64, 64)
267
+ latents_dtype = text_embeddings.dtype
268
+ if latents is None:
269
+ if self.device.type == "mps":
270
+ # randn does not exist on mps
271
+ latents_reference = torch.randn(
272
+ latents_shape_reference, generator=generator, device="cpu", dtype=latents_dtype
273
+ ).to(self.device)
274
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
275
+ self.device
276
+ )
277
+ else:
278
+ latents_reference = torch.randn(
279
+ latents_shape_reference, generator=generator, device=self.device, dtype=latents_dtype
280
+ )
281
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
282
+ else:
283
+ if latents.shape != latents_shape:
284
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
285
+ latents_reference = latents.to(self.device)  # reuse the user-supplied latents as the reference latents
286
+ latents = latents.to(self.device)
287
+
288
+ # This is the key part of the pipeline where we
289
+ # try to ensure that the generated images w/ the same seed
290
+ # but different sizes actually result in similar images
291
+ dx = (latents_shape[3] - latents_shape_reference[3]) // 2
292
+ dy = (latents_shape[2] - latents_shape_reference[2]) // 2
293
+ w = latents_shape_reference[3] if dx >= 0 else latents_shape_reference[3] + 2 * dx
294
+ h = latents_shape_reference[2] if dy >= 0 else latents_shape_reference[2] + 2 * dy
295
+ tx = 0 if dx < 0 else dx
296
+ ty = 0 if dy < 0 else dy
297
+ dx = max(-dx, 0)
298
+ dy = max(-dy, 0)
299
+ # import pdb
300
+ # pdb.set_trace()
301
+ latents[:, :, ty : ty + h, tx : tx + w] = latents_reference[:, :, dy : dy + h, dx : dx + w]
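+ # Worked example (comments only): with height=768, width=512 the latent grid is 96x64 and the
+ # reference grid is 64x64, so dx = 0, dy = 16, w = h = 64, tx = 0, ty = 16, and the line above
+ # copies latents_reference[:, :, 0:64, 0:64] into latents[:, :, 16:80, 0:64], i.e. the 64x64
+ # seed noise is centered vertically inside the taller latent canvas.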
302
+
303
+ # set timesteps
304
+ self.scheduler.set_timesteps(num_inference_steps)
305
+
306
+ # Some schedulers like PNDM have timesteps as arrays
307
+ # It's more optimized to move all timesteps to correct device beforehand
308
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
309
+
310
+ # scale the initial noise by the standard deviation required by the scheduler
311
+ latents = latents * self.scheduler.init_noise_sigma
312
+
313
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
314
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
315
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
316
+ # and should be between [0, 1]
317
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
318
+ extra_step_kwargs = {}
319
+ if accepts_eta:
320
+ extra_step_kwargs["eta"] = eta
321
+
322
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
323
+ # expand the latents if we are doing classifier free guidance
324
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
325
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
326
+
327
+ # predict the noise residual
328
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
329
+
330
+ # perform guidance
331
+ if do_classifier_free_guidance:
332
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
333
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
334
+
335
+ # compute the previous noisy sample x_t -> x_t-1
336
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
337
+
338
+ # call the callback, if provided
339
+ if callback is not None and i % callback_steps == 0:
340
+ callback(i, t, latents)
341
+
342
+ latents = 1 / 0.18215 * latents
343
+ image = self.vae.decode(latents).sample
344
+
345
+ image = (image / 2 + 0.5).clamp(0, 1)
346
+
347
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
348
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
349
+
350
+ if self.safety_checker is not None:
351
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
352
+ self.device
353
+ )
354
+ image, has_nsfw_concept = self.safety_checker(
355
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
356
+ )
357
+ else:
358
+ has_nsfw_concept = None
359
+
360
+ if output_type == "pil":
361
+ image = self.numpy_to_pil(image)
362
+
363
+ if not return_dict:
364
+ return (image, has_nsfw_concept)
365
+
366
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
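+
+ # Usage sketch (comments only): render the same seed at two sizes with this pipeline; the
+ # checkpoint id is an assumption, while the `custom_pipeline` name matches this file's name.
+ #
+ #   import torch
+ #   from diffusers import DiffusionPipeline
+ #   pipe = DiffusionPipeline.from_pretrained(
+ #       "CompVis/stable-diffusion-v1-4", custom_pipeline="seed_resize_stable_diffusion"
+ #   ).to("cuda")
+ #   generator = torch.Generator("cuda").manual_seed(0)
+ #   square = pipe("a red fox in the snow", height=512, width=512, generator=generator).images[0]
+ #   generator = torch.Generator("cuda").manual_seed(0)
+ #   tall = pipe("a red fox in the snow", height=768, width=512, generator=generator).images[0]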
v0.7.0/speech_to_image_diffusion.py ADDED
@@ -0,0 +1,261 @@
1
+ import inspect
2
+ from typing import Callable, List, Optional, Union
3
+
4
+ import torch
5
+
6
+ from diffusers import (
7
+ AutoencoderKL,
8
+ DDIMScheduler,
9
+ DiffusionPipeline,
10
+ LMSDiscreteScheduler,
11
+ PNDMScheduler,
12
+ UNet2DConditionModel,
13
+ )
14
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
15
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
16
+ from diffusers.utils import logging
17
+ from transformers import (
18
+ CLIPFeatureExtractor,
19
+ CLIPTextModel,
20
+ CLIPTokenizer,
21
+ WhisperForConditionalGeneration,
22
+ WhisperProcessor,
23
+ )
24
+
25
+
26
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
27
+
28
+
29
+ class SpeechToImagePipeline(DiffusionPipeline):
30
+ def __init__(
31
+ self,
32
+ speech_model: WhisperForConditionalGeneration,
33
+ speech_processor: WhisperProcessor,
34
+ vae: AutoencoderKL,
35
+ text_encoder: CLIPTextModel,
36
+ tokenizer: CLIPTokenizer,
37
+ unet: UNet2DConditionModel,
38
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
39
+ safety_checker: StableDiffusionSafetyChecker,
40
+ feature_extractor: CLIPFeatureExtractor,
41
+ ):
42
+ super().__init__()
43
+
44
+ if safety_checker is None:
45
+ logger.warn(
46
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
47
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
48
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
49
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
50
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
51
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
52
+ )
53
+
54
+ self.register_modules(
55
+ speech_model=speech_model,
56
+ speech_processor=speech_processor,
57
+ vae=vae,
58
+ text_encoder=text_encoder,
59
+ tokenizer=tokenizer,
60
+ unet=unet,
61
+ scheduler=scheduler,
62
+ feature_extractor=feature_extractor,
63
+ )
64
+
65
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
66
+ if slice_size == "auto":
67
+ slice_size = self.unet.config.attention_head_dim // 2
68
+ self.unet.set_attention_slice(slice_size)
69
+
70
+ def disable_attention_slicing(self):
71
+ self.enable_attention_slicing(None)
72
+
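+ # Usage sketch (comments only): assemble the pipeline from a Whisper checkpoint plus Stable
+ # Diffusion components and feed it a 16 kHz audio array; the model ids, dataset and
+ # `custom_pipeline` name are assumptions.
+ #
+ #   from datasets import load_dataset
+ #   from diffusers import DiffusionPipeline
+ #   from transformers import WhisperForConditionalGeneration, WhisperProcessor
+ #
+ #   audio_sample = load_dataset(
+ #       "hf-internal-testing/librispeech_asr_dummy", "clean", split="validation"
+ #   )[3]["audio"]
+ #   pipe = DiffusionPipeline.from_pretrained(
+ #       "CompVis/stable-diffusion-v1-4",
+ #       custom_pipeline="speech_to_image_diffusion",
+ #       speech_model=WhisperForConditionalGeneration.from_pretrained("openai/whisper-small"),
+ #       speech_processor=WhisperProcessor.from_pretrained("openai/whisper-small"),
+ #   ).to("cuda")
+ #   image = pipe(audio_sample["array"], sampling_rate=audio_sample["sampling_rate"]).images[0]
+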
73
+ @torch.no_grad()
74
+ def __call__(
75
+ self,
76
+ audio,
77
+ sampling_rate=16_000,
78
+ height: int = 512,
79
+ width: int = 512,
80
+ num_inference_steps: int = 50,
81
+ guidance_scale: float = 7.5,
82
+ negative_prompt: Optional[Union[str, List[str]]] = None,
83
+ num_images_per_prompt: Optional[int] = 1,
84
+ eta: float = 0.0,
85
+ generator: Optional[torch.Generator] = None,
86
+ latents: Optional[torch.FloatTensor] = None,
87
+ output_type: Optional[str] = "pil",
88
+ return_dict: bool = True,
89
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
90
+ callback_steps: Optional[int] = 1,
91
+ **kwargs,
92
+ ):
93
+ inputs = self.speech_processor.feature_extractor(
94
+ audio, return_tensors="pt", sampling_rate=sampling_rate
95
+ ).input_features.to(self.device)
96
+ predicted_ids = self.speech_model.generate(inputs, max_length=480_000)
97
+
98
+ prompt = self.speech_processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True, normalize=True)[
99
+ 0
100
+ ]
101
+
102
+ if isinstance(prompt, str):
103
+ batch_size = 1
104
+ elif isinstance(prompt, list):
105
+ batch_size = len(prompt)
106
+ else:
107
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
108
+
109
+ if height % 8 != 0 or width % 8 != 0:
110
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
111
+
112
+ if (callback_steps is None) or (
113
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
114
+ ):
115
+ raise ValueError(
116
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
117
+ f" {type(callback_steps)}."
118
+ )
119
+
120
+ # get prompt text embeddings
121
+ text_inputs = self.tokenizer(
122
+ prompt,
123
+ padding="max_length",
124
+ max_length=self.tokenizer.model_max_length,
125
+ return_tensors="pt",
126
+ )
127
+ text_input_ids = text_inputs.input_ids
128
+
129
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
130
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
131
+ logger.warning(
132
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
133
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
134
+ )
135
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
136
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
137
+
138
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
139
+ bs_embed, seq_len, _ = text_embeddings.shape
140
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
141
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
142
+
143
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
144
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
145
+ # corresponds to doing no classifier free guidance.
146
+ do_classifier_free_guidance = guidance_scale > 1.0
147
+ # get unconditional embeddings for classifier free guidance
148
+ if do_classifier_free_guidance:
149
+ uncond_tokens: List[str]
150
+ if negative_prompt is None:
151
+ uncond_tokens = [""] * batch_size
152
+ elif type(prompt) is not type(negative_prompt):
153
+ raise TypeError(
154
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
155
+ f" {type(prompt)}."
156
+ )
157
+ elif isinstance(negative_prompt, str):
158
+ uncond_tokens = [negative_prompt]
159
+ elif batch_size != len(negative_prompt):
160
+ raise ValueError(
161
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
162
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
163
+ " the batch size of `prompt`."
164
+ )
165
+ else:
166
+ uncond_tokens = negative_prompt
167
+
168
+ max_length = text_input_ids.shape[-1]
169
+ uncond_input = self.tokenizer(
170
+ uncond_tokens,
171
+ padding="max_length",
172
+ max_length=max_length,
173
+ truncation=True,
174
+ return_tensors="pt",
175
+ )
176
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
177
+
178
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
179
+ seq_len = uncond_embeddings.shape[1]
180
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
181
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
182
+
183
+ # For classifier free guidance, we need to do two forward passes.
184
+ # Here we concatenate the unconditional and text embeddings into a single batch
185
+ # to avoid doing two forward passes
186
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
187
+
188
+ # get the initial random noise unless the user supplied it
189
+
190
+ # Unlike in other pipelines, latents need to be generated in the target device
191
+ # for 1-to-1 results reproducibility with the CompVis implementation.
192
+ # However this currently doesn't work in `mps`.
193
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
194
+ latents_dtype = text_embeddings.dtype
195
+ if latents is None:
196
+ if self.device.type == "mps":
197
+ # randn does not exist on mps
198
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
199
+ self.device
200
+ )
201
+ else:
202
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
203
+ else:
204
+ if latents.shape != latents_shape:
205
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
206
+ latents = latents.to(self.device)
207
+
208
+ # set timesteps
209
+ self.scheduler.set_timesteps(num_inference_steps)
210
+
211
+ # Some schedulers like PNDM have timesteps as arrays
212
+ # It's more optimized to move all timesteps to correct device beforehand
213
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
214
+
215
+ # scale the initial noise by the standard deviation required by the scheduler
216
+ latents = latents * self.scheduler.init_noise_sigma
217
+
218
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
219
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
220
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
221
+ # and should be between [0, 1]
222
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
223
+ extra_step_kwargs = {}
224
+ if accepts_eta:
225
+ extra_step_kwargs["eta"] = eta
226
+
227
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
228
+ # expand the latents if we are doing classifier free guidance
229
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
230
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
231
+
232
+ # predict the noise residual
233
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
234
+
235
+ # perform guidance
236
+ if do_classifier_free_guidance:
237
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
238
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
239
+
240
+ # compute the previous noisy sample x_t -> x_t-1
241
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
242
+
243
+ # call the callback, if provided
244
+ if callback is not None and i % callback_steps == 0:
245
+ callback(i, t, latents)
246
+
247
+ latents = 1 / 0.18215 * latents
248
+ image = self.vae.decode(latents).sample
249
+
250
+ image = (image / 2 + 0.5).clamp(0, 1)
251
+
252
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
253
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
254
+
255
+ if output_type == "pil":
256
+ image = self.numpy_to_pil(image)
257
+
258
+ if not return_dict:
259
+ return image
260
+
261
+ return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
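
For reference, here is a minimal sketch of how the speech-to-image pipeline above can be loaded and called. The Whisper checkpoint, the dummy audio dataset, and the `custom_pipeline` identifier `speech_to_image_diffusion` are assumptions for illustration, not something fixed by the code in this file.

```python
# Sketch only: model names, dataset, and custom_pipeline id are assumptions.
import torch
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# any 16 kHz speech clip works; this dummy dataset is just convenient
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]

speech_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
speech_processor = WhisperProcessor.from_pretrained("openai/whisper-small")

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="speech_to_image_diffusion",
    speech_model=speech_model,
    speech_processor=speech_processor,
).to(device)

# the audio is transcribed internally and the transcription is used as the prompt
image = pipe(audio["array"], sampling_rate=audio["sampling_rate"]).images[0]
image.save("speech_to_image.png")
```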
v0.7.0/stable_diffusion_mega.py ADDED
@@ -0,0 +1,224 @@
1
+ from typing import Any, Callable, Dict, List, Optional, Union
2
+
3
+ import torch
4
+
5
+ import PIL.Image
6
+ from diffusers import (
7
+ AutoencoderKL,
8
+ DDIMScheduler,
9
+ DiffusionPipeline,
10
+ LMSDiscreteScheduler,
11
+ PNDMScheduler,
12
+ StableDiffusionImg2ImgPipeline,
13
+ StableDiffusionInpaintPipelineLegacy,
14
+ StableDiffusionPipeline,
15
+ UNet2DConditionModel,
16
+ )
17
+ from diffusers.configuration_utils import FrozenDict
18
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
19
+ from diffusers.utils import deprecate, logging
20
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
21
+
22
+
23
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
24
+
25
+
26
+ class StableDiffusionMegaPipeline(DiffusionPipeline):
27
+ r"""
28
+ Pipeline for text-to-image generation using Stable Diffusion.
29
+
30
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
31
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
32
+
33
+ Args:
34
+ vae ([`AutoencoderKL`]):
35
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
36
+ text_encoder ([`CLIPTextModel`]):
37
+ Frozen text-encoder. Stable Diffusion uses the text portion of
38
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
39
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
40
+ tokenizer (`CLIPTokenizer`):
41
+ Tokenizer of class
42
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
43
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
44
+ scheduler ([`SchedulerMixin`]):
45
+ A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of
46
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
47
+ safety_checker ([`StableDiffusionSafetyChecker`]):
48
+ Classification module that estimates whether generated images could be considered offensive or harmful.
49
+ Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
50
+ feature_extractor ([`CLIPFeatureExtractor`]):
51
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
52
+ """
53
+
54
+ def __init__(
55
+ self,
56
+ vae: AutoencoderKL,
57
+ text_encoder: CLIPTextModel,
58
+ tokenizer: CLIPTokenizer,
59
+ unet: UNet2DConditionModel,
60
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
61
+ safety_checker: StableDiffusionSafetyChecker,
62
+ feature_extractor: CLIPFeatureExtractor,
63
+ ):
64
+ super().__init__()
65
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
66
+ deprecation_message = (
67
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
68
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
69
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
70
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
71
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
72
+ " file"
73
+ )
74
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
75
+ new_config = dict(scheduler.config)
76
+ new_config["steps_offset"] = 1
77
+ scheduler._internal_dict = FrozenDict(new_config)
78
+
79
+ self.register_modules(
80
+ vae=vae,
81
+ text_encoder=text_encoder,
82
+ tokenizer=tokenizer,
83
+ unet=unet,
84
+ scheduler=scheduler,
85
+ safety_checker=safety_checker,
86
+ feature_extractor=feature_extractor,
87
+ )
88
+
89
+ @property
90
+ def components(self) -> Dict[str, Any]:
91
+ return {k: getattr(self, k) for k in self.config.keys() if not k.startswith("_")}
92
+
93
+ def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
94
+ r"""
95
+ Enable sliced attention computation.
96
+
97
+ When this option is enabled, the attention module will split the input tensor in slices, to compute attention
98
+ in several steps. This is useful to save some memory in exchange for a small speed decrease.
99
+
100
+ Args:
101
+ slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
102
+ When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
103
+ a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
104
+ `attention_head_dim` must be a multiple of `slice_size`.
105
+ """
106
+ if slice_size == "auto":
107
+ # half the attention head size is usually a good trade-off between
108
+ # speed and memory
109
+ slice_size = self.unet.config.attention_head_dim // 2
110
+ self.unet.set_attention_slice(slice_size)
111
+
112
+ def disable_attention_slicing(self):
113
+ r"""
114
+ Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
115
+ back to computing attention in one step.
116
+ """
117
+ # set slice_size = `None` to disable `attention slicing`
118
+ self.enable_attention_slicing(None)
119
+
120
+ @torch.no_grad()
121
+ def inpaint(
122
+ self,
123
+ prompt: Union[str, List[str]],
124
+ init_image: Union[torch.FloatTensor, PIL.Image.Image],
125
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image],
126
+ strength: float = 0.8,
127
+ num_inference_steps: Optional[int] = 50,
128
+ guidance_scale: Optional[float] = 7.5,
129
+ negative_prompt: Optional[Union[str, List[str]]] = None,
130
+ num_images_per_prompt: Optional[int] = 1,
131
+ eta: Optional[float] = 0.0,
132
+ generator: Optional[torch.Generator] = None,
133
+ output_type: Optional[str] = "pil",
134
+ return_dict: bool = True,
135
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
136
+ callback_steps: Optional[int] = 1,
137
+ ):
138
+ # This delegates to StableDiffusionInpaintPipelineLegacy; for more information on how it works, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion
139
+ return StableDiffusionInpaintPipelineLegacy(**self.components)(
140
+ prompt=prompt,
141
+ init_image=init_image,
142
+ mask_image=mask_image,
143
+ strength=strength,
144
+ num_inference_steps=num_inference_steps,
145
+ guidance_scale=guidance_scale,
146
+ negative_prompt=negative_prompt,
147
+ num_images_per_prompt=num_images_per_prompt,
148
+ eta=eta,
149
+ generator=generator,
150
+ output_type=output_type,
151
+ return_dict=return_dict,
152
+ callback=callback,
153
+ )
154
+
155
+ @torch.no_grad()
156
+ def img2img(
157
+ self,
158
+ prompt: Union[str, List[str]],
159
+ init_image: Union[torch.FloatTensor, PIL.Image.Image],
160
+ strength: float = 0.8,
161
+ num_inference_steps: Optional[int] = 50,
162
+ guidance_scale: Optional[float] = 7.5,
163
+ negative_prompt: Optional[Union[str, List[str]]] = None,
164
+ num_images_per_prompt: Optional[int] = 1,
165
+ eta: Optional[float] = 0.0,
166
+ generator: Optional[torch.Generator] = None,
167
+ output_type: Optional[str] = "pil",
168
+ return_dict: bool = True,
169
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
170
+ callback_steps: Optional[int] = 1,
171
+ **kwargs,
172
+ ):
173
+ # For more information on how this function works, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion#diffusers.StableDiffusionImg2ImgPipeline
174
+ return StableDiffusionImg2ImgPipeline(**self.components)(
175
+ prompt=prompt,
176
+ init_image=init_image,
177
+ strength=strength,
178
+ num_inference_steps=num_inference_steps,
179
+ guidance_scale=guidance_scale,
180
+ negative_prompt=negative_prompt,
181
+ num_images_per_prompt=num_images_per_prompt,
182
+ eta=eta,
183
+ generator=generator,
184
+ output_type=output_type,
185
+ return_dict=return_dict,
186
+ callback=callback,
187
+ callback_steps=callback_steps,
188
+ )
189
+
190
+ @torch.no_grad()
191
+ def text2img(
192
+ self,
193
+ prompt: Union[str, List[str]],
194
+ height: int = 512,
195
+ width: int = 512,
196
+ num_inference_steps: int = 50,
197
+ guidance_scale: float = 7.5,
198
+ negative_prompt: Optional[Union[str, List[str]]] = None,
199
+ num_images_per_prompt: Optional[int] = 1,
200
+ eta: float = 0.0,
201
+ generator: Optional[torch.Generator] = None,
202
+ latents: Optional[torch.FloatTensor] = None,
203
+ output_type: Optional[str] = "pil",
204
+ return_dict: bool = True,
205
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
206
+ callback_steps: Optional[int] = 1,
207
+ ):
208
+ # For more information on how this function works, please see: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion#diffusers.StableDiffusionPipeline
209
+ return StableDiffusionPipeline(**self.components)(
210
+ prompt=prompt,
211
+ height=height,
212
+ width=width,
213
+ num_inference_steps=num_inference_steps,
214
+ guidance_scale=guidance_scale,
215
+ negative_prompt=negative_prompt,
216
+ num_images_per_prompt=num_images_per_prompt,
217
+ eta=eta,
218
+ generator=generator,
219
+ latents=latents,
220
+ output_type=output_type,
221
+ return_dict=return_dict,
222
+ callback=callback,
223
+ callback_steps=callback_steps,
224
+ )
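
A minimal sketch of driving the mega pipeline above from a single object follows; the `custom_pipeline` identifier `stable_diffusion_mega` and the checkpoint name are assumptions for illustration.

```python
# Sketch only: checkpoint and custom_pipeline id are assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="stable_diffusion_mega",
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_attention_slicing()

# text2img shown here; img2img(prompt, init_image=...) and
# inpaint(prompt, init_image=..., mask_image=...) are available on the same
# object and share the registered components
image = pipe.text2img("An astronaut riding a horse").images[0]
image.save("astronaut.png")
```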
v0.7.0/wildcard_stable_diffusion.py ADDED
@@ -0,0 +1,418 @@
1
+ import inspect
2
+ import os
3
+ import random
4
+ import re
5
+ from dataclasses import dataclass
6
+ from typing import Callable, Dict, List, Optional, Union
7
+
8
+ import torch
9
+
10
+ from diffusers.configuration_utils import FrozenDict
11
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
12
+ from diffusers.pipeline_utils import DiffusionPipeline
13
+ from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
14
+ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
15
+ from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
16
+ from diffusers.utils import deprecate, logging
17
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
18
+
19
+
20
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
21
+
22
+ global_re_wildcard = re.compile(r"__([^_]*)__")
23
+
24
+
25
+ def get_filename(path: str):
26
+ # this doesn't work on Windows
27
+ return os.path.basename(path).split(".txt")[0]
28
+
29
+
30
+ def read_wildcard_values(path: str):
31
+ with open(path, encoding="utf8") as f:
32
+ return f.read().splitlines()
33
+
34
+
35
+ def grab_wildcard_values(wildcard_option_dict: Dict[str, List[str]] = {}, wildcard_files: List[str] = []):
36
+ for wildcard_file in wildcard_files:
37
+ filename = get_filename(wildcard_file)
38
+ read_values = read_wildcard_values(wildcard_file)
39
+ if filename not in wildcard_option_dict:
40
+ wildcard_option_dict[filename] = []
41
+ wildcard_option_dict[filename].extend(read_values)
42
+ return wildcard_option_dict
43
+
44
+
45
+ def replace_prompt_with_wildcards(
46
+ prompt: str, wildcard_option_dict: Dict[str, List[str]] = {}, wildcard_files: List[str] = []
47
+ ):
48
+ new_prompt = prompt
49
+
50
+ # get wildcard options
51
+ wildcard_option_dict = grab_wildcard_values(wildcard_option_dict, wildcard_files)
52
+
53
+ for m in global_re_wildcard.finditer(new_prompt):
54
+ wildcard_value = m.group()
55
+ replace_value = random.choice(wildcard_option_dict[wildcard_value.strip("__")])
56
+ new_prompt = new_prompt.replace(wildcard_value, replace_value, 1)
57
+
58
+ return new_prompt
59
+
60
+
61
+ @dataclass
62
+ class WildcardStableDiffusionOutput(StableDiffusionPipelineOutput):
63
+ prompts: List[str]
64
+
65
+
66
+ class WildcardStableDiffusionPipeline(DiffusionPipeline):
67
+ r"""
68
+ Example Usage:
69
+ pipe = WildcardStableDiffusionPipeline.from_pretrained(
70
+ "CompVis/stable-diffusion-v1-4",
71
+ revision="fp16",
72
+ torch_dtype=torch.float16,
73
+ )
74
+ prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
75
+ out = pipe(
76
+ prompt,
77
+ wildcard_option_dict={
78
+ "clothing":["hat", "shirt", "scarf", "beret"]
79
+ },
80
+ wildcard_files=["object.txt", "animal.txt"],
81
+ num_prompt_samples=1
82
+ )
83
+
84
+
85
+ Pipeline for text-to-image generation with wild cards using Stable Diffusion.
86
+
87
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
88
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
89
+
90
+ Args:
91
+ vae ([`AutoencoderKL`]):
92
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
93
+ text_encoder ([`CLIPTextModel`]):
94
+ Frozen text-encoder. Stable Diffusion uses the text portion of
95
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
96
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
97
+ tokenizer (`CLIPTokenizer`):
98
+ Tokenizer of class
99
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
100
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
101
+ scheduler ([`SchedulerMixin`]):
102
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
103
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
104
+ safety_checker ([`StableDiffusionSafetyChecker`]):
105
+ Classification module that estimates whether generated images could be considered offensive or harmful.
106
+ Please refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
107
+ feature_extractor ([`CLIPFeatureExtractor`]):
108
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
109
+ """
110
+
111
+ def __init__(
112
+ self,
113
+ vae: AutoencoderKL,
114
+ text_encoder: CLIPTextModel,
115
+ tokenizer: CLIPTokenizer,
116
+ unet: UNet2DConditionModel,
117
+ scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
118
+ safety_checker: StableDiffusionSafetyChecker,
119
+ feature_extractor: CLIPFeatureExtractor,
120
+ ):
121
+ super().__init__()
122
+
123
+ if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
124
+ deprecation_message = (
125
+ f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
126
+ f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
127
+ "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
128
+ " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
129
+ " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
130
+ " file"
131
+ )
132
+ deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
133
+ new_config = dict(scheduler.config)
134
+ new_config["steps_offset"] = 1
135
+ scheduler._internal_dict = FrozenDict(new_config)
136
+
137
+ if safety_checker is None:
138
+ logger.warning(
139
+ f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
140
+ " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
141
+ " results in services or applications open to the public. Both the diffusers team and Hugging Face"
142
+ " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
143
+ " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
144
+ " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
145
+ )
146
+
147
+ self.register_modules(
148
+ vae=vae,
149
+ text_encoder=text_encoder,
150
+ tokenizer=tokenizer,
151
+ unet=unet,
152
+ scheduler=scheduler,
153
+ safety_checker=safety_checker,
154
+ feature_extractor=feature_extractor,
155
+ )
156
+
157
+ @torch.no_grad()
158
+ def __call__(
159
+ self,
160
+ prompt: Union[str, List[str]],
161
+ height: int = 512,
162
+ width: int = 512,
163
+ num_inference_steps: int = 50,
164
+ guidance_scale: float = 7.5,
165
+ negative_prompt: Optional[Union[str, List[str]]] = None,
166
+ num_images_per_prompt: Optional[int] = 1,
167
+ eta: float = 0.0,
168
+ generator: Optional[torch.Generator] = None,
169
+ latents: Optional[torch.FloatTensor] = None,
170
+ output_type: Optional[str] = "pil",
171
+ return_dict: bool = True,
172
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
173
+ callback_steps: Optional[int] = 1,
174
+ wildcard_option_dict: Dict[str, List[str]] = {},
175
+ wildcard_files: List[str] = [],
176
+ num_prompt_samples: Optional[int] = 1,
177
+ **kwargs,
178
+ ):
179
+ r"""
180
+ Function invoked when calling the pipeline for generation.
181
+
182
+ Args:
183
+ prompt (`str` or `List[str]`):
184
+ The prompt or prompts to guide the image generation.
185
+ height (`int`, *optional*, defaults to 512):
186
+ The height in pixels of the generated image.
187
+ width (`int`, *optional*, defaults to 512):
188
+ The width in pixels of the generated image.
189
+ num_inference_steps (`int`, *optional*, defaults to 50):
190
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
191
+ expense of slower inference.
192
+ guidance_scale (`float`, *optional*, defaults to 7.5):
193
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
194
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
195
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
196
+ 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
197
+ usually at the expense of lower image quality.
198
+ negative_prompt (`str` or `List[str]`, *optional*):
199
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
200
+ if `guidance_scale` is less than `1`).
201
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
202
+ The number of images to generate per prompt.
203
+ eta (`float`, *optional*, defaults to 0.0):
204
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
205
+ [`schedulers.DDIMScheduler`], will be ignored for others.
206
+ generator (`torch.Generator`, *optional*):
207
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
208
+ deterministic.
209
+ latents (`torch.FloatTensor`, *optional*):
210
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
211
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
212
+ tensor will be generated by sampling using the supplied random `generator`.
213
+ output_type (`str`, *optional*, defaults to `"pil"`):
214
+ The output format of the generated image. Choose between
215
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
216
+ return_dict (`bool`, *optional*, defaults to `True`):
217
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
218
+ plain tuple.
219
+ callback (`Callable`, *optional*):
220
+ A function that will be called every `callback_steps` steps during inference. The function will be
221
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
222
+ callback_steps (`int`, *optional*, defaults to 1):
223
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
224
+ called at every step.
225
+ wildcard_option_dict (Dict[str, List[str]]):
226
+ Dict mapping a wildcard name to a list of possible replacements. For example, for the prompt "A __animal__ sitting on a chair", `wildcard_option_dict` can provide values for "animal" like this: {"animal": ["dog", "cat", "fox"]}.
227
+ wildcard_files (`List[str]`):
228
+ List of paths to txt files holding wildcard replacements, one per line. For example, for the prompt "A __animal__ sitting on a chair", ["animal.txt"] can be provided.
229
+ num_prompt_samples (`int`, *optional*, defaults to 1):
230
+ Number of times to sample wildcards for each prompt provided
231
+
232
+ Returns:
233
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
234
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
235
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
236
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
237
+ (nsfw) content, according to the `safety_checker`.
238
+ """
239
+
240
+ if isinstance(prompt, str):
241
+ prompt = [
242
+ replace_prompt_with_wildcards(prompt, wildcard_option_dict, wildcard_files)
243
+ for i in range(num_prompt_samples)
244
+ ]
245
+ batch_size = len(prompt)
246
+ elif isinstance(prompt, list):
247
+ prompt_list = []
248
+ for p in prompt:
249
+ for i in range(num_prompt_samples):
250
+ prompt_list.append(replace_prompt_with_wildcards(p, wildcard_option_dict, wildcard_files))
251
+ prompt = prompt_list
252
+ batch_size = len(prompt)
253
+ else:
254
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
255
+
256
+ if height % 8 != 0 or width % 8 != 0:
257
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
258
+
259
+ if (callback_steps is None) or (
260
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
261
+ ):
262
+ raise ValueError(
263
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
264
+ f" {type(callback_steps)}."
265
+ )
266
+
267
+ # get prompt text embeddings
268
+ text_inputs = self.tokenizer(
269
+ prompt,
270
+ padding="max_length",
271
+ max_length=self.tokenizer.model_max_length,
272
+ return_tensors="pt",
273
+ )
274
+ text_input_ids = text_inputs.input_ids
275
+
276
+ if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
277
+ removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
278
+ logger.warning(
279
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
280
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
281
+ )
282
+ text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
283
+ text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
284
+
285
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
286
+ bs_embed, seq_len, _ = text_embeddings.shape
287
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
288
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
289
+
290
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
291
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
292
+ # corresponds to doing no classifier free guidance.
293
+ do_classifier_free_guidance = guidance_scale > 1.0
294
+ # get unconditional embeddings for classifier free guidance
295
+ if do_classifier_free_guidance:
296
+ uncond_tokens: List[str]
297
+ if negative_prompt is None:
298
+ uncond_tokens = [""] * batch_size
299
+ elif type(prompt) is not type(negative_prompt):
300
+ raise TypeError(
301
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
302
+ f" {type(prompt)}."
303
+ )
304
+ elif isinstance(negative_prompt, str):
305
+ uncond_tokens = [negative_prompt]
306
+ elif batch_size != len(negative_prompt):
307
+ raise ValueError(
308
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
309
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
310
+ " the batch size of `prompt`."
311
+ )
312
+ else:
313
+ uncond_tokens = negative_prompt
314
+
315
+ max_length = text_input_ids.shape[-1]
316
+ uncond_input = self.tokenizer(
317
+ uncond_tokens,
318
+ padding="max_length",
319
+ max_length=max_length,
320
+ truncation=True,
321
+ return_tensors="pt",
322
+ )
323
+ uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
324
+
325
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
326
+ seq_len = uncond_embeddings.shape[1]
327
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
328
+ uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
329
+
330
+ # For classifier free guidance, we need to do two forward passes.
331
+ # Here we concatenate the unconditional and text embeddings into a single batch
332
+ # to avoid doing two forward passes
333
+ text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
334
+
335
+ # get the initial random noise unless the user supplied it
336
+
337
+ # Unlike in other pipelines, latents need to be generated in the target device
338
+ # for 1-to-1 results reproducibility with the CompVis implementation.
339
+ # However this currently doesn't work in `mps`.
340
+ latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
341
+ latents_dtype = text_embeddings.dtype
342
+ if latents is None:
343
+ if self.device.type == "mps":
344
+ # randn does not exist on mps
345
+ latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
346
+ self.device
347
+ )
348
+ else:
349
+ latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
350
+ else:
351
+ if latents.shape != latents_shape:
352
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
353
+ latents = latents.to(self.device)
354
+
355
+ # set timesteps
356
+ self.scheduler.set_timesteps(num_inference_steps)
357
+
358
+ # Some schedulers like PNDM have timesteps as arrays
359
+ # It's more optimized to move all timesteps to correct device beforehand
360
+ timesteps_tensor = self.scheduler.timesteps.to(self.device)
361
+
362
+ # scale the initial noise by the standard deviation required by the scheduler
363
+ latents = latents * self.scheduler.init_noise_sigma
364
+
365
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
366
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
367
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
368
+ # and should be between [0, 1]
369
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
370
+ extra_step_kwargs = {}
371
+ if accepts_eta:
372
+ extra_step_kwargs["eta"] = eta
373
+
374
+ for i, t in enumerate(self.progress_bar(timesteps_tensor)):
375
+ # expand the latents if we are doing classifier free guidance
376
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
377
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
378
+
379
+ # predict the noise residual
380
+ noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
381
+
382
+ # perform guidance
383
+ if do_classifier_free_guidance:
384
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
385
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
386
+
387
+ # compute the previous noisy sample x_t -> x_t-1
388
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
389
+
390
+ # call the callback, if provided
391
+ if callback is not None and i % callback_steps == 0:
392
+ callback(i, t, latents)
393
+
394
+ latents = 1 / 0.18215 * latents
395
+ image = self.vae.decode(latents).sample
396
+
397
+ image = (image / 2 + 0.5).clamp(0, 1)
398
+
399
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
400
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
401
+
402
+ if self.safety_checker is not None:
403
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
404
+ self.device
405
+ )
406
+ image, has_nsfw_concept = self.safety_checker(
407
+ images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
408
+ )
409
+ else:
410
+ has_nsfw_concept = None
411
+
412
+ if output_type == "pil":
413
+ image = self.numpy_to_pil(image)
414
+
415
+ if not return_dict:
416
+ return (image, has_nsfw_concept)
417
+
418
+ return WildcardStableDiffusionOutput(images=image, nsfw_content_detected=has_nsfw_concept, prompts=prompt)
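
Finally, a minimal sketch of calling the wildcard pipeline above; the `custom_pipeline` identifier `wildcard_stable_diffusion` and the local `animal.txt`/`object.txt` files (one replacement per line) are assumptions for illustration.

```python
# Sketch only: custom_pipeline id and the wildcard txt files are assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="wildcard_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
out = pipe(
    prompt,
    wildcard_option_dict={"clothing": ["hat", "shirt", "scarf", "beret"]},
    wildcard_files=["object.txt", "animal.txt"],
    num_prompt_samples=1,
)
out.images[0].save("wildcard.png")
print(out.prompts)  # the prompts actually used after wildcard substitution
```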