Diffusers Bot committed (verified) · Commit 6369af7 · Parent(s): f38a4cf

Upload folder using huggingface_hub

Files changed (2):
  1. main/README.md +112 -0
  2. main/pipeline_animatediff_ipex.py +1002 -0
main/README.md CHANGED
@@ -70,6 +70,7 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
 | Stable Diffusion XL IPEX Pipeline | Accelerate Stable Diffusion XL inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion XL on IPEX](#stable-diffusion-xl-on-ipex) | - | [Dan Li](https://github.com/ustcuna/) |
 | Stable Diffusion BoxDiff Pipeline | Training-free controlled generation with bounding boxes using [BoxDiff](https://github.com/showlab/BoxDiff) | [Stable Diffusion BoxDiff Pipeline](#stable-diffusion-boxdiff) | - | [Jingyang Zhang](https://github.com/zjysteven/) |
 | FRESCO V2V Pipeline | Implementation of [[CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation](https://arxiv.org/abs/2403.12962) | [FRESCO V2V Pipeline](#fresco) | - | [Yifan Zhou](https://github.com/SingleZombie) |
+ | AnimateDiff IPEX Pipeline | Accelerate AnimateDiff inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [AnimateDiff on IPEX](#animatediff-on-ipex) | - | [Dan Li](https://github.com/ustcuna/) |
 
 To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
 
@@ -4099,6 +4100,117 @@ output_frames[0].save(output_video_path, save_all=True,
  append_images=output_frames[1:], duration=100, loop=0)
 ```
 
+ ### AnimateDiff on IPEX
+
+ This diffusion pipeline aims to accelerate the inference of AnimateDiff on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
+
+ To use this pipeline, you need to:
+ 1. Install [IPEX](https://github.com/intel/intel-extension-for-pytorch)
+
+ **Note:** For each PyTorch release there is a corresponding release of IPEX; the mapping is shown below. Installing PyTorch/IPEX 2.3 is recommended for the best performance (a version-check sketch follows after these steps).
+
+ |PyTorch Version|IPEX Version|
+ |--|--|
+ |[v2.3.\*](https://github.com/pytorch/pytorch/tree/v2.3.0 "v2.3.0")|[v2.3.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v2.3.0+cpu)|
+ |[v1.13.\*](https://github.com/pytorch/pytorch/tree/v1.13.0 "v1.13.0")|[v1.13.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100+cpu)|
+
+ You can simply use pip to install the latest version of IPEX:
+ ```sh
+ python -m pip install intel_extension_for_pytorch
+ ```
+ **Note:** To install a specific version, run the following command:
+ ```sh
+ python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+ ```
+ 2. After pipeline initialization, call `prepare_for_ipex()` to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
+
+ ```python
+ pipe = AnimateDiffPipelineIpex.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
+ # For Float32
+ pipe.prepare_for_ipex(torch.float32, prompt="A girl smiling")
+ # For BFloat16
+ pipe.prepare_for_ipex(torch.bfloat16, prompt="A girl smiling")
+ ```
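
For reference, a quick way to confirm that the installed PyTorch and IPEX releases line up per the table above (a minimal sketch, assuming both packages are already installed) is to print their versions:

```python
import torch
import intel_extension_for_pytorch as ipex

# The major.minor versions should match, e.g. 2.3.x for PyTorch and 2.3.x+cpu for IPEX.
print("PyTorch:", torch.__version__)
print("IPEX:", ipex.__version__)
```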
+
+ Then you can use the IPEX pipeline in a similar way to the default AnimateDiff pipeline:
+ ```python
+ # For Float32
+ output = pipe(prompt="A girl smiling", guidance_scale=1.0, num_inference_steps=step)
+ # For BFloat16
+ with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
+     output = pipe(prompt="A girl smiling", guidance_scale=1.0, num_inference_steps=step)
+ ```
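
The call returns an `AnimateDiffPipelineOutput`; as shown in the example docstring of `pipeline_animatediff_ipex.py`, the generated frames from the `output` above can then be exported to a GIF:

```python
from diffusers.utils import export_to_gif

frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```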
+
+ The following code compares the performance of the original AnimateDiff pipeline with the IPEX-optimized pipeline.
+ By using this optimized pipeline, we can get about a 1.5-2.2x performance boost with BFloat16 on fifth-generation Intel Xeon CPUs, code-named Emerald Rapids.
+
+ ```python
+ import torch
+ from diffusers import MotionAdapter, AnimateDiffPipeline, EulerDiscreteScheduler
+ from huggingface_hub import hf_hub_download
+ from safetensors.torch import load_file
+ from pipeline_animatediff_ipex import AnimateDiffPipelineIpex
+ import time
+
+ device = "cpu"
+ dtype = torch.float32
+
+ prompt = "A girl smiling"
+ step = 8  # Options: [1, 2, 4, 8]
+ repo = "ByteDance/AnimateDiff-Lightning"
+ ckpt = f"animatediff_lightning_{step}step_diffusers.safetensors"
+ base = "emilianJR/epiCRealism"  # Choose your favorite base model.
+
+ adapter = MotionAdapter().to(device, dtype)
+ adapter.load_state_dict(load_file(hf_hub_download(repo, ckpt), device=device))
+
+ # Helper function for time evaluation
+ def elapsed_time(pipeline, nb_pass=3, num_inference_steps=1):
+     # warmup
+     for _ in range(2):
+         output = pipeline(prompt=prompt, guidance_scale=1.0, num_inference_steps=num_inference_steps)
+     # time evaluation
+     start = time.time()
+     for _ in range(nb_pass):
+         pipeline(prompt=prompt, guidance_scale=1.0, num_inference_steps=num_inference_steps)
+     end = time.time()
+     return (end - start) / nb_pass
+
+ ############## bf16 inference performance ###############
+
+ # 1. IPEX Pipeline initialization
+ pipe = AnimateDiffPipelineIpex.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
+ pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing", beta_schedule="linear")
+ pipe.prepare_for_ipex(torch.bfloat16, prompt=prompt)
+
+ # 2. Original Pipeline initialization
+ pipe2 = AnimateDiffPipeline.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
+ pipe2.scheduler = EulerDiscreteScheduler.from_config(pipe2.scheduler.config, timestep_spacing="trailing", beta_schedule="linear")
+
+ # 3. Compare performance between Original Pipeline and IPEX Pipeline
+ with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
+     latency = elapsed_time(pipe, num_inference_steps=step)
+     print("Latency of AnimateDiffPipelineIpex--bf16", latency, "s for total", step, "steps")
+     latency = elapsed_time(pipe2, num_inference_steps=step)
+     print("Latency of AnimateDiffPipeline--bf16", latency, "s for total", step, "steps")
+
+ ############## fp32 inference performance ###############
+
+ # 1. IPEX Pipeline initialization
+ pipe3 = AnimateDiffPipelineIpex.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
+ pipe3.scheduler = EulerDiscreteScheduler.from_config(pipe3.scheduler.config, timestep_spacing="trailing", beta_schedule="linear")
+ pipe3.prepare_for_ipex(torch.float32, prompt=prompt)
+
+ # 2. Original Pipeline initialization
+ pipe4 = AnimateDiffPipeline.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
+ pipe4.scheduler = EulerDiscreteScheduler.from_config(pipe4.scheduler.config, timestep_spacing="trailing", beta_schedule="linear")
+
+ # 3. Compare performance between Original Pipeline and IPEX Pipeline
+ latency = elapsed_time(pipe3, num_inference_steps=step)
+ print("Latency of AnimateDiffPipelineIpex--fp32", latency, "s for total", step, "steps")
+ latency = elapsed_time(pipe4, num_inference_steps=step)
+ print("Latency of AnimateDiffPipeline--fp32", latency, "s for total", step, "steps")
+ ```
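
Measured latencies on Xeon are sensitive to how many CPU cores the benchmark is allowed to use; if results vary between runs, the intra-op thread count can be pinned before the comparison (a minimal sketch, not part of the benchmark above, with the core count being an assumption to adjust per machine):

```python
import torch

# Pin intra-op parallelism, e.g. to the physical cores of one socket (adjust for your CPU).
torch.set_num_threads(32)
print("Using", torch.get_num_threads(), "intra-op threads")
```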
+
 # Perturbed-Attention Guidance
 
  [Project](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) / [arXiv](https://arxiv.org/abs/2403.17377) / [GitHub](https://github.com/KU-CVLAB/Perturbed-Attention-Guidance)
main/pipeline_animatediff_ipex.py ADDED
@@ -0,0 +1,1002 @@
1
+ # Copyright 2024 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import inspect
16
+ from typing import Any, Callable, Dict, List, Optional, Union
17
+
18
+ import intel_extension_for_pytorch as ipex
19
+ import torch
20
+ from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection
21
+
22
+ from diffusers.image_processor import PipelineImageInput
23
+ from diffusers.loaders import IPAdapterMixin, LoraLoaderMixin, TextualInversionLoaderMixin
24
+ from diffusers.models import AutoencoderKL, ImageProjection, UNet2DConditionModel, UNetMotionModel
25
+ from diffusers.models.lora import adjust_lora_scale_text_encoder
26
+ from diffusers.models.unets.unet_motion_model import MotionAdapter
27
+ from diffusers.pipelines.animatediff.pipeline_output import AnimateDiffPipelineOutput
28
+ from diffusers.pipelines.free_init_utils import FreeInitMixin
29
+ from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin
30
+ from diffusers.schedulers import (
31
+ DDIMScheduler,
32
+ DPMSolverMultistepScheduler,
33
+ EulerAncestralDiscreteScheduler,
34
+ EulerDiscreteScheduler,
35
+ LMSDiscreteScheduler,
36
+ PNDMScheduler,
37
+ )
38
+ from diffusers.utils import (
39
+ USE_PEFT_BACKEND,
40
+ logging,
41
+ replace_example_docstring,
42
+ scale_lora_layers,
43
+ unscale_lora_layers,
44
+ )
45
+ from diffusers.utils.torch_utils import randn_tensor
46
+ from diffusers.video_processor import VideoProcessor
47
+
48
+
49
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
50
+
51
+ EXAMPLE_DOC_STRING = """
52
+ Examples:
53
+ ```py
54
+ >>> import torch
55
+ >>> from diffusers import MotionAdapter, AnimateDiffPipelineIpex, EulerDiscreteScheduler
56
+ >>> from diffusers.utils import export_to_gif
57
+ >>> from safetensors.torch import load_file
+ >>> from huggingface_hub import hf_hub_download
58
+
59
+ >>> device = "cpu"
60
+ >>> dtype = torch.float32
61
+
62
+ >>> # ByteDance/AnimateDiff-Lightning, a distilled version of AnimateDiff SD1.5 v2,
63
+ >>> # a lightning-fast text-to-video generation model which can generate videos
64
+ >>> # more than ten times faster than the original AnimateDiff.
65
+ >>> step = 8 # Options: [1,2,4,8]
66
+ >>> repo = "ByteDance/AnimateDiff-Lightning"
67
+ >>> ckpt = f"animatediff_lightning_{step}step_diffusers.safetensors"
68
+ >>> base = "emilianJR/epiCRealism" # Choose your favorite base model.
69
+
70
+ >>> adapter = MotionAdapter().to(device, dtype)
71
+ >>> adapter.load_state_dict(load_file(hf_hub_download(repo, ckpt), device=device))
72
+
73
+ >>> pipe = AnimateDiffPipelineIpex.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
74
+ >>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing", beta_schedule="linear")
75
+
76
+ >>> # For Float32
77
+ >>> pipe.prepare_for_ipex(torch.float32, prompt = "A girl smiling")
78
+ >>> # For BFloat16
79
+ >>> pipe.prepare_for_ipex(torch.bfloat16, prompt = "A girl smiling")
80
+
81
+ >>> # For Float32
82
+ >>> output = pipe(prompt = "A girl smiling", guidance_scale=1.0, num_inference_steps = step)
83
+ >>> # For BFloat16
84
+ >>> with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
85
+ >>> output = pipe(prompt = "A girl smiling", guidance_scale=1.0, num_inference_steps = step)
86
+
87
+ >>> frames = output.frames[0]
88
+ >>> export_to_gif(frames, "animation.gif")
89
+ ```
90
+ """
91
+
92
+
93
+ class AnimateDiffPipelineIpex(
94
+ DiffusionPipeline,
95
+ StableDiffusionMixin,
96
+ TextualInversionLoaderMixin,
97
+ IPAdapterMixin,
98
+ LoraLoaderMixin,
99
+ FreeInitMixin,
100
+ ):
101
+ r"""
102
+ Pipeline for text-to-video generation.
103
+
104
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
105
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.).
106
+
107
+ The pipeline also inherits the following loading methods:
108
+ - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings
109
+ - [`~loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights
110
+ - [`~loaders.LoraLoaderMixin.save_lora_weights`] for saving LoRA weights
111
+ - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters
112
+
113
+ Args:
114
+ vae ([`AutoencoderKL`]):
115
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
116
+ text_encoder ([`CLIPTextModel`]):
117
+ Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
118
+ tokenizer (`CLIPTokenizer`):
119
+ A [`~transformers.CLIPTokenizer`] to tokenize text.
120
+ unet ([`UNet2DConditionModel`]):
121
+ A [`UNet2DConditionModel`] used to create a UNetMotionModel to denoise the encoded video latents.
122
+ motion_adapter ([`MotionAdapter`]):
123
+ A [`MotionAdapter`] to be used in combination with `unet` to denoise the encoded video latents.
124
+ scheduler ([`SchedulerMixin`]):
125
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
126
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
127
+ """
128
+
129
+ model_cpu_offload_seq = "text_encoder->image_encoder->unet->vae"
130
+ _optional_components = ["feature_extractor", "image_encoder", "motion_adapter"]
131
+ _callback_tensor_inputs = ["latents", "prompt_embeds", "negative_prompt_embeds"]
132
+
133
+ def __init__(
134
+ self,
135
+ vae: AutoencoderKL,
136
+ text_encoder: CLIPTextModel,
137
+ tokenizer: CLIPTokenizer,
138
+ unet: Union[UNet2DConditionModel, UNetMotionModel],
139
+ motion_adapter: MotionAdapter,
140
+ scheduler: Union[
141
+ DDIMScheduler,
142
+ PNDMScheduler,
143
+ LMSDiscreteScheduler,
144
+ EulerDiscreteScheduler,
145
+ EulerAncestralDiscreteScheduler,
146
+ DPMSolverMultistepScheduler,
147
+ ],
148
+ feature_extractor: CLIPImageProcessor = None,
149
+ image_encoder: CLIPVisionModelWithProjection = None,
150
+ ):
151
+ super().__init__()
152
+ if isinstance(unet, UNet2DConditionModel):
153
+ unet = UNetMotionModel.from_unet2d(unet, motion_adapter)
154
+
155
+ self.register_modules(
156
+ vae=vae,
157
+ text_encoder=text_encoder,
158
+ tokenizer=tokenizer,
159
+ unet=unet,
160
+ motion_adapter=motion_adapter,
161
+ scheduler=scheduler,
162
+ feature_extractor=feature_extractor,
163
+ image_encoder=image_encoder,
164
+ )
165
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
166
+ self.video_processor = VideoProcessor(do_resize=False, vae_scale_factor=self.vae_scale_factor)
167
+
168
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_prompt with num_images_per_prompt -> num_videos_per_prompt
169
+ def encode_prompt(
170
+ self,
171
+ prompt,
172
+ device,
173
+ num_images_per_prompt,
174
+ do_classifier_free_guidance,
175
+ negative_prompt=None,
176
+ prompt_embeds: Optional[torch.Tensor] = None,
177
+ negative_prompt_embeds: Optional[torch.Tensor] = None,
178
+ lora_scale: Optional[float] = None,
179
+ clip_skip: Optional[int] = None,
180
+ ):
181
+ r"""
182
+ Encodes the prompt into text encoder hidden states.
183
+
184
+ Args:
185
+ prompt (`str` or `List[str]`, *optional*):
186
+ prompt to be encoded
187
+ device: (`torch.device`):
188
+ torch device
189
+ num_images_per_prompt (`int`):
190
+ number of images that should be generated per prompt
191
+ do_classifier_free_guidance (`bool`):
192
+ whether to use classifier free guidance or not
193
+ negative_prompt (`str` or `List[str]`, *optional*):
194
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
195
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
196
+ less than `1`).
197
+ prompt_embeds (`torch.Tensor`, *optional*):
198
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
199
+ provided, text embeddings will be generated from `prompt` input argument.
200
+ negative_prompt_embeds (`torch.Tensor`, *optional*):
201
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
202
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
203
+ argument.
204
+ lora_scale (`float`, *optional*):
205
+ A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
206
+ clip_skip (`int`, *optional*):
207
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
208
+ the output of the pre-final layer will be used for computing the prompt embeddings.
209
+ """
210
+ # set lora scale so that monkey patched LoRA
211
+ # function of text encoder can correctly access it
212
+ if lora_scale is not None and isinstance(self, LoraLoaderMixin):
213
+ self._lora_scale = lora_scale
214
+
215
+ # dynamically adjust the LoRA scale
216
+ if not USE_PEFT_BACKEND:
217
+ adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
218
+ else:
219
+ scale_lora_layers(self.text_encoder, lora_scale)
220
+
221
+ if prompt is not None and isinstance(prompt, str):
222
+ batch_size = 1
223
+ elif prompt is not None and isinstance(prompt, list):
224
+ batch_size = len(prompt)
225
+ else:
226
+ batch_size = prompt_embeds.shape[0]
227
+
228
+ if prompt_embeds is None:
229
+ # textual inversion: process multi-vector tokens if necessary
230
+ if isinstance(self, TextualInversionLoaderMixin):
231
+ prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
232
+
233
+ text_inputs = self.tokenizer(
234
+ prompt,
235
+ padding="max_length",
236
+ max_length=self.tokenizer.model_max_length,
237
+ truncation=True,
238
+ return_tensors="pt",
239
+ )
240
+ text_input_ids = text_inputs.input_ids
241
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
242
+
243
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
244
+ text_input_ids, untruncated_ids
245
+ ):
246
+ removed_text = self.tokenizer.batch_decode(
247
+ untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
248
+ )
249
+ logger.warning(
250
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
251
+ f" {self.tokenizer.model_max_length} tokens: {removed_text}"
252
+ )
253
+
254
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
255
+ attention_mask = text_inputs.attention_mask.to(device)
256
+ else:
257
+ attention_mask = None
258
+
259
+ if clip_skip is None:
260
+ prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask)
261
+ prompt_embeds = prompt_embeds[0]
262
+ else:
263
+ prompt_embeds = self.text_encoder(
264
+ text_input_ids.to(device), attention_mask=attention_mask, output_hidden_states=True
265
+ )
266
+ # Access the `hidden_states` first, that contains a tuple of
267
+ # all the hidden states from the encoder layers. Then index into
268
+ # the tuple to access the hidden states from the desired layer.
269
+ prompt_embeds = prompt_embeds[-1][-(clip_skip + 1)]
270
+ # We also need to apply the final LayerNorm here to not mess with the
271
+ # representations. The `last_hidden_states` that we typically use for
272
+ # obtaining the final prompt representations passes through the LayerNorm
273
+ # layer.
274
+ prompt_embeds = self.text_encoder.text_model.final_layer_norm(prompt_embeds)
275
+
276
+ if self.text_encoder is not None:
277
+ prompt_embeds_dtype = self.text_encoder.dtype
278
+ elif self.unet is not None:
279
+ prompt_embeds_dtype = self.unet.dtype
280
+ else:
281
+ prompt_embeds_dtype = prompt_embeds.dtype
282
+
283
+ prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device)
284
+
285
+ bs_embed, seq_len, _ = prompt_embeds.shape
286
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
287
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
288
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
289
+
290
+ # get unconditional embeddings for classifier free guidance
291
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
292
+ uncond_tokens: List[str]
293
+ if negative_prompt is None:
294
+ uncond_tokens = [""] * batch_size
295
+ elif prompt is not None and type(prompt) is not type(negative_prompt):
296
+ raise TypeError(
297
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
298
+ f" {type(prompt)}."
299
+ )
300
+ elif isinstance(negative_prompt, str):
301
+ uncond_tokens = [negative_prompt]
302
+ elif batch_size != len(negative_prompt):
303
+ raise ValueError(
304
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
305
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
306
+ " the batch size of `prompt`."
307
+ )
308
+ else:
309
+ uncond_tokens = negative_prompt
310
+
311
+ # textual inversion: process multi-vector tokens if necessary
312
+ if isinstance(self, TextualInversionLoaderMixin):
313
+ uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
314
+
315
+ max_length = prompt_embeds.shape[1]
316
+ uncond_input = self.tokenizer(
317
+ uncond_tokens,
318
+ padding="max_length",
319
+ max_length=max_length,
320
+ truncation=True,
321
+ return_tensors="pt",
322
+ )
323
+
324
+ if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
325
+ attention_mask = uncond_input.attention_mask.to(device)
326
+ else:
327
+ attention_mask = None
328
+
329
+ negative_prompt_embeds = self.text_encoder(
330
+ uncond_input.input_ids.to(device),
331
+ attention_mask=attention_mask,
332
+ )
333
+ negative_prompt_embeds = negative_prompt_embeds[0]
334
+
335
+ if do_classifier_free_guidance:
336
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
337
+ seq_len = negative_prompt_embeds.shape[1]
338
+
339
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=prompt_embeds_dtype, device=device)
340
+
341
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
342
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
343
+
344
+ if self.text_encoder is not None:
345
+ if isinstance(self, LoraLoaderMixin) and USE_PEFT_BACKEND:
346
+ # Retrieve the original scale by scaling back the LoRA layers
347
+ unscale_lora_layers(self.text_encoder, lora_scale)
348
+
349
+ return prompt_embeds, negative_prompt_embeds
350
+
351
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_image
352
+ def encode_image(self, image, device, num_images_per_prompt, output_hidden_states=None):
353
+ dtype = next(self.image_encoder.parameters()).dtype
354
+
355
+ if not isinstance(image, torch.Tensor):
356
+ image = self.feature_extractor(image, return_tensors="pt").pixel_values
357
+
358
+ image = image.to(device=device, dtype=dtype)
359
+ if output_hidden_states:
360
+ image_enc_hidden_states = self.image_encoder(image, output_hidden_states=True).hidden_states[-2]
361
+ image_enc_hidden_states = image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
362
+ uncond_image_enc_hidden_states = self.image_encoder(
363
+ torch.zeros_like(image), output_hidden_states=True
364
+ ).hidden_states[-2]
365
+ uncond_image_enc_hidden_states = uncond_image_enc_hidden_states.repeat_interleave(
366
+ num_images_per_prompt, dim=0
367
+ )
368
+ return image_enc_hidden_states, uncond_image_enc_hidden_states
369
+ else:
370
+ image_embeds = self.image_encoder(image).image_embeds
371
+ image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0)
372
+ uncond_image_embeds = torch.zeros_like(image_embeds)
373
+
374
+ return image_embeds, uncond_image_embeds
375
+
376
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_ip_adapter_image_embeds
377
+ def prepare_ip_adapter_image_embeds(
378
+ self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
379
+ ):
380
+ if ip_adapter_image_embeds is None:
381
+ if not isinstance(ip_adapter_image, list):
382
+ ip_adapter_image = [ip_adapter_image]
383
+
384
+ if len(ip_adapter_image) != len(self.unet.encoder_hid_proj.image_projection_layers):
385
+ raise ValueError(
386
+ f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
387
+ )
388
+
389
+ image_embeds = []
390
+ for single_ip_adapter_image, image_proj_layer in zip(
391
+ ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
392
+ ):
393
+ output_hidden_state = not isinstance(image_proj_layer, ImageProjection)
394
+ single_image_embeds, single_negative_image_embeds = self.encode_image(
395
+ single_ip_adapter_image, device, 1, output_hidden_state
396
+ )
397
+ single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
398
+ single_negative_image_embeds = torch.stack(
399
+ [single_negative_image_embeds] * num_images_per_prompt, dim=0
400
+ )
401
+
402
+ if do_classifier_free_guidance:
403
+ single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
404
+ single_image_embeds = single_image_embeds.to(device)
405
+
406
+ image_embeds.append(single_image_embeds)
407
+ else:
408
+ repeat_dims = [1]
409
+ image_embeds = []
410
+ for single_image_embeds in ip_adapter_image_embeds:
411
+ if do_classifier_free_guidance:
412
+ single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
413
+ single_image_embeds = single_image_embeds.repeat(
414
+ num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
415
+ )
416
+ single_negative_image_embeds = single_negative_image_embeds.repeat(
417
+ num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
418
+ )
419
+ single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
420
+ else:
421
+ single_image_embeds = single_image_embeds.repeat(
422
+ num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
423
+ )
424
+ image_embeds.append(single_image_embeds)
425
+
426
+ return image_embeds
427
+
428
+ # Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
429
+ def decode_latents(self, latents):
430
+ latents = 1 / self.vae.config.scaling_factor * latents
431
+
432
+ batch_size, channels, num_frames, height, width = latents.shape
433
+ latents = latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width)
434
+
435
+ image = self.vae.decode(latents).sample
436
+ video = image[None, :].reshape((batch_size, num_frames, -1) + image.shape[2:]).permute(0, 2, 1, 3, 4)
437
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
438
+ video = video.float()
439
+ return video
440
+
441
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
442
+ def prepare_extra_step_kwargs(self, generator, eta):
443
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
444
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
445
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
446
+ # and should be between [0, 1]
447
+
448
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
449
+ extra_step_kwargs = {}
450
+ if accepts_eta:
451
+ extra_step_kwargs["eta"] = eta
452
+
453
+ # check if the scheduler accepts generator
454
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
455
+ if accepts_generator:
456
+ extra_step_kwargs["generator"] = generator
457
+ return extra_step_kwargs
458
+
459
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs
460
+ def check_inputs(
461
+ self,
462
+ prompt,
463
+ height,
464
+ width,
465
+ negative_prompt=None,
466
+ prompt_embeds=None,
467
+ negative_prompt_embeds=None,
468
+ ip_adapter_image=None,
469
+ ip_adapter_image_embeds=None,
470
+ callback_on_step_end_tensor_inputs=None,
471
+ ):
472
+ if height % 8 != 0 or width % 8 != 0:
473
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
474
+
475
+ if callback_on_step_end_tensor_inputs is not None and not all(
476
+ k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs
477
+ ):
478
+ raise ValueError(
479
+ f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}"
480
+ )
481
+
482
+ if prompt is not None and prompt_embeds is not None:
483
+ raise ValueError(
484
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
485
+ " only forward one of the two."
486
+ )
487
+ elif prompt is None and prompt_embeds is None:
488
+ raise ValueError(
489
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
490
+ )
491
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
492
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
493
+
494
+ if negative_prompt is not None and negative_prompt_embeds is not None:
495
+ raise ValueError(
496
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
497
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
498
+ )
499
+
500
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
501
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
502
+ raise ValueError(
503
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
504
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
505
+ f" {negative_prompt_embeds.shape}."
506
+ )
507
+
508
+ if ip_adapter_image is not None and ip_adapter_image_embeds is not None:
509
+ raise ValueError(
510
+ "Provide either `ip_adapter_image` or `ip_adapter_image_embeds`. Cannot leave both `ip_adapter_image` and `ip_adapter_image_embeds` defined."
511
+ )
512
+
513
+ if ip_adapter_image_embeds is not None:
514
+ if not isinstance(ip_adapter_image_embeds, list):
515
+ raise ValueError(
516
+ f"`ip_adapter_image_embeds` has to be of type `list` but is {type(ip_adapter_image_embeds)}"
517
+ )
518
+ elif ip_adapter_image_embeds[0].ndim not in [3, 4]:
519
+ raise ValueError(
520
+ f"`ip_adapter_image_embeds` has to be a list of 3D or 4D tensors but is {ip_adapter_image_embeds[0].ndim}D"
521
+ )
522
+
523
+ # Copied from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.prepare_latents
524
+ def prepare_latents(
525
+ self, batch_size, num_channels_latents, num_frames, height, width, dtype, device, generator, latents=None
526
+ ):
527
+ shape = (
528
+ batch_size,
529
+ num_channels_latents,
530
+ num_frames,
531
+ height // self.vae_scale_factor,
532
+ width // self.vae_scale_factor,
533
+ )
534
+ if isinstance(generator, list) and len(generator) != batch_size:
535
+ raise ValueError(
536
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
537
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
538
+ )
539
+
540
+ if latents is None:
541
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=torch.float32)
542
+ else:
543
+ latents = latents.to(device)
544
+
545
+ # scale the initial noise by the standard deviation required by the scheduler
546
+ latents = latents * self.scheduler.init_noise_sigma
547
+ return latents
548
+
549
+ @property
550
+ def guidance_scale(self):
551
+ return self._guidance_scale
552
+
553
+ @property
554
+ def clip_skip(self):
555
+ return self._clip_skip
556
+
557
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
558
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
559
+ # corresponds to doing no classifier free guidance.
560
+ @property
561
+ def do_classifier_free_guidance(self):
562
+ return self._guidance_scale > 1
563
+
564
+ @property
565
+ def cross_attention_kwargs(self):
566
+ return self._cross_attention_kwargs
567
+
568
+ @property
569
+ def num_timesteps(self):
570
+ return self._num_timesteps
571
+
572
+ @torch.no_grad()
573
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
574
+ def __call__(
575
+ self,
576
+ prompt: Union[str, List[str]] = None,
577
+ num_frames: Optional[int] = 16,
578
+ height: Optional[int] = None,
579
+ width: Optional[int] = None,
580
+ num_inference_steps: int = 50,
581
+ guidance_scale: float = 7.5,
582
+ negative_prompt: Optional[Union[str, List[str]]] = None,
583
+ num_videos_per_prompt: Optional[int] = 1,
584
+ eta: float = 0.0,
585
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
586
+ latents: Optional[torch.Tensor] = None,
587
+ prompt_embeds: Optional[torch.Tensor] = None,
588
+ negative_prompt_embeds: Optional[torch.Tensor] = None,
589
+ ip_adapter_image: Optional[PipelineImageInput] = None,
590
+ ip_adapter_image_embeds: Optional[List[torch.Tensor]] = None,
591
+ output_type: Optional[str] = "pil",
592
+ return_dict: bool = True,
593
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
594
+ clip_skip: Optional[int] = None,
595
+ callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
596
+ callback_on_step_end_tensor_inputs: List[str] = ["latents"],
597
+ ):
598
+ r"""
599
+ The call function to the pipeline for generation.
600
+
601
+ Args:
602
+ prompt (`str` or `List[str]`, *optional*):
603
+ The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
604
+ height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
605
+ The height in pixels of the generated video.
606
+ width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
607
+ The width in pixels of the generated video.
608
+ num_frames (`int`, *optional*, defaults to 16):
609
+ The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds
610
+ amounts to 2 seconds of video.
611
+ num_inference_steps (`int`, *optional*, defaults to 50):
612
+ The number of denoising steps. More denoising steps usually lead to a higher quality videos at the
613
+ expense of slower inference.
614
+ guidance_scale (`float`, *optional*, defaults to 7.5):
615
+ A higher guidance scale value encourages the model to generate images closely linked to the text
616
+ `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
617
+ negative_prompt (`str` or `List[str]`, *optional*):
618
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
619
+ pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
620
+ eta (`float`, *optional*, defaults to 0.0):
621
+ Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
622
+ to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
623
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
624
+ A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
625
+ generation deterministic.
626
+ latents (`torch.Tensor`, *optional*):
627
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
628
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
629
+ tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
630
+ `(batch_size, num_channel, num_frames, height, width)`.
631
+ prompt_embeds (`torch.Tensor`, *optional*):
632
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
633
+ provided, text embeddings are generated from the `prompt` input argument.
634
+ negative_prompt_embeds (`torch.Tensor`, *optional*):
635
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
636
+ not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
637
+ ip_adapter_image: (`PipelineImageInput`, *optional*):
638
+ Optional image input to work with IP Adapters.
639
+ ip_adapter_image_embeds (`List[torch.Tensor]`, *optional*):
640
+ Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
641
+ IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
642
+ contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
643
+ provided, embeddings are computed from the `ip_adapter_image` input argument.
644
+ output_type (`str`, *optional*, defaults to `"pil"`):
645
+ The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`.
646
+ return_dict (`bool`, *optional*, defaults to `True`):
647
+ Whether or not to return a [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] instead
648
+ of a plain tuple.
649
+ cross_attention_kwargs (`dict`, *optional*):
650
+ A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
651
+ [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
652
+ clip_skip (`int`, *optional*):
653
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
654
+ the output of the pre-final layer will be used for computing the prompt embeddings.
655
+ callback_on_step_end (`Callable`, *optional*):
656
+ A function that calls at the end of each denoising steps during the inference. The function is called
657
+ with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
658
+ callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
659
+ `callback_on_step_end_tensor_inputs`.
660
+ callback_on_step_end_tensor_inputs (`List`, *optional*):
661
+ The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
662
+ will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
663
+ `._callback_tensor_inputs` attribute of your pipeline class.
664
+
665
+ Examples:
666
+
667
+ Returns:
668
+ [`~pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput`] or `tuple`:
669
+ If `return_dict` is `True`, [`~pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput`] is
670
+ returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.
671
+ """
672
+
673
+ # 0. Default height and width to unet
674
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
675
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
676
+
677
+ num_videos_per_prompt = 1
678
+
679
+ # 1. Check inputs. Raise error if not correct
680
+ self.check_inputs(
681
+ prompt,
682
+ height,
683
+ width,
684
+ negative_prompt,
685
+ prompt_embeds,
686
+ negative_prompt_embeds,
687
+ ip_adapter_image,
688
+ ip_adapter_image_embeds,
689
+ callback_on_step_end_tensor_inputs,
690
+ )
691
+
692
+ self._guidance_scale = guidance_scale
693
+ self._clip_skip = clip_skip
694
+ self._cross_attention_kwargs = cross_attention_kwargs
695
+
696
+ # 2. Define call parameters
697
+ if prompt is not None and isinstance(prompt, str):
698
+ batch_size = 1
699
+ elif prompt is not None and isinstance(prompt, list):
700
+ batch_size = len(prompt)
701
+ else:
702
+ batch_size = prompt_embeds.shape[0]
703
+
704
+ device = self._execution_device
705
+
706
+ # 3. Encode input prompt
707
+ text_encoder_lora_scale = (
708
+ self.cross_attention_kwargs.get("scale", None) if self.cross_attention_kwargs is not None else None
709
+ )
710
+ prompt_embeds, negative_prompt_embeds = self.encode_prompt(
711
+ prompt,
712
+ device,
713
+ num_videos_per_prompt,
714
+ self.do_classifier_free_guidance,
715
+ negative_prompt,
716
+ prompt_embeds=prompt_embeds,
717
+ negative_prompt_embeds=negative_prompt_embeds,
718
+ lora_scale=text_encoder_lora_scale,
719
+ clip_skip=self.clip_skip,
720
+ )
721
+ # For classifier free guidance, we need to do two forward passes.
722
+ # Here we concatenate the unconditional and text embeddings into a single batch
723
+ # to avoid doing two forward passes
724
+ if self.do_classifier_free_guidance:
725
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
726
+
727
+ if ip_adapter_image is not None or ip_adapter_image_embeds is not None:
728
+ image_embeds = self.prepare_ip_adapter_image_embeds(
729
+ ip_adapter_image,
730
+ ip_adapter_image_embeds,
731
+ device,
732
+ batch_size * num_videos_per_prompt,
733
+ self.do_classifier_free_guidance,
734
+ )
735
+
736
+ # 4. Prepare timesteps
737
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
738
+ timesteps = self.scheduler.timesteps
739
+
740
+ # 5. Prepare latent variables
741
+ num_channels_latents = self.unet.config.in_channels
742
+ latents = self.prepare_latents(
743
+ batch_size * num_videos_per_prompt,
744
+ num_channels_latents,
745
+ num_frames,
746
+ height,
747
+ width,
748
+ prompt_embeds.dtype,
749
+ device,
750
+ generator,
751
+ latents,
752
+ )
753
+
754
+ # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
755
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
756
+
757
+ # 7. Add image embeds for IP-Adapter
758
+ added_cond_kwargs = (
759
+ {"image_embeds": image_embeds}
760
+ if ip_adapter_image is not None or ip_adapter_image_embeds is not None
761
+ else None
762
+ )
763
+
764
+ num_free_init_iters = self._free_init_num_iters if self.free_init_enabled else 1
765
+ for free_init_iter in range(num_free_init_iters):
766
+ if self.free_init_enabled:
767
+ latents, timesteps = self._apply_free_init(
768
+ latents, free_init_iter, num_inference_steps, device, latents.dtype, generator
769
+ )
770
+
771
+ self._num_timesteps = len(timesteps)
772
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
773
+
774
+ # 8. Denoising loop
775
+ with self.progress_bar(total=self._num_timesteps) as progress_bar:
776
+ for i, t in enumerate(timesteps):
777
+ # expand the latents if we are doing classifier free guidance
778
+ latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents
779
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
780
+
781
+ # predict the noise residual
782
+ noise_pred = self.unet(
783
+ latent_model_input,
784
+ t,
785
+ encoder_hidden_states=prompt_embeds,
786
+ # cross_attention_kwargs=cross_attention_kwargs,
787
+ # added_cond_kwargs=added_cond_kwargs,
788
+ # ).sample
789
+ )["sample"]
790
+
791
+ # perform guidance
792
+ if self.do_classifier_free_guidance:
793
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
794
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
795
+
796
+ # compute the previous noisy sample x_t -> x_t-1
797
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
798
+
799
+ if callback_on_step_end is not None:
800
+ callback_kwargs = {}
801
+ for k in callback_on_step_end_tensor_inputs:
802
+ callback_kwargs[k] = locals()[k]
803
+ callback_outputs = callback_on_step_end(self, i, t, callback_kwargs)
804
+
805
+ latents = callback_outputs.pop("latents", latents)
806
+ prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds)
807
+ negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds)
808
+
809
+ # call the callback, if provided
810
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
811
+ progress_bar.update()
812
+
813
+ # 9. Post processing
814
+ if output_type == "latent":
815
+ video = latents
816
+ else:
817
+ video_tensor = self.decode_latents(latents)
818
+ video = self.video_processor.postprocess_video(video=video_tensor, output_type=output_type)
819
+
820
+ # 10. Offload all models
821
+ self.maybe_free_model_hooks()
822
+
823
+ if not return_dict:
824
+ return (video,)
825
+
826
+ return AnimateDiffPipelineOutput(frames=video)
827
+
828
+ @torch.no_grad()
829
+ def prepare_for_ipex(
830
+ self,
831
+ dtype=torch.float32,
832
+ prompt: Union[str, List[str]] = None,
833
+ num_frames: Optional[int] = 16,
834
+ height: Optional[int] = None,
835
+ width: Optional[int] = None,
836
+ num_inference_steps: int = 50,
837
+ guidance_scale: float = 7.5,
838
+ negative_prompt: Optional[Union[str, List[str]]] = None,
839
+ num_videos_per_prompt: Optional[int] = 1,
840
+ eta: float = 0.0,
841
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
842
+ latents: Optional[torch.Tensor] = None,
843
+ prompt_embeds: Optional[torch.Tensor] = None,
844
+ negative_prompt_embeds: Optional[torch.Tensor] = None,
845
+ ip_adapter_image: Optional[PipelineImageInput] = None,
846
+ ip_adapter_image_embeds: Optional[List[torch.Tensor]] = None,
847
+ output_type: Optional[str] = "pil",
848
+ return_dict: bool = True,
849
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
850
+ clip_skip: Optional[int] = None,
851
+ callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
852
+ callback_on_step_end_tensor_inputs: List[str] = ["latents"],
853
+ ):
854
+ # 0. Default height and width to unet
855
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
856
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
857
+
858
+ num_videos_per_prompt = 1
859
+
860
+ # 1. Check inputs. Raise error if not correct
861
+ self.check_inputs(
862
+ prompt,
863
+ height,
864
+ width,
865
+ negative_prompt,
866
+ prompt_embeds,
867
+ negative_prompt_embeds,
868
+ ip_adapter_image,
869
+ ip_adapter_image_embeds,
870
+ callback_on_step_end_tensor_inputs,
871
+ )
872
+
873
+ self._guidance_scale = guidance_scale
874
+ self._clip_skip = clip_skip
875
+ self._cross_attention_kwargs = cross_attention_kwargs
876
+
877
+ # 2. Define call parameters
878
+ if prompt is not None and isinstance(prompt, str):
879
+ batch_size = 1
880
+ elif prompt is not None and isinstance(prompt, list):
881
+ batch_size = len(prompt)
882
+ else:
883
+ batch_size = prompt_embeds.shape[0]
884
+
885
+ device = self._execution_device
886
+
887
+ # 3. Encode input prompt
888
+ text_encoder_lora_scale = (
889
+ self.cross_attention_kwargs.get("scale", None) if self.cross_attention_kwargs is not None else None
890
+ )
891
+ prompt_embeds, negative_prompt_embeds = self.encode_prompt(
892
+ prompt,
893
+ device,
894
+ num_videos_per_prompt,
895
+ self.do_classifier_free_guidance,
896
+ negative_prompt,
897
+ prompt_embeds=prompt_embeds,
898
+ negative_prompt_embeds=negative_prompt_embeds,
899
+ lora_scale=text_encoder_lora_scale,
900
+ clip_skip=self.clip_skip,
901
+ )
902
+ # For classifier free guidance, we need to do two forward passes.
903
+ # Here we concatenate the unconditional and text embeddings into a single batch
904
+ # to avoid doing two forward passes
905
+ if self.do_classifier_free_guidance:
906
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
907
+
908
+ # 4. Prepare timesteps
909
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
910
+ timesteps = self.scheduler.timesteps
911
+
912
+ # 5. Prepare latent variables
913
+ num_channels_latents = self.unet.config.in_channels
914
+ latents = self.prepare_latents(
915
+ batch_size * num_videos_per_prompt,
916
+ num_channels_latents,
917
+ num_frames,
918
+ height,
919
+ width,
920
+ prompt_embeds.dtype,
921
+ device,
922
+ generator,
923
+ latents,
924
+ )
925
+
926
+ num_free_init_iters = self._free_init_num_iters if self.free_init_enabled else 1
927
+ for free_init_iter in range(num_free_init_iters):
928
+ if self.free_init_enabled:
929
+ latents, timesteps = self._apply_free_init(
930
+ latents, free_init_iter, num_inference_steps, device, latents.dtype, generator
931
+ )
932
+
933
+ self._num_timesteps = len(timesteps)
934
+
935
+ dummy = timesteps[0]
936
+ latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents
937
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, dummy)
938
+
939
+ self.unet = self.unet.to(memory_format=torch.channels_last)
940
+ self.vae.decoder = self.vae.decoder.to(memory_format=torch.channels_last)
941
+ self.text_encoder = self.text_encoder.to(memory_format=torch.channels_last)
942
+
943
+ unet_input_example = {
944
+ "sample": latent_model_input,
945
+ "timestep": dummy,
946
+ "encoder_hidden_states": prompt_embeds,
947
+ }
948
+
949
+ fake_latents = 1 / self.vae.config.scaling_factor * latents
950
+ batch_size, channels, num_frames, height, width = fake_latents.shape
951
+ fake_latents = fake_latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width)
952
+ vae_decoder_input_example = fake_latents
953
+
954
+ # optimize with ipex
955
+ if dtype == torch.bfloat16:
956
+ self.unet = ipex.optimize(self.unet.eval(), dtype=torch.bfloat16, inplace=True)
957
+ self.vae.decoder = ipex.optimize(self.vae.decoder.eval(), dtype=torch.bfloat16, inplace=True)
958
+ self.text_encoder = ipex.optimize(self.text_encoder.eval(), dtype=torch.bfloat16, inplace=True)
959
+ elif dtype == torch.float32:
960
+ self.unet = ipex.optimize(
961
+ self.unet.eval(),
962
+ dtype=torch.float32,
963
+ inplace=True,
964
+ # sample_input=unet_input_example,
965
+ level="O1",
966
+ weights_prepack=True,
967
+ auto_kernel_selection=False,
968
+ )
969
+ self.vae.decoder = ipex.optimize(
970
+ self.vae.decoder.eval(),
971
+ dtype=torch.float32,
972
+ inplace=True,
973
+ level="O1",
974
+ weights_prepack=True,
975
+ auto_kernel_selection=False,
976
+ )
977
+ self.text_encoder = ipex.optimize(
978
+ self.text_encoder.eval(),
979
+ dtype=torch.float32,
980
+ inplace=True,
981
+ level="O1",
982
+ weights_prepack=True,
983
+ auto_kernel_selection=False,
984
+ )
985
+ else:
986
+ raise ValueError("The value of 'dtype' should be 'torch.bfloat16' or 'torch.float32'!")
987
+
988
+ # trace unet model to get better performance on IPEX
989
+ with torch.cpu.amp.autocast(enabled=dtype == torch.bfloat16), torch.no_grad():
990
+ unet_trace_model = torch.jit.trace(
991
+ self.unet, example_kwarg_inputs=unet_input_example, check_trace=False, strict=False
992
+ )
993
+ unet_trace_model = torch.jit.freeze(unet_trace_model)
994
+ self.unet.forward = unet_trace_model.forward
995
+
996
+ # trace vae.decoder model to get better performance on IPEX
997
+ with torch.cpu.amp.autocast(enabled=dtype == torch.bfloat16), torch.no_grad():
998
+ vae_decoder_trace_model = torch.jit.trace(
999
+ self.vae.decoder, vae_decoder_input_example, check_trace=False, strict=False
1000
+ )
1001
+ vae_decoder_trace_model = torch.jit.freeze(vae_decoder_trace_model)
1002
+ self.vae.decoder.forward = vae_decoder_trace_model.forward