Utility and helper functions for working with 🤗 Diffusers.
Convert a numpy image or a batch of images to a PIL image.
Convert a torch image to a PIL image.
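A minimal sketch of both conversions. The array shapes and value ranges below are assumptions for illustration (a batched numpy image in [0, 1] with channels last, and a batched torch image with channels first, roughly in [-1, 1]):
>>> import numpy as np
>>> import torch
>>> from diffusers.utils import numpy_to_pil, pt_to_pil

>>> # numpy_to_pil: batch of float images, shape (batch, height, width, channels), values in [0, 1]
>>> np_images = np.random.rand(2, 64, 64, 3)
>>> pil_images = numpy_to_pil(np_images)

>>> # pt_to_pil: batch of torch images, shape (batch, channels, height, width)
>>> pt_images = torch.rand(2, 3, 64, 64) * 2 - 1  # assumed [-1, 1] range
>>> pil_images = pt_to_pil(pt_images)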
( image: typing.Union[str, PIL.Image.Image] convert_method: typing.Optional[typing.Callable[[PIL.Image.Image], PIL.Image.Image]] = None ) → PIL.Image.Image
Parameters
image (str or PIL.Image.Image) —
The image to convert to the PIL Image format.
convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], optional) —
An optional conversion method to apply to the image after loading. When set to None, the image will be converted to “RGB”.
Returns
PIL.Image.Image
A PIL Image.
Loads image to a PIL Image.
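A short usage sketch; the URL, file name, and grayscale conversion are placeholders for illustration:
>>> from diffusers.utils import load_image

>>> # Load from a local path or a URL
>>> image = load_image("https://example.com/input.png")

>>> # Optionally pass a convert_method, e.g. to convert to grayscale instead of the default "RGB"
>>> gray = load_image("input.png", convert_method=lambda img: img.convert("L"))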
( image: typing.List[PIL.Image.Image] output_gif_path: str = None fps: int = 10 )
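A minimal sketch of writing frames to a GIF; the frames and output path are placeholders:
>>> from PIL import Image
>>> from diffusers.utils import export_to_gif

>>> frames = [Image.new("RGB", (64, 64), color=(i * 10, 0, 0)) for i in range(16)]
>>> export_to_gif(frames, output_gif_path="animation.gif", fps=10)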
( video_frames: typing.Union[typing.List[numpy.ndarray], typing.List[PIL.Image.Image]] output_video_path: str = None fps: int = 10 quality: float = 5.0 bitrate: typing.Optional[int] = None macro_block_size: typing.Optional[int] = 16 )
quality: Video output quality. Default is 5. Uses variable bit rate. Highest quality is 10, lowest is 0. Set to None to prevent variable bitrate flags being passed to FFMPEG, so you can manually specify them using output_params instead. Specifying a fixed bitrate using bitrate disables this parameter.
bitrate: Set a constant bitrate for the video encoding. Default is None, which causes the quality parameter to be used instead. Using the variable-bitrate quality parameter rather than specifying a fixed bitrate with this parameter generally results in better quality videos with smaller file sizes.
macro_block_size: Size constraint for video. Width and height, must be divisible by this number. If not divisible by this number imageio will tell ffmpeg to scale the image up to the next closest size divisible by this number. Most codecs are compatible with a macroblock size of 16 (default), some can go smaller (4, 8). To disable this automatic feature set it to None or 1, however be warned many players can’t decode videos that are odd in size and some codecs will produce poor results or fail. See https://en.wikipedia.org/wiki/Macroblock.
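A minimal sketch; the frames, output path, and quality setting are illustrative, and writing the file requires imageio with an FFMPEG backend (e.g. imageio-ffmpeg):
>>> import numpy as np
>>> from diffusers.utils import export_to_video

>>> # 16 random RGB frames, shape (height, width, channels), values in [0, 1]
>>> video_frames = [np.random.rand(256, 256, 3) for _ in range(16)]
>>> export_to_video(video_frames, output_video_path="output.mp4", fps=10, quality=7.0)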
( images: typing.List[PIL.Image.Image] rows: int cols: int resize: int = None )
Prepares a single grid of images. Useful for visualization purposes.
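A short sketch; the images are placeholders, and the number of images is assumed to equal rows × cols:
>>> from PIL import Image
>>> from diffusers.utils import make_image_grid

>>> images = [Image.new("RGB", (64, 64), color=(0, 50 * i, 0)) for i in range(4)]
>>> grid = make_image_grid(images, rows=2, cols=2)
>>> grid.save("grid.png")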
( shape: typing.Union[typing.Tuple, typing.List] generator: typing.Union[typing.List[ForwardRef('torch.Generator')], ForwardRef('torch.Generator'), NoneType] = None device: typing.Optional[ForwardRef('torch.device')] = None dtype: typing.Optional[ForwardRef('torch.dtype')] = None layout: typing.Optional[ForwardRef('torch.layout')] = None )
A helper function to create random tensors on the desired device with the desired dtype. When passing a list of generators, you can seed each batch size individually. If CPU generators are passed, the tensor is always created on the CPU.
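A minimal sketch of per-sample seeding with a list of generators; the shape and seeds are illustrative:
>>> import torch
>>> from diffusers.utils.torch_utils import randn_tensor

>>> # One generator per sample in the batch, so each sample is seeded individually
>>> generators = [torch.Generator("cpu").manual_seed(seed) for seed in (0, 1)]
>>> latents = randn_tensor((2, 4, 64, 64), generator=generators, device=torch.device("cpu"), dtype=torch.float32)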
( module: Module storage_dtype: dtype compute_dtype: dtype skip_modules_pattern: typing.Union[str, typing.Tuple[str, ...]] = 'auto' skip_modules_classes: typing.Optional[typing.Tuple[typing.Type[torch.nn.modules.module.Module], ...]] = None non_blocking: bool = False )
Parameters
module (torch.nn.Module) —
The module whose leaf modules will be cast to a high precision dtype for computation, and to a low precision dtype for storage.
storage_dtype (torch.dtype) —
The dtype to cast the module to before/after the forward pass for storage.
compute_dtype (torch.dtype) —
The dtype to cast the module to during the forward pass for computation.
skip_modules_pattern (Tuple[str, ...], defaults to "auto") —
A list of patterns to match the names of the modules to skip during the layerwise casting process. If set to "auto", the default patterns are used. If set to None, no modules are skipped. If set to None alongside skip_modules_classes being None, the layerwise casting is applied directly to the module instead of its internal submodules.
skip_modules_classes (Tuple[Type[torch.nn.Module], ...], defaults to None) —
A list of module classes to skip during the layerwise casting process.
non_blocking (bool, defaults to False) —
If True, the weight casting operations are non-blocking.

Applies layerwise casting to a given module. The module expected here is a Diffusers ModelMixin but it can be any nn.Module using diffusers layers or pytorch primitives.
Example:
>>> import torch
>>> from diffusers import CogVideoXTransformer3DModel
>>> from diffusers.hooks import apply_layerwise_casting

>>> model_id = "THUDM/CogVideoX-5b"  # e.g. the same checkpoint used in the group offloading example below
>>> transformer = CogVideoXTransformer3DModel.from_pretrained(
...     model_id, subfolder="transformer", torch_dtype=torch.bfloat16
... )
>>> apply_layerwise_casting(
... transformer,
... storage_dtype=torch.float8_e4m3fn,
... compute_dtype=torch.bfloat16,
... skip_modules_pattern=["patch_embed", "norm", "proj_out"],
... non_blocking=True,
... )
( module: Module onload_device: device offload_device: device = device(type='cpu') offload_type: str = 'block_level' num_blocks_per_group: typing.Optional[int] = None non_blocking: bool = False use_stream: bool = False record_stream: bool = False low_cpu_mem_usage: bool = False )
Parameters
module (torch.nn.Module) —
The module to which group offloading is applied.
onload_device (torch.device) —
The device to which the group of modules are onloaded.
offload_device (torch.device, defaults to torch.device("cpu")) —
The device to which the group of modules are offloaded. This should typically be the CPU.
offload_type (str, defaults to "block_level") —
The type of offloading to be applied. Can be one of "block_level" or "leaf_level".
num_blocks_per_group (int, optional) —
The number of blocks per group. This is required when using offload_type="block_level".
non_blocking (bool, defaults to False) —
If True, offloading and onloading is done with non-blocking data transfer.
use_stream (bool, defaults to False) —
If True, offloading and onloading is done asynchronously using a CUDA stream. This can be useful for overlapping computation and data transfer.
record_stream (bool, defaults to False) —
When enabled with use_stream, it marks the current tensor as having been used by this stream. It is faster at the expense of slightly more memory usage. Refer to the PyTorch official docs for more details.
low_cpu_mem_usage (bool, defaults to False) —
If True, the CPU memory usage is minimized by pinning tensors on-the-fly instead of pre-pinning them. This option only matters when using streamed CPU offloading (i.e. use_stream=True). This can be useful when the CPU memory is a bottleneck but may counteract the benefits of using streams.

Applies group offloading to the internal layers of a torch.nn.Module. To understand what group offloading is, and where it is beneficial, we need to first provide some context on how other supported offloading methods work.
Typically, offloading is done at two levels:

Module-level offloading, enabled with the ModelMixin::enable_model_cpu_offload() method. It works by offloading each component of a pipeline to the CPU for storage, and onloading it to the accelerator device when needed for computation. This method is more memory-efficient than keeping all components on the accelerator, but the memory requirements are still quite high. For this method to work, one needs memory equivalent to the size of the model in runtime dtype plus the size of the largest intermediate activation tensors to be able to complete the forward pass (a rough estimate is sketched below).

Leaf-level offloading, enabled with the ModelMixin::enable_sequential_cpu_offload() method. It works by offloading the lowest leaf-level parameters of the computation graph to the CPU for storage, and onloading only the leaves to the accelerator device for computation. This uses the lowest amount of accelerator memory, but can be slower due to the excessive number of device synchronizations.

Group offloading is a middle ground between the two methods. It works by offloading groups of internal layers (either torch.nn.ModuleList or torch.nn.Sequential). This method uses less memory than module-level offloading. It is also faster than leaf-level/sequential offloading, as the number of device synchronizations is reduced.
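As a rough, illustrative estimate of the module-level memory requirement mentioned above (the parameter count and dtype are assumptions, not measurements):
>>> # Back-of-the-envelope weight memory for a hypothetical 5B-parameter model in bfloat16
>>> num_params = 5_000_000_000
>>> bytes_per_param = 2  # bfloat16
>>> round(num_params * bytes_per_param / 1024**3, 1)  # GiB of weights alone
9.3
>>> # Peak accelerator memory additionally includes the largest intermediate activation tensors.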
Another supported feature (for CUDA devices with support for asynchronous data transfer streams) is the ability to overlap data transfer and computation to reduce the overall execution time compared to sequential offloading. This is enabled using layer prefetching with streams, i.e., the layer that is to be executed next starts onloading to the accelerator device while the current layer is being executed - this increases the memory requirements slightly. Note that this implementation also supports leaf-level offloading but can be made much faster when using streams.
Example:
>>> import torch
>>> from diffusers import CogVideoXTransformer3DModel
>>> from diffusers.hooks import apply_group_offloading
>>> transformer = CogVideoXTransformer3DModel.from_pretrained(
... "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
... )
>>> apply_group_offloading(
... transformer,
... onload_device=torch.device("cuda"),
... offload_device=torch.device("cpu"),
... offload_type="block_level",
... num_blocks_per_group=2,
... use_stream=True,
... )