Utility and helper functions for working with 🤗 Diffusers.
diffusers.utils.numpy_to_pil

( images )

Convert a numpy image or a batch of images to a PIL image.
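For example, a minimal sketch (the array shape and values here are illustrative; the function expects float arrays in [0, 1] with a trailing channel dimension):

>>> import numpy as np
>>> from diffusers.utils import numpy_to_pil

>>> # A batch of two random 64x64 RGB images with values in [0, 1]
>>> images = np.random.rand(2, 64, 64, 3).astype("float32")
>>> pil_images = numpy_to_pil(images)  # a list of two PIL.Image.Image objects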
diffusers.utils.pt_to_pil

( images )

Convert a torch image to a PIL image.
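A corresponding sketch, assuming the channels-first, [-1, 1]-valued tensors that diffusion models typically produce:

>>> import torch
>>> from diffusers.utils import pt_to_pil

>>> # A batch of two images in (batch, channels, height, width) layout with values in [-1, 1]
>>> images = torch.rand(2, 3, 64, 64) * 2 - 1
>>> pil_images = pt_to_pil(images)  # a list of two PIL.Image.Image objects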
diffusers.utils.load_image

( image: typing.Union[str, PIL.Image.Image] convert_method: typing.Optional[typing.Callable[[PIL.Image.Image], PIL.Image.Image]] = None ) → PIL.Image.Image
Parameters
image (str or PIL.Image.Image) —
The image to convert to the PIL Image format.
convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], optional) —
A conversion method to apply to the image after loading it. When set to None, the image will be converted
to “RGB”.

Returns
PIL.Image.Image
A PIL Image.
Loads an image (from a local path, a URL, or an existing PIL.Image.Image) into a PIL Image.
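A brief sketch of typical usage (the "input.png" path and the EXIF-orientation convert_method are illustrative):

>>> from PIL import ImageOps
>>> from diffusers.utils import load_image

>>> # "input.png" is a placeholder; an http(s) URL or a PIL.Image.Image also works
>>> image = load_image("input.png")

>>> # convert_method overrides the default .convert("RGB") step, e.g. to respect EXIF orientation first
>>> image = load_image(
...     "input.png",
...     convert_method=lambda img: ImageOps.exif_transpose(img).convert("RGB"),
... )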
diffusers.utils.export_to_gif

( image: typing.List[PIL.Image.Image] output_gif_path: str = None fps: int = 10 )
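For example, a minimal sketch (the solid-color frames stand in for real pipeline output):

>>> from PIL import Image
>>> from diffusers.utils import export_to_gif

>>> frames = [Image.new("RGB", (64, 64), color) for color in ("red", "blue")]
>>> gif_path = export_to_gif(frames, output_gif_path="animation.gif", fps=10)  # returns the output path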
diffusers.utils.export_to_video

( video_frames: typing.Union[typing.List[numpy.ndarray], typing.List[PIL.Image.Image]] output_video_path: str = None fps: int = 10 )
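A sketch for video export; the random frames are illustrative, and writing the file relies on a video backend (e.g. OpenCV or imageio, depending on your Diffusers version) being installed:

>>> import numpy as np
>>> from diffusers.utils import export_to_video

>>> # 16 random frames as float arrays in [0, 1] with shape (height, width, channels)
>>> video_frames = [np.random.rand(64, 64, 3) for _ in range(16)]
>>> video_path = export_to_video(video_frames, output_video_path="clip.mp4", fps=10)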
diffusers.utils.make_image_grid

( images: typing.List[PIL.Image.Image] rows: int cols: int resize: int = None )
Prepares a single grid of images. Useful for visualization purposes.
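For instance (assuming rows * cols matches the number of images, which the function requires):

>>> from PIL import Image
>>> from diffusers.utils import make_image_grid

>>> images = [Image.new("RGB", (64, 64), c) for c in ("red", "green", "blue", "yellow")]
>>> # resize=128 first resizes every image to 128x128 before tiling the 2x2 grid
>>> grid = make_image_grid(images, rows=2, cols=2, resize=128)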
diffusers.utils.torch_utils.randn_tensor

( shape: typing.Union[typing.Tuple, typing.List] generator: typing.Union[typing.List[ForwardRef('torch.Generator')], ForwardRef('torch.Generator'), NoneType] = None device: typing.Optional[ForwardRef('torch.device')] = None dtype: typing.Optional[ForwardRef('torch.dtype')] = None layout: typing.Optional[ForwardRef('torch.layout')] = None )
A helper function to create random tensors on the desired device with the desired dtype. When
passing a list of generators, you can seed each sample in the batch individually. If CPU generators are passed, the
tensor is first created on the CPU and then moved to the desired device.
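A minimal sketch of per-sample seeding (the latent shape is illustrative):

>>> import torch
>>> from diffusers.utils.torch_utils import randn_tensor

>>> # One CPU generator per batch item, so each sample is independently reproducible
>>> generators = [torch.Generator("cpu").manual_seed(i) for i in range(2)]
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> latents = randn_tensor((2, 4, 64, 64), generator=generators, device=device)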
diffusers.hooks.apply_layerwise_casting

( module: Module storage_dtype: dtype compute_dtype: dtype skip_modules_pattern: typing.Union[str, typing.Tuple[str, ...]] = 'auto' skip_modules_classes: typing.Optional[typing.Tuple[typing.Type[torch.nn.modules.module.Module], ...]] = None non_blocking: bool = False )
Parameters
module (torch.nn.Module) —
The module whose leaf modules will be cast to a high precision dtype for computation, and to a low
precision dtype for storage.
storage_dtype (torch.dtype) —
The dtype to cast the module to before/after the forward pass for storage.
compute_dtype (torch.dtype) —
The dtype to cast the module to during the forward pass for computation.
skip_modules_pattern (Tuple[str, ...], defaults to "auto") —
A list of patterns to match the names of the modules to skip during the layerwise casting process. If set
to "auto", the default patterns are used. If set to None, no modules are skipped. If both this and
skip_modules_classes are None, the layerwise casting is applied directly to the module
instead of its internal submodules.
skip_modules_classes (Tuple[Type[torch.nn.Module], ...], defaults to None) —
A list of module classes to skip during the layerwise casting process.
non_blocking (bool, defaults to False) —
If True, the weight casting operations are non-blocking.

Applies layerwise casting to a given module. The module expected here is a Diffusers ModelMixin, but it can be
any nn.Module that uses diffusers layers or PyTorch primitives.
Example:
>>> import torch
>>> from diffusers import CogVideoXTransformer3DModel
>>> from diffusers.hooks import apply_layerwise_casting

>>> model_id = "THUDM/CogVideoX-5b"  # example checkpoint; any supported model works
>>> transformer = CogVideoXTransformer3DModel.from_pretrained(
...     model_id, subfolder="transformer", torch_dtype=torch.bfloat16
... )
>>> apply_layerwise_casting(
... transformer,
... storage_dtype=torch.float8_e4m3fn,
... compute_dtype=torch.bfloat16,
... skip_modules_pattern=["patch_embed", "norm", "proj_out"],
... non_blocking=True,
... )