AutoencoderKLHunyuanVideo

The 3D variational autoencoder (VAE) model with KL loss used in HunyuanVideo, which was introduced in HunyuanVideo: A Systematic Framework For Large Video Generative Models by Tencent.

The model can be loaded with the following code snippet.

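A minimal sketch is shown below; the `hunyuanvideo-community/HunyuanVideo` checkpoint ID, the `vae` subfolder, and the `torch.float16` dtype are assumptions to adapt to the checkpoint you actually use.

```python
import torch
from diffusers import AutoencoderKLHunyuanVideo

# Checkpoint ID and subfolder layout are assumptions; adjust them to your checkpoint.
vae = AutoencoderKLHunyuanVideo.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", subfolder="vae", torch_dtype=torch.float16
)
vae.to("cuda")  # move to GPU; the fp16 weights are intended for GPU execution
```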

AutoencoderKLHunyuanVideo

class diffusers.AutoencoderKLHunyuanVideo

( in_channels: int = 3 out_channels: int = 3 latent_channels: int = 16 down_block_types: typing.Tuple[str, ...] = ('DownEncoderBlockCausal3D', 'DownEncoderBlockCausal3D', 'DownEncoderBlockCausal3D', 'DownEncoderBlockCausal3D') up_block_types: typing.Tuple[str, ...] = ('UpDecoderBlockCausal3D', 'UpDecoderBlockCausal3D', 'UpDecoderBlockCausal3D', 'UpDecoderBlockCausal3D') block_out_channels: typing.Tuple[int] = (128, 256, 512, 512) layers_per_block: int = 2 act_fn: str = 'silu' norm_num_groups: int = 32 sample_size: int = 256 sample_tsize: int = 64 scaling_factor: float = 0.476986 spatial_compression_ratio: int = 8 time_compression_ratio: int = 4 mid_block_add_attention: bool = True )

A VAE model with KL loss for encoding images/videos into latents and decoding latent representations into images/videos.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

disable_slicing

( )

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling

( )

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
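As a rough sketch, slicing is simply toggled around a batched decode. This reuses the `vae` from the loading snippet above and assumes the usual AutoencoderKL-style `decode()` API; the latent shape is illustrative.

```python
import torch

# Illustrative batched latents: (batch, latent_channels, frames, height, width).
latents = torch.randn(4, 16, 3, 16, 16).to(vae.device, vae.dtype)

vae.enable_slicing()                  # decode one batch element at a time to save memory
frames = vae.decode(latents).sample
vae.disable_slicing()                 # back to decoding the whole batch in one step
```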

enable_tiling

( use_tiling: bool = True )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger videos.
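For instance, a sketch of a tiled encode/decode round trip, reusing the `vae` from the loading snippet above; the tensor shape and the `encode()`/`decode()` return fields follow the usual diffusers VAE conventions and are assumptions here.

```python
import torch

# Illustrative video tensor: (batch, channels, frames, height, width).
video = torch.randn(1, 3, 17, 512, 512).to(vae.device, vae.dtype)

vae.enable_tiling()                                   # process large frames tile by tile
latents = vae.encode(video).latent_dist.sample()
reconstruction = vae.decode(latents).sample
vae.disable_tiling()
```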

forward

( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True return_posterior: bool = False generator: typing.Optional[torch._C.Generator] = None )

Parameters

  • sample (torch.FloatTensor) — Input sample.
  • sample_posterior (bool, optional, defaults to False) — Whether to sample from the posterior.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.
  • return_posterior (bool, optional, defaults to False) — Whether or not to also return the posterior distribution along with the decoded sample.
  • generator (torch.Generator, optional) — A torch.Generator to make sampling from the posterior deterministic.
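A sketch of a full reconstruction pass through forward, reusing the `vae` from the loading snippet above; the input shape is illustrative.

```python
import torch

# Illustrative input video: (batch, channels, frames, height, width).
video = torch.randn(1, 3, 9, 128, 128).to(vae.device, vae.dtype)
generator = torch.Generator("cpu").manual_seed(0)

# Encode, sample from the posterior, and decode back to pixel space in one call.
output = vae(video, sample_posterior=True, generator=generator)
reconstruction = output.sample
```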

spatial_tiled_decode

( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple

Parameters

  • z (torch.FloatTensor) — Input batch of latent vectors.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple.

Returns

~models.vae.DecoderOutput or tuple

If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is returned.

Decode a batch of images/videos using a tiled decoder.
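For example, a sketch of calling spatial_tiled_decode directly on a latent batch, reusing the `vae` from above; the latent shape assumes 16 channels with 8x spatial and 4x temporal compression.

```python
import torch

# Illustrative latents: (batch, latent_channels, frames, height, width).
latents = torch.randn(1, 16, 3, 64, 64).to(vae.device, vae.dtype)

decoded = vae.spatial_tiled_decode(latents, return_dict=True)
frames = decoded.sample  # decoded video in pixel space
```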

spatial_tiled_encode

( x: FloatTensor return_dict: bool = True return_moments: bool = False ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple

Parameters

  • x (torch.FloatTensor) — Input batch of images/videos.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple.

Returns

~models.autoencoder_kl.AutoencoderKLOutput or tuple

If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain tuple is returned.

Encode a batch of images/videos using a tiled encoder.

When this option is enabled, the VAE splits the input tensor into tiles and computes encoding in several steps. This is useful to keep memory use roughly constant regardless of image/video size. The result of tiled encoding differs from non-tiled encoding because each tile is encoded independently. To avoid tiling artifacts, the tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the output, but they should be much less noticeable.
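A sketch of tiled encoding, reusing the `vae` from the loading snippet above; the input shape is illustrative.

```python
import torch

# Illustrative high-resolution video: (batch, channels, frames, height, width).
video = torch.randn(1, 3, 9, 512, 512).to(vae.device, vae.dtype)

encoded = vae.spatial_tiled_encode(video, return_dict=True)
latents = encoded.latent_dist.sample()  # sample latents from the posterior
```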

DecoderOutput

class diffusers.models.autoencoders.vae.DecoderOutput

( sample: Tensor commit_loss: typing.Optional[torch.FloatTensor] = None )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width)) — The decoded output sample from the last layer of the model.
  • commit_loss (torch.FloatTensor, optional) — The commitment loss of the model.

Output of decoding method.
