# AutoencoderKLWan
The 3D variational autoencoder (VAE) model with KL loss used in Wan 2.1 by the Alibaba Wan Team.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import AutoencoderKLWan

vae = AutoencoderKLWan.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
```
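Once loaded, the VAE exposes the usual `encode`/`decode` API. The sketch below runs a round trip on a random video tensor; the `(batch, channels, frames, height, width)` layout and the example latent shape are assumptions based on the default configuration (8× spatial and 4× temporal compression, 16 latent channels), not documented guarantees.

```python
import torch
from diffusers import AutoencoderKLWan

vae = AutoencoderKLWan.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32)

# Dummy video in (batch, channels, frames, height, width) layout, values in [-1, 1].
video = torch.randn(1, 3, 17, 256, 256)

with torch.no_grad():
    # Encode to a posterior distribution over latents, then draw a sample.
    posterior = vae.encode(video).latent_dist
    latents = posterior.sample()  # e.g. (1, 16, 5, 32, 32) with the default config

    # Decode the latents back to pixel space.
    reconstruction = vae.decode(latents).sample  # (1, 3, 17, 256, 256)
```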
## AutoencoderKLWan

class diffusers.AutoencoderKLWan

( base_dim: int = 96, z_dim: int = 16, dim_mult: typing.Tuple[int] = [1, 2, 4, 4], num_res_blocks: int = 2, attn_scales: typing.List[float] = [], temperal_downsample: typing.List[bool] = [False, True, True], dropout: float = 0.0, latents_mean: typing.List[float] = [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921], latents_std: typing.List[float] = [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916] )
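The `latents_mean` and `latents_std` entries are per-channel statistics of the latent space; the Wan pipelines use them to normalize latents for the diffusion process and to denormalize them before decoding. Below is a minimal sketch of the denormalization step, assuming `vae` is the model loaded above; the placeholder `latents` and the exact call site are assumptions, and the pipelines may arrange this step differently.

```python
import torch

# `latents` stands in for the denoised latents produced by a diffusion
# loop (hypothetical placeholder with the default latent shape).
latents = torch.randn(1, vae.config.z_dim, 5, 32, 32)

# Per-channel statistics from the config, broadcast over
# (batch, channels, frames, height, width).
latents_mean = torch.tensor(vae.config.latents_mean).view(1, vae.config.z_dim, 1, 1, 1)
latents_std = torch.tensor(vae.config.latents_std).view(1, vae.config.z_dim, 1, 1, 1)

# Undo the per-channel normalization, then decode to pixel space.
latents = latents * latents_std + latents_mean
video = vae.decode(latents).sample
```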
A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Introduced in Wan 2.1.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
### forward

( sample: Tensor, sample_posterior: bool = False, return_dict: bool = True, generator: typing.Optional[torch._C.Generator] = None )
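The source carries no prose for `forward`, but consistent with the other KL autoencoders in Diffusers it performs a full reconstruction pass: encode `sample`, draw from the posterior when `sample_posterior=True` (optionally seeded via `generator`), and decode. A minimal sketch, assuming `vae` is the model loaded above:

```python
import torch

generator = torch.Generator().manual_seed(0)
video = torch.randn(1, 3, 17, 256, 256)  # (batch, channels, frames, height, width)

with torch.no_grad():
    # Encode, sample from the posterior (rather than taking its mode), and decode.
    output = vae(video, sample_posterior=True, generator=generator)

reconstruction = output.sample  # DecoderOutput.sample holds the decoded video
```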
## DecoderOutput

class diffusers.models.autoencoders.vae.DecoderOutput

( sample: Tensor, commit_loss: typing.Optional[torch.FloatTensor] = None )
Output of the decoding method. `sample` holds the decoded output from the last layer of the model; `commit_loss` is an optional commitment loss returned by some autoencoders.
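Both `decode()` and the model's forward pass wrap their result in this class. A short usage sketch, assuming `vae` and `latents` from the snippets above; with `return_dict=False` a plain tuple is returned instead:

```python
out = vae.decode(latents)
video = out.sample  # decoded video tensor

# Tuple variant, equivalent to out.sample:
(video,) = vae.decode(latents, return_dict=False)
```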