A Diffusion Transformer model for 3D video-like data was introduced in Wan 2.1 by the Alibaba Wan Team.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import WanTransformer3DModel

transformer = WanTransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)
```
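In practice the transformer is loaded as one component of a text-to-video pipeline. The following is a minimal sketch, assuming diffusers' WanPipeline and export_to_video utility, an available CUDA device, and that the same checkpoint provides the remaining components (text encoder, VAE, scheduler); the prompt and frame count are illustrative only.

```python
import torch
from diffusers import WanPipeline, WanTransformer3DModel
from diffusers.utils import export_to_video

# Load the transformer in bfloat16 and hand it to the matching pipeline.
transformer = WanTransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

# Generate a short clip and write it to disk.
video = pipe(prompt="A cat surfing a small wave at sunset", num_frames=33).frames[0]
export_to_video(video, "wan_output.mp4", fps=16)
```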
```
WanTransformer3DModel(
    patch_size: typing.Tuple[int] = (1, 2, 2),
    num_attention_heads: int = 40,
    attention_head_dim: int = 128,
    in_channels: int = 16,
    out_channels: int = 16,
    text_dim: int = 4096,
    freq_dim: int = 256,
    ffn_dim: int = 13824,
    num_layers: int = 40,
    cross_attn_norm: bool = True,
    qk_norm: typing.Optional[str] = 'rms_norm_across_heads',
    eps: float = 1e-06,
    image_dim: typing.Optional[int] = None,
    added_kv_proj_dim: typing.Optional[int] = None,
    rope_max_seq_len: int = 1024,
)
```
Parameters

- patch_size (Tuple[int], defaults to (1, 2, 2)): 3D patch dimensions for video embedding (t_patch, h_patch, w_patch).
- num_attention_heads (int, defaults to 40): The number of attention heads to use.
- attention_head_dim (int, defaults to 128): The number of channels in each head.
- in_channels (int, defaults to 16): The number of channels in the input.
- out_channels (int, defaults to 16): The number of channels in the output.
- text_dim (int, defaults to 4096): Input dimension for text embeddings.
- freq_dim (int, defaults to 256): Dimension for sinusoidal time embeddings.
- ffn_dim (int, defaults to 13824): Intermediate dimension in the feed-forward network.
- num_layers (int, defaults to 40): The number of transformer blocks to use.
- window_size (Tuple[int], defaults to (-1, -1)): Window size for local attention (-1 indicates global attention).
- cross_attn_norm (bool, defaults to True): Enable cross-attention normalization.
- qk_norm (str, optional, defaults to "rms_norm_across_heads"): The query/key normalization to apply.
- eps (float, defaults to 1e-6): Epsilon value for normalization layers.
- image_dim (int, optional, defaults to None): Dimension of the image embeddings (img_emb); when set, image conditioning is enabled.
- added_kv_proj_dim (int, optional, defaults to None): The number of channels to use for the added key and value projections. If None, no projection is used.
- rope_max_seq_len (int, defaults to 1024): Maximum sequence length for the rotary positional embeddings.

A Transformer model for video-like data used in the Wan model.
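To make the parameter list above concrete, here is a sketch that instantiates a deliberately tiny, randomly initialized configuration. The argument names follow the signature shown earlier; the small values are hypothetical, chosen only so the model builds quickly for experiments or tests, and do not correspond to any released checkpoint.

```python
from diffusers import WanTransformer3DModel

# Hypothetical toy configuration; only the sizes are shrunk, the structure is unchanged.
tiny = WanTransformer3DModel(
    patch_size=(1, 2, 2),      # (t_patch, h_patch, w_patch) for video patch embedding
    num_attention_heads=2,     # inner dim = num_attention_heads * attention_head_dim = 64
    attention_head_dim=32,
    in_channels=16,            # latent channels in
    out_channels=16,           # latent channels out
    text_dim=4096,             # must match the text encoder's hidden size
    freq_dim=256,              # sinusoidal time-embedding dimension
    ffn_dim=256,               # feed-forward intermediate dimension
    num_layers=2,              # number of transformer blocks
    cross_attn_norm=True,
    qk_norm="rms_norm_across_heads",
    eps=1e-6,
)
print(f"{sum(p.numel() for p in tiny.parameters()) / 1e6:.1f}M parameters")
```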
( sample: torch.Tensor )

Parameters

- sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete): The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
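As a rough illustration of where this output class appears: calling the transformer on noisy video latents and text embeddings returns an object whose sample field holds the predicted latents. The keyword names (hidden_states, timestep, encoder_hidden_states) and the latent layout (batch, channels, frames, height, width) follow the usual diffusers convention and are assumptions rather than something documented on this page; the tiny configuration is the same hypothetical one sketched above.

```python
import torch
from diffusers import WanTransformer3DModel

# Same hypothetical tiny configuration as above, just to keep the forward pass cheap.
model = WanTransformer3DModel(num_attention_heads=2, attention_head_dim=32, ffn_dim=256, num_layers=2)

latents = torch.randn(1, 16, 5, 32, 32)   # (batch, in_channels, num_frames, height, width) in latent space
timestep = torch.tensor([500])            # one diffusion timestep per batch element
text_embeds = torch.randn(1, 77, 4096)    # (batch, sequence_length, text_dim)

with torch.no_grad():
    out = model(hidden_states=latents, timestep=timestep, encoder_hidden_states=text_embeds)

print(out.sample.shape)  # `sample` holds the predicted latents, same layout as the input latents
```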