A Diffusion Transformer model for 3D data from EasyAnimate was introduced by Alibaba PAI.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import EasyAnimateTransformer3DModel

transformer = EasyAnimateTransformer3DModel.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="transformer", torch_dtype=torch.float16
).to("cuda")
```

class diffusers.EasyAnimateTransformer3DModel

( num_attention_heads: int = 48, attention_head_dim: int = 64, in_channels: Optional[int] = None, out_channels: Optional[int] = None, patch_size: Optional[int] = None, sample_width: int = 90, sample_height: int = 60, activation_fn: str = 'gelu-approximate', timestep_activation_fn: str = 'silu', freq_shift: int = 0, num_layers: int = 48, mmdit_layers: int = 48, dropout: float = 0.0, time_embed_dim: int = 512, add_norm_text_encoder: bool = False, text_embed_dim: int = 3584, text_embed_dim_t5: Optional[int] = None, norm_eps: float = 1e-05, norm_elementwise_affine: bool = True, flip_sin_to_cos: bool = True, time_position_encoding_type: str = '3d_rope', after_norm: bool = False, resize_inpaint_mask_directly: bool = True, enable_text_attention_mask: bool = True, add_noise_in_inpaint_model: bool = True )
Parameters

- **num_attention_heads** (`int`, defaults to 48) — The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to 64) — The number of channels in each head.
- **in_channels** (`int`, *optional*) — The number of channels in the input.
- **out_channels** (`int`, *optional*) — The number of channels in the output.
- **patch_size** (`int`, *optional*) — The size of the patches to use in the patch embedding layer.
- **sample_width** (`int`, defaults to 90) — The width of the input latents.
- **sample_height** (`int`, defaults to 60) — The height of the input latents.
- **activation_fn** (`str`, defaults to `"gelu-approximate"`) — Activation function to use in the feed-forward layers.
- **timestep_activation_fn** (`str`, defaults to `"silu"`) — Activation function to use when generating the timestep embeddings.
- **num_layers** (`int`, defaults to 48) — The number of layers of Transformer blocks to use.
- **mmdit_layers** (`int`, defaults to 48) — The number of layers of Multi Modal Transformer blocks to use.
- **dropout** (`float`, defaults to 0.0) — The dropout probability to use.
- **time_embed_dim** (`int`, defaults to 512) — Output dimension of the timestep embeddings.
- **text_embed_dim** (`int`, defaults to 3584) — Input dimension of the text embeddings from the text encoder.
- **norm_eps** (`float`, defaults to 1e-05) — The epsilon value to use in normalization layers.
- **norm_elementwise_affine** (`bool`, defaults to True) — Whether to use elementwise affine in normalization layers.
- **flip_sin_to_cos** (`bool`, defaults to True) — Whether to flip the sin to cos in the time embedding.
- **time_position_encoding_type** (`str`, defaults to `"3d_rope"`) — Type of time position encoding.
- **after_norm** (`bool`, defaults to False) — Flag to apply normalization after.
- **resize_inpaint_mask_directly** (`bool`, defaults to True) — Flag to resize the inpaint mask directly.
- **enable_text_attention_mask** (`bool`, defaults to True) — Flag to enable the text attention mask.
- **add_noise_in_inpaint_model** (`bool`, defaults to True) — Flag to add noise in the inpaint model.

A Transformer model for video-like data in EasyAnimate.
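As a rough aid to reading these parameters: in a patch-based Diffusion Transformer, `sample_height`, `sample_width`, and `patch_size` together determine the number of spatial tokens per latent frame, and `num_attention_heads * attention_head_dim` gives the transformer's inner dimension. A minimal back-of-the-envelope sketch (illustrative only, not diffusers code; `patch_size = 2` is an assumed value, since the config default is `None`):

```python
# Illustrative sketch: how the configuration values above relate to the
# transformer's token count and hidden size. Names mirror the parameters
# documented above.
sample_width = 90          # width of the input latents
sample_height = 60         # height of the input latents
patch_size = 2             # assumption for illustration (config default is None)
num_attention_heads = 48
attention_head_dim = 64

# Each non-overlapping patch_size x patch_size patch of the latent grid
# becomes one spatial token.
tokens_per_frame = (sample_height // patch_size) * (sample_width // patch_size)

# Each token is projected to the transformer's inner dimension.
inner_dim = num_attention_heads * attention_head_dim

print(tokens_per_frame, inner_dim)  # 1350 3072
```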
Transformer2DModelOutput

( sample: torch.Tensor )

Parameters

- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`, or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if `Transformer2DModel` is discrete) — The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of `Transformer2DModel`.
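Downstream code reads the prediction from the returned object's `sample` field. A minimal pure-Python stand-in (not the diffusers implementation) showing the access pattern:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class Transformer2DModelOutput:
    """Minimal stand-in mirroring the output wrapper described above."""

    sample: Any  # predicted hidden states (a torch.Tensor in the real API)


# The forward pass wraps its prediction; callers unpack it via `.sample`.
out = Transformer2DModelOutput(sample=[[0.1, 0.2], [0.3, 0.4]])
noise_pred = out.sample
print(len(noise_pred))  # prints 2
```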