A Diffusion Transformer model for 3D video-like data was introduced in Cosmos World Foundation Model Platform for Physical AI by NVIDIA.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import CosmosTransformer3DModel

transformer = CosmosTransformer3DModel.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Text2World", subfolder="transformer", torch_dtype=torch.bfloat16
)
```

class diffusers.CosmosTransformer3DModel

( in_channels: int = 16 out_channels: int = 16 num_attention_heads: int = 32 attention_head_dim: int = 128 num_layers: int = 28 mlp_ratio: float = 4.0 text_embed_dim: int = 1024 adaln_lora_dim: int = 256 max_size: typing.Tuple[int, int, int] = (128, 240, 240) patch_size: typing.Tuple[int, int, int] = (1, 2, 2) rope_scale: typing.Tuple[float, float, float] = (2.0, 1.0, 1.0) concat_padding_mask: bool = True extra_pos_embed_type: typing.Optional[str] = 'learnable' )
Parameters

- in_channels (int, defaults to 16) — The number of channels in the input.
- out_channels (int, defaults to 16) — The number of channels in the output.
- num_attention_heads (int, defaults to 32) — The number of heads to use for multi-head attention.
- attention_head_dim (int, defaults to 128) — The number of channels in each attention head.
- num_layers (int, defaults to 28) — The number of layers of transformer blocks to use.
- mlp_ratio (float, defaults to 4.0) — The ratio of the hidden layer size to the input size in the feedforward network.
- text_embed_dim (int, defaults to 1024) — Input dimension of text embeddings from the text encoder.
- adaln_lora_dim (int, defaults to 256) — The hidden dimension of the Adaptive LayerNorm LoRA layer.
- max_size (Tuple[int, int, int], defaults to (128, 240, 240)) — The maximum size of the input latent tensors in the temporal, height, and width dimensions.
- patch_size (Tuple[int, int, int], defaults to (1, 2, 2)) — The patch size to use for patchifying the input latent tensors in the temporal, height, and width dimensions.
- rope_scale (Tuple[float, float, float], defaults to (2.0, 1.0, 1.0)) — The scaling factor to use for RoPE in the temporal, height, and width dimensions.
- concat_padding_mask (bool, defaults to True) — Whether to concatenate the padding mask to the input latent tensors.
- extra_pos_embed_type (str, optional, defaults to learnable) — The type of extra positional embeddings to use. Can be one of None or learnable.

A Transformer model for video-like data used in Cosmos.
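The configuration arguments above map directly to the constructor, so a small model can be instantiated from scratch for experimentation. The sketch below uses a deliberately tiny, hypothetical configuration (reduced heads, layers, and positional-embedding size) rather than the 7B defaults; only the keyword names come from the signature above.

```python
from diffusers import CosmosTransformer3DModel

# Hypothetical toy configuration for quick experiments; real checkpoints such as
# Cosmos-1.0-Diffusion-7B-Text2World use the defaults documented above.
transformer = CosmosTransformer3DModel(
    in_channels=16,
    out_channels=16,
    num_attention_heads=4,   # 32 in the 7B model
    attention_head_dim=32,   # 128 in the 7B model
    num_layers=2,            # 28 in the 7B model
    text_embed_dim=1024,
    max_size=(16, 32, 32),   # maximum latent frames/height/width for positional embeddings
    patch_size=(1, 2, 2),
)

print(sum(p.numel() for p in transformer.parameters()))  # parameter count of the toy model
```

Loading a checkpoint with from_pretrained, as in the snippet at the top, restores the configuration stored with that checkpoint, so these arguments typically only matter when building a model from scratch.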
class diffusers.models.modeling_outputs.Transformer2DModelOutput

( sample: torch.Tensor )

Parameters

- sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
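For orientation, here is a rough sketch of how this output is consumed in a single denoising step. The keyword names (hidden_states, timestep, encoder_hidden_states, padding_mask) and the input shapes are assumptions based on other diffusers video transformers, not taken from this page; the only point confirmed above is that the returned object exposes the predicted latents through its sample field.

```python
import torch
from diffusers import CosmosTransformer3DModel

# Toy-sized model (hypothetical configuration, as in the constructor sketch above).
transformer = CosmosTransformer3DModel(
    num_attention_heads=4, attention_head_dim=32, num_layers=2,
    text_embed_dim=1024, max_size=(16, 32, 32),
)

# Assumed shapes: video latents (batch, channels, frames, height, width), text embeddings
# (batch, sequence_length, text_embed_dim), and a spatial padding mask, which is expected
# here because concat_padding_mask defaults to True.
latents = torch.randn(1, 16, 2, 32, 32)
prompt_embeds = torch.randn(1, 8, 1024)
padding_mask = torch.zeros(1, 1, 32, 32)
timestep = torch.tensor([1.0])  # a single (assumed continuous) timestep value

output = transformer(
    hidden_states=latents,
    timestep=timestep,
    encoder_hidden_states=prompt_embeds,
    padding_mask=padding_mask,
)
print(output.sample.shape)  # expected to match the input latent shape, (1, 16, 2, 32, 32)
```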