A Diffusion Transformer model for 2D data from CogView4.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained(
    "THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")
```
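In practice, the standalone transformer is usually plugged back into the full CogView4 pipeline. The following is a minimal sketch, assuming the standard diffusers pattern of overriding a pipeline component via a from_pretrained keyword argument; the prompt and output file name are only placeholders.

```python
import torch
from diffusers import CogView4Pipeline

# Reuse the transformer loaded above instead of letting the pipeline load its own copy.
pipe = CogView4Pipeline.from_pretrained(
    "THUDM/CogView4-6B",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("cogview4_sample.png")
```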
class diffusers.CogView4Transformer2DModel( patch_size: int = 2, in_channels: int = 16, out_channels: int = 16, num_layers: int = 30, attention_head_dim: int = 40, num_attention_heads: int = 64, text_embed_dim: int = 4096, time_embed_dim: int = 512, condition_dim: int = 256, pos_embed_max_size: int = 128, sample_size: int = 128, rope_axes_dim: typing.Tuple[int, int] = (256, 256) )

Parameters

patch_size (int, defaults to 2) — The size of the patches to use in the patch embedding layer.
in_channels (int, defaults to 16) — The number of channels in the input.
num_layers (int, defaults to 30) — The number of layers of Transformer blocks to use.
attention_head_dim (int, defaults to 40) — The number of channels in each head.
num_attention_heads (int, defaults to 64) — The number of heads to use for multi-head attention.
out_channels (int, defaults to 16) — The number of channels in the output.
text_embed_dim (int, defaults to 4096) — Input dimension of text embeddings from the text encoder.
time_embed_dim (int, defaults to 512) — Output dimension of timestep embeddings.
condition_dim (int, defaults to 256) — The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size, crop_coords).
pos_embed_max_size (int, defaults to 128) — The maximum resolution of the positional embeddings, from which slices of shape H x W are taken and added to input patched latents, where H and W are the latent height and width respectively. A value of 128 means that the maximum supported height and width for image generation is 128 * vae_scale_factor * patch_size => 128 * 8 * 2 => 2048.
sample_size (int, defaults to 128) — The base resolution of input latents. If height/width is not provided during generation, this value is used to determine the resolution as sample_size * vae_scale_factor => 128 * 8 => 1024 (see the worked example after this list).
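The last two defaults determine the maximum and default image resolutions together with the VAE scale factor (8 for CogView4) and the patch size. The short computation below only restates the arithmetic from the descriptions above.

```python
# Resolution bookkeeping implied by the defaults above.
patch_size = 2            # latent pixels per patch side
vae_scale_factor = 8      # image pixels per latent pixel (CogView4 VAE)
pos_embed_max_size = 128
sample_size = 128

max_image_side = pos_embed_max_size * vae_scale_factor * patch_size  # 128 * 8 * 2 = 2048
default_image_side = sample_size * vae_scale_factor                  # 128 * 8 = 1024
print(max_image_side, default_image_side)  # 2048 1024
```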
class diffusers.models.modeling_outputs.Transformer2DModelOutput( sample: torch.Tensor )

Parameters

sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
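During generation, the CogView4 pipeline reads the denoised latent prediction off this output object (returned when the forward pass is called with return_dict=True). The sketch below builds the dataclass with random values purely to illustrate the continuous, non-discrete shape convention; it assumes the import path shown and a 1024x1024 image, which corresponds to 128x128 latents with 16 channels.

```python
import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# Illustration only: a (batch_size, num_channels, height, width) latent prediction,
# as the transformer would produce for a 1024x1024 image (16 channels, 128x128 latents).
output = Transformer2DModelOutput(sample=torch.randn(1, 16, 128, 128))
print(output.sample.shape)  # torch.Size([1, 16, 128, 128])
```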