A next version of the Diffusion Transformer (Next-DiT) model for 2D data, from Lumina-T2X.
( sample_size: int = 128 patch_size: Optional = 2 in_channels: Optional = 4 hidden_size: Optional = 2304 num_layers: Optional = 32 num_attention_heads: Optional = 32 num_kv_heads: Optional = None multiple_of: Optional = 256 ffn_dim_multiplier: Optional = None norm_eps: Optional = 1e-05 learn_sigma: Optional = True qk_norm: Optional = True cross_attention_dim: Optional = 2048 scaling_factor: Optional = 1.0 )
Parameters

sample_size (int, defaults to 128) — The width of the latent images. This is fixed during training since it is used to learn a number of position embeddings.
patch_size (int, optional, defaults to 2) — The size of each patch in the image. This parameter defines the resolution of patches fed into the model.
in_channels (int, optional, defaults to 4) — The number of input channels for the model. Typically, this matches the number of channels in the input images.
hidden_size (int, optional, defaults to 2304) — The dimensionality of the hidden layers in the model. This parameter determines the width of the model's hidden representations.
num_layers (int, optional, defaults to 32) — The number of layers in the model. This defines the depth of the neural network.
num_attention_heads (int, optional, defaults to 32) — The number of attention heads in each attention layer. This parameter specifies how many separate attention mechanisms are used.
num_kv_heads (int, optional) — The number of key-value heads in the attention mechanism, if different from the number of attention heads. If None, it defaults to num_attention_heads.
multiple_of (int, optional, defaults to 256) — A factor that the hidden dimension of the feed-forward network should be a multiple of. This can help optimize certain hardware configurations.
ffn_dim_multiplier (float, optional) — A multiplier for the dimensionality of the feed-forward network. If None, a default value based on the model configuration is used.
norm_eps (float, optional, defaults to 1e-5) — A small value added to the denominator for numerical stability in normalization layers.
learn_sigma (bool, optional, defaults to True) — Whether the model should learn a sigma (variance) output alongside its main prediction.
qk_norm (bool, optional, defaults to True) — Indicates whether the queries and keys in the attention mechanism should be normalized.
cross_attention_dim (int, optional, defaults to 2048) — The dimensionality of the text embeddings. This parameter defines the size of the text representations used in the model.
scaling_factor (float, optional, defaults to 1.0) — A scaling factor applied to certain parameters or layers in the model. This can be used for adjusting the overall scale of the model's operations.

LuminaNextDiT: Diffusion model with a Transformer backbone.

Inherits ModelMixin and ConfigMixin so it is compatible with the samplers and pipelines of diffusers.
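A minimal construction sketch with toy sizes chosen purely for illustration; the values below are hypothetical and much smaller than any released Lumina-Next checkpoint, which uses the defaults documented above.

```python
from diffusers import LuminaNextDiT2DModel

# Toy configuration for illustration only; real checkpoints use the
# documented defaults (hidden_size=2304, num_layers=32, ...).
model = LuminaNextDiT2DModel(
    sample_size=32,          # latent width the model expects
    patch_size=2,
    in_channels=4,
    hidden_size=64,          # must be divisible by num_attention_heads
    num_layers=2,
    num_attention_heads=4,
    num_kv_heads=2,          # grouped-query attention: 2 KV heads shared by 4 query heads
    cross_attention_dim=32,  # must match the text encoder's feature dimension
)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```

In practice the transformer is usually loaded with from_pretrained from a pipeline repository rather than constructed from scratch.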
( chunk_size: Optional = None dim: int = 0 )
Parameters

chunk_size (int, optional) — The chunk size of the feed-forward layers. If not specified, the feed-forward layer is run individually over each tensor of dim=dim.
dim (int, optional, defaults to 0) — The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length).

Enables feed-forward chunking, computing the feed-forward layers chunk by chunk to reduce peak memory usage.
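A usage sketch, assuming model is the toy instance constructed above; chunking trades some speed for a lower peak memory footprint.

```python
# Process the feed-forward layers one sequence chunk at a time (dim=1).
# A smaller chunk_size lowers peak memory at the cost of more kernel launches.
model.enable_forward_chunking(chunk_size=1, dim=1)
```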
( hidden_states: Tensor timestep: Tensor encoder_hidden_states: Tensor encoder_mask: Tensor image_rotary_emb: Tensor cross_attention_kwargs: Dict = None return_dict = True )
The forward method of LuminaNextDiT2DModel. Check the Lumina paper for details.
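A sketch of calling the model directly on random tensors, using the toy instance from above. The rotary-embedding helper get_2d_rotary_pos_embed_lumina and the grid size passed to it are assumptions modeled on how the Lumina pipeline prepares image_rotary_emb; check the installed diffusers version if the call differs.

```python
import torch
from diffusers.models.embeddings import get_2d_rotary_pos_embed_lumina  # assumed helper

batch = 2
latents = torch.randn(batch, 4, 32, 32)       # (N, in_channels, H, W)
timestep = torch.rand(batch)                  # one diffusion timestep per sample
captions = torch.randn(batch, 16, 32)         # (N, seq_len, cross_attention_dim)
caption_mask = torch.ones(batch, 16, dtype=torch.bool)

# head_dim = hidden_size // num_attention_heads = 64 // 4 = 16; the grid must
# cover the patchified latent grid (32 / patch_size = 16 tokens per side).
rope = get_2d_rotary_pos_embed_lumina(16, 64, 64)

out = model(
    hidden_states=latents,
    timestep=timestep,
    encoder_hidden_states=captions,
    encoder_mask=caption_mask,
    image_rotary_emb=rope,
    return_dict=True,
)
# With learn_sigma=True the sample may carry extra variance channels; the
# Lumina pipeline keeps only the first in_channels of them.
print(out.sample.shape)
```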
Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.
This API is 🧪 experimental.
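A sketch pairing the two experimental calls around inference (unfuse_qkv_projections is documented further below).

```python
model.fuse_qkv_projections()    # fuse Q/K/V (self-attn) and K/V (cross-attn) matmuls
# ... run inference with the fused projections ...
model.unfuse_qkv_projections()  # restore the original, unfused projection layers
```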
( processor: Union )
Parameters
processor (dict of AttentionProcessor or only AttentionProcessor) —
The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.
Sets the attention processor to use to compute attention.
Disables custom attention processors and sets the default attention implementation.
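A sketch of both methods, assuming LuminaAttnProcessor2_0 is the attention processor class Lumina uses in diffusers and model is the instance from earlier.

```python
from diffusers.models.attention_processor import LuminaAttnProcessor2_0

# One shared processor instance for every attention layer:
model.set_attn_processor(LuminaAttnProcessor2_0())

# Or a dict keyed by each layer's path, e.g. when training per-layer processors:
model.set_attn_processor({name: LuminaAttnProcessor2_0() for name in model.attn_processors})

# Revert to the library's default implementation:
model.set_default_attn_processor()
```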
Disables the fused QKV projection if enabled.
This API is 🧪 experimental.