Normalization layers

Customized normalization layers that support the various models in 🤗 Diffusers.

AdaLayerNorm

class diffusers.models.normalization.AdaLayerNorm

( embedding_dim: int num_embeddings: typing.Optional[int] = None output_dim: typing.Optional[int] = None norm_elementwise_affine: bool = False norm_eps: float = 1e-05 chunk_dim: int = 0 )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • num_embeddings (int, optional) — The size of the embeddings dictionary.
  • output_dim (int, optional) — The output dimension of the embedding projection.
  • norm_elementwise_affine (bool, defaults to False) — Whether the norm layer uses learnable elementwise affine parameters.
  • norm_eps (float, defaults to 1e-5) — The epsilon value for numerical stability in the norm layer.
  • chunk_dim (int, defaults to 0) — The dimension along which the projected embedding is chunked into shift and scale.

Norm layer modified to incorporate timestep embeddings.
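
To illustrate the idea (an explanatory sketch in plain Python, not the diffusers implementation): the timestep embedding is assumed to be projected to a per-channel shift and scale, which then modulate a layer norm that itself has no learned affine parameters.

```python
import math

def layer_norm(x, eps=1e-5):
    # Plain layer norm without learnable affine parameters
    # (matching norm_elementwise_affine=False above).
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return [(v - mu) / math.sqrt(var + eps) for v in x]

def ada_layer_norm(x, shift, scale):
    # shift and scale are assumed to come from a linear projection of the
    # timestep embedding, chunked in two along chunk_dim.
    return [n * (1.0 + s) + b for n, s, b in zip(layer_norm(x), scale, shift)]

out = ada_layer_norm([1.0, 2.0, 3.0], shift=[0.0] * 3, scale=[0.0] * 3)
```

With zero shift and scale this reduces to a plain layer norm, so the output has zero mean.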

AdaLayerNormZero

class diffusers.models.normalization.AdaLayerNormZero

( embedding_dim: int num_embeddings: typing.Optional[int] = None norm_type = 'layer_norm' bias = True )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • num_embeddings (int, optional) — The size of the embeddings dictionary.

Adaptive layer norm zero (adaLN-Zero) layer.
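
The distinguishing feature of the zero variant can be sketched as follows (an illustration, not the diffusers code): in addition to shift and scale, the conditioning is assumed to produce a gate on the residual branch that is initialized to zero, so each block starts out as the identity.

```python
import math

def modulate(x, shift, scale, eps=1e-5):
    # Parameter-free layer norm followed by conditioning-derived
    # shift/scale, as in adaptive layer norm.
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    normed = [(v - mu) / math.sqrt(var + eps) for v in x]
    return [n * (1.0 + s) + b for n, s, b in zip(normed, scale, shift)]

# The gate produced by the conditioning projection is zero-initialized, so
# at initialization the gated residual branch vanishes and the block acts
# as the identity.
x = [0.5, -1.0, 2.0]
gate = [0.0, 0.0, 0.0]
branch = modulate(x, shift=[0.1] * 3, scale=[0.2] * 3)
out = [h + g * b for h, g, b in zip(x, gate, branch)]
```

Because the gate is zero, `out` equals `x` exactly at initialization; training then learns how strongly each branch contributes.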

AdaLayerNormSingle

class diffusers.models.normalization.AdaLayerNormSingle

( embedding_dim: int use_additional_conditions: bool = False )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • use_additional_conditions (bool) — To use additional conditions for normalization or not.

Adaptive layer norm single (adaLN-single) layer.

As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3).

AdaGroupNorm

class diffusers.models.normalization.AdaGroupNorm

( embedding_dim: int out_dim: int num_groups: int act_fn: typing.Optional[str] = None eps: float = 1e-05 )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • out_dim (int) — The number of output channels.
  • num_groups (int) — The number of groups to separate the channels into.
  • act_fn (str, optional, defaults to None) — The activation function to use.
  • eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability.

GroupNorm layer modified to incorporate timestep embeddings.
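
The same modulation idea applied to group normalization can be sketched as follows (a plain-Python illustration over a flat channel list, not the diffusers implementation): channels are normalized in contiguous groups, then scaled and shifted by values assumed to be derived from the timestep embedding.

```python
import math

def ada_group_norm(x, num_groups, shift, scale, eps=1e-5):
    # Normalize each contiguous group of channels to zero mean and unit
    # variance, then modulate with per-channel scale/shift assumed to come
    # from a projection of the timestep embedding.
    size = len(x) // num_groups
    out = []
    for g in range(num_groups):
        grp = x[g * size:(g + 1) * size]
        mu = sum(grp) / size
        var = sum((v - mu) ** 2 for v in grp) / size
        out += [(v - mu) / math.sqrt(var + eps) for v in grp]
    return [v * (1.0 + s) + b for v, s, b in zip(out, scale, shift)]

out = ada_group_norm([1.0, 2.0, 3.0, 4.0], num_groups=2,
                     shift=[0.0] * 4, scale=[0.0] * 4)
```

With zero shift and scale, each group of the output has zero mean, as in plain group norm.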

AdaLayerNormContinuous

class diffusers.models.normalization.AdaLayerNormContinuous

( embedding_dim: int conditioning_embedding_dim: int elementwise_affine = True eps = 1e-05 bias = True norm_type = 'layer_norm' )

Parameters

  • embedding_dim (int) — Embedding dimension to use during projection.
  • conditioning_embedding_dim (int) — Dimension of the input condition.
  • elementwise_affine (bool, defaults to True) — Boolean flag to denote if affine transformation should be applied.
  • eps (float, defaults to 1e-5) — Epsilon factor.
  • bias (bool, defaults to True) — Boolean flag to denote if bias should be used.
  • norm_type (str, defaults to "layer_norm") — Normalization layer to use. Values supported: “layer_norm”, “rms_norm”.

Adaptive normalization layer with a norm layer (layer_norm or rms_norm).

RMSNorm

class diffusers.models.normalization.RMSNorm

( dim eps: float elementwise_affine: bool = True bias: bool = False )

Parameters

  • dim (int) — Number of dimensions to use for weights. Only effective when elementwise_affine is True.
  • eps (float) — Small value to use when calculating the reciprocal of the square-root.
  • elementwise_affine (bool, defaults to True) — Boolean flag to denote if affine transformation should be applied.
  • bias (bool, defaults to False) — Whether to also learn a bias parameter.

RMS Norm as introduced in https://arxiv.org/abs/1910.07467 by Zhang et al.
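
The core computation from the paper can be sketched in plain Python (the diffusers class additionally handles dtypes and an optional bias; this shows only the formula):

```python
import math

def rms_norm(x, weight, eps):
    # Scale by the reciprocal root-mean-square of the inputs; unlike
    # layer norm there is no mean subtraction.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for v, w in zip(x, weight)]

out = rms_norm([1.0, 2.0, 3.0], weight=[1.0, 1.0, 1.0], eps=1e-6)
```

With unit weights, the output has root-mean-square (approximately) equal to 1.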

GlobalResponseNorm

class diffusers.models.normalization.GlobalResponseNorm

( dim )

Parameters

  • dim (int) — Number of dimensions to use for the gamma and beta.

Global response normalization as introduced in ConvNeXt-v2 (https://arxiv.org/abs/2301.00808).
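
The ConvNeXt-v2 formula can be sketched as follows (an illustration over a channels-first list-of-lists layout, not the diffusers implementation, which operates on tensors): aggregate each channel into its L2 norm over spatial positions, divide by the cross-channel mean of those norms, then apply a learned residual calibration gamma * (x * nx) + beta + x.

```python
import math

def global_response_norm(x, gamma, beta):
    # x: list of channels, each a list of spatial values.
    # gx: per-channel L2 norm aggregated over spatial positions.
    gx = [math.sqrt(sum(v * v for v in ch)) for ch in x]
    mean_gx = sum(gx) / len(gx)
    # nx: each channel's norm divided by the cross-channel mean
    # (small eps assumed for numerical safety).
    nx = [g / (mean_gx + 1e-6) for g in gx]
    # Residual form: gamma * (x * nx) + beta + x.
    return [[gm * v * n + bt + v for v in ch]
            for ch, n, gm, bt in zip(x, nx, gamma, beta)]

x = [[1.0, 2.0], [3.0, 4.0]]
out = global_response_norm(x, gamma=[0.0, 0.0], beta=[0.0, 0.0])
```

Because gamma and beta enter residually, zero-initializing them makes the layer start as the identity.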

LuminaLayerNormContinuous

class diffusers.models.normalization.LuminaLayerNormContinuous

( embedding_dim: int conditioning_embedding_dim: int elementwise_affine = True eps = 1e-05 bias = True norm_type = 'layer_norm' out_dim: typing.Optional[int] = None )

SD35AdaLayerNormZeroX

class diffusers.models.normalization.SD35AdaLayerNormZeroX

( embedding_dim: int norm_type: str = 'layer_norm' bias: bool = True )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • norm_type (str, defaults to "layer_norm") — The type of normalization layer to use.
  • bias (bool, defaults to True) — Whether to use a bias in the modulation projection.

Adaptive layer norm zero (adaLN-Zero) layer.

AdaLayerNormZeroSingle

class diffusers.models.normalization.AdaLayerNormZeroSingle

( embedding_dim: int norm_type = 'layer_norm' bias = True )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • norm_type (str, defaults to "layer_norm") — The type of normalization layer to use.
  • bias (bool, defaults to True) — Whether to use a bias in the modulation projection.

Adaptive layer norm zero (adaLN-Zero) layer.

LuminaRMSNormZero

class diffusers.models.normalization.LuminaRMSNormZero

( embedding_dim: int norm_eps: float norm_elementwise_affine: bool )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • norm_eps (float) — The epsilon value for the RMS norm.
  • norm_elementwise_affine (bool) — Whether the RMS norm uses learnable elementwise affine parameters.

Adaptive RMS normalization zero layer.

LpNorm

class diffusers.models.normalization.LpNorm

( p: int = 2 dim: int = -1 eps: float = 1e-12 )
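
Given the signature, a plausible minimal sketch of Lp normalization (dividing by the Lp norm, clamped from below by eps, in the style of torch.nn.functional.normalize; this is an illustration, not the diffusers implementation):

```python
def lp_normalize(x, p=2, eps=1e-12):
    # Divide each element by the vector's Lp norm; eps guards against
    # division by zero for (near-)zero inputs.
    norm = sum(abs(v) ** p for v in x) ** (1.0 / p)
    return [v / max(norm, eps) for v in x]

out = lp_normalize([3.0, 4.0])
# → [0.6, 0.8], a unit vector under the L2 norm
```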

CogView3PlusAdaLayerNormZeroTextImage

class diffusers.models.normalization.CogView3PlusAdaLayerNormZeroTextImage

( embedding_dim: int dim: int )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • dim (int) — The dimension of the hidden states being normalized.

Adaptive layer norm zero (adaLN-Zero) layer.

CogVideoXLayerNormZero

class diffusers.models.normalization.CogVideoXLayerNormZero

( conditioning_dim: int embedding_dim: int elementwise_affine: bool = True eps: float = 1e-05 bias: bool = True )

MochiRMSNormZero

class diffusers.models.transformers.transformer_mochi.MochiRMSNormZero

( embedding_dim: int hidden_dim: int eps: float = 1e-05 elementwise_affine: bool = False )

Parameters

  • embedding_dim (int) — The size of each embedding vector.
  • hidden_dim (int) — The size of the hidden states to modulate.
  • eps (float, defaults to 1e-5) — Epsilon value for numerical stability.
  • elementwise_affine (bool, defaults to False) — Whether the RMS norm uses learnable elementwise affine parameters.

Adaptive RMS Norm used in Mochi.

MochiRMSNorm

class diffusers.models.normalization.MochiRMSNorm

( dim eps: float elementwise_affine: bool = True )
