remote_decode

( endpoint: str, tensor: torch.Tensor, processor: typing.Union[ForwardRef('VaeImageProcessor'), ForwardRef('VideoProcessor'), NoneType] = None, do_scaling: bool = True, scaling_factor: typing.Optional[float] = None, shift_factor: typing.Optional[float] = None, output_type: typing.Literal['mp4', 'pil', 'pt'] = 'pil', return_type: typing.Literal['mp4', 'pil', 'pt'] = 'pil', image_format: typing.Literal['png', 'jpg'] = 'jpg', partial_postprocess: bool = False, input_tensor_type: typing.Literal['binary'] = 'binary', output_tensor_type: typing.Literal['binary'] = 'binary', height: typing.Optional[int] = None, width: typing.Optional[int] = None )
Parameters
endpoint (str) —
Endpoint for Remote Decode.

tensor (torch.Tensor) —
Tensor to be decoded.

processor (VaeImageProcessor or VideoProcessor, optional) —
Used with return_type="pt", and with return_type="pil" for video models.

do_scaling (bool, default True, optional) —
DEPRECATED: pass scaling_factor/shift_factor instead. Until the option is removed, set do_scaling=None or do_scaling=False to disable scaling. When True, scaling (e.g. latents / self.vae.config.scaling_factor) is applied remotely. When False, the input must be passed with scaling already applied.

scaling_factor (float, optional) —
When passed, scaling (e.g. latents / self.vae.config.scaling_factor) is applied remotely. When None, the input must be passed with scaling already applied.

shift_factor (float, optional) —
When passed, shift (e.g. latents + self.vae.config.shift_factor) is applied remotely. When None, the input must be passed with the shift already applied.

output_type ("mp4" or "pil" or "pt", default "pil") —
Endpoint output type. Subject to change; report feedback on your preferred type.
"mp4": Supported by video models. Endpoint returns bytes of video.
"pil": Supported by image and video models. Image models: endpoint returns bytes of an image in image_format. Video models: endpoint returns torch.Tensor with partial postprocessing applied; requires processor to be set (any non-None value will work).
"pt": Supported by image and video models. Endpoint returns torch.Tensor. With partial_postprocess=True the tensor is a postprocessed uint8 image tensor.
Recommendations:
"pt" with partial_postprocess=True is the smallest transfer for full quality.
"pt" with partial_postprocess=False is the most compatible with third-party code.
"pil" with image_format="jpg" is the smallest transfer overall.

return_type ("mp4" or "pil" or "pt", default "pil") —
Function return type.
"mp4": Function returns bytes of video.
"pil": Function returns PIL.Image.Image. With output_type="pil" no further processing is applied. With output_type="pt" a PIL.Image.Image is created: partial_postprocess=False requires processor; partial_postprocess=True does **not** require processor.
"pt": Function returns torch.Tensor; processor is **not** required. With partial_postprocess=False the tensor is float16 or bfloat16, without denormalization. With partial_postprocess=True the tensor is uint8, denormalized.

image_format ("png" or "jpg", default "jpg") —
Used with output_type="pil". Endpoint returns jpg or png.

partial_postprocess (bool, default False) —
Used with output_type="pt". With partial_postprocess=False the tensor is float16 or bfloat16, without denormalization. With partial_postprocess=True the tensor is uint8, denormalized.

input_tensor_type ("binary", default "binary") —
Tensor transfer type.

output_tensor_type ("binary", default "binary") —
Tensor transfer type.

height (int, optional) —
Required for "packed" latents.

width (int, optional) —
Required for "packed" latents.

Hugging Face Hybrid Inference helper that runs VAE decode remotely.
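The interaction between scaling_factor/shift_factor and pre-scaled inputs can be sketched as follows. The helper mirrors the decode-side transform described above (latents / scaling_factor, then + shift_factor) on plain floats; the remote_decode call itself is shown only in comments, since the endpoint URL and factor values are placeholders, not defaults.

```python
# Hypothetical helper mirroring the transform applied remotely for decode when
# scaling_factor / shift_factor are passed. When both are None, remote_decode
# expects the caller to have applied this transform already.
def apply_decode_scaling(latent, scaling_factor=None, shift_factor=None):
    if scaling_factor is not None:
        latent = latent / scaling_factor  # e.g. latents / self.vae.config.scaling_factor
    if shift_factor is not None:
        latent = latent + shift_factor    # e.g. latents + self.vae.config.shift_factor
    return latent

# Sketch of a remote_decode call (placeholder endpoint; example factor value —
# use your VAE's config, these are not defaults):
# from diffusers.utils.remote_utils import remote_decode
# image = remote_decode(
#     endpoint="https://<your-decode-endpoint>/",
#     tensor=latents,                  # torch.Tensor from a pipeline
#     scaling_factor=0.18215,
#     output_type="pt",
#     return_type="pil",
#     partial_postprocess=True,        # endpoint sends a denormalized uint8 tensor
# )
```

With partial_postprocess=True and return_type="pil", no processor is needed, matching the recommendation above for the smallest full-quality transfer.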
remote_encode

( endpoint: str, image: typing.Union[ForwardRef('torch.Tensor'), PIL.Image.Image], scaling_factor: typing.Optional[float] = None, shift_factor: typing.Optional[float] = None )
Parameters
endpoint (str) —
Endpoint for Remote Encode.

image (torch.Tensor or PIL.Image.Image) —
Image to be encoded.

scaling_factor (float, optional) —
When passed, scaling (e.g. latents * self.vae.config.scaling_factor) is applied remotely. When None, the caller must apply the scaling to the returned latent.

shift_factor (float, optional) —
When passed, shift (e.g. latents - self.vae.config.shift_factor) is applied remotely. When None, the caller must apply the shift to the returned latent.

Hugging Face Hybrid Inference helper that runs VAE encode remotely.
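The encode-side factors apply the inverse of the decode transform: shift first (latents - shift_factor), then scale (latents * scaling_factor), assuming the usual diffusers convention. A minimal sketch on plain floats, with the remote_encode call shown only as a commented placeholder:

```python
# Hypothetical helper mirroring what is applied remotely for encode when the
# factors are passed. When both are None, the caller applies this transform to
# the returned latent instead.
def apply_encode_scaling(latent, scaling_factor=None, shift_factor=None):
    if shift_factor is not None:
        latent = latent - shift_factor    # e.g. latents - self.vae.config.shift_factor
    if scaling_factor is not None:
        latent = latent * scaling_factor  # e.g. latents * self.vae.config.scaling_factor
    return latent

# Sketch of a remote_encode call (placeholder endpoint; example factor values —
# use your VAE's config, these are not defaults):
# from diffusers.utils.remote_utils import remote_encode
# latent = remote_encode(
#     endpoint="https://<your-encode-endpoint>/",
#     image=image,                  # PIL.Image.Image or torch.Tensor
#     scaling_factor=0.3611,
#     shift_factor=0.1159,
# )
```

Note that this transform is the exact inverse of the decode-side one: encoding then decoding with the same factors round-trips the latent.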