Hybrid Inference API Reference

Remote Decode

diffusers.utils.remote_decode


( endpoint: str tensor: torch.Tensor processor: typing.Union[ForwardRef('VaeImageProcessor'), ForwardRef('VideoProcessor'), NoneType] = None do_scaling: bool = True scaling_factor: typing.Optional[float] = None shift_factor: typing.Optional[float] = None output_type: typing.Literal['mp4', 'pil', 'pt'] = 'pil' return_type: typing.Literal['mp4', 'pil', 'pt'] = 'pil' image_format: typing.Literal['png', 'jpg'] = 'jpg' partial_postprocess: bool = False input_tensor_type: typing.Literal['binary'] = 'binary' output_tensor_type: typing.Literal['binary'] = 'binary' height: typing.Optional[int] = None width: typing.Optional[int] = None )

Parameters

  • endpoint (str) — Endpoint for Remote Decode.
  • tensor (torch.Tensor) — Tensor to be decoded.
  • processor (VaeImageProcessor or VideoProcessor, optional) — Used with return_type="pt", and with return_type="pil" for video models.
  • do_scaling (bool, default True, optional) — DEPRECATED: pass scaling_factor/shift_factor instead. Until the option is removed, do_scaling=None or do_scaling=False can still be set to disable scaling. When True, scaling (e.g. latents / self.vae.config.scaling_factor) is applied remotely. If False, the input must be passed with scaling already applied.
  • scaling_factor (float, optional) — When passed, scaling is applied remotely, e.g. latents / self.vae.config.scaling_factor.

    • SD v1: 0.18215
    • SD XL: 0.13025
    • Flux: 0.3611

    If None, the input must be passed with scaling already applied.
  • shift_factor (float, optional) — When passed, a shift is applied remotely, e.g. latents + self.vae.config.shift_factor.

    • Flux: 0.1159

    If None, the input must be passed with the shift already applied.
  • output_type ("mp4" or "pil" or "pt", default "pil") — Endpoint output type. Subject to change; report feedback on preferred type.

    "mp4": Supported by video models. Endpoint returns bytesof video.“pil”: Supported by image and video models. Image models: Endpoint returns bytesof an image inimage_format. Video models: Endpoint returns torch.Tensorwith partialpostprocessingapplied. Requiresprocessoras a flag (anyNonevalue will work).“pt”: Support by image and video models. Endpoint returns torch.Tensor. With partial_postprocess=Truethe tensor is postprocesseduint8` image tensor.

    Recommendations: "pt" with partial_postprocess=True is the smallest transfer for full quality. "pt" with partial_postprocess=False is the most compatible with third party code. "pil" with image_format="jpg" is the smallest transfer overall.

  • return_type ("mp4" or "pil" or "pt", default "pil") — Function return type.

    "mp4": Function returns bytesof video.“pil”: Function returns PIL.Image.Image. With output_type=“pil” no further processing is applied. With output_type="pt" a PIL.Image.Imageis created.partial_postprocess=False processoris required.partial_postprocess=True processoris **not** required.“pt”: Function returns torch.Tensor. processoris **not** required.partial_postprocess=Falsetensor isfloat16orbfloat16, without denormalization. partial_postprocess=Truetensor isuint8`, denormalized.

  • image_format ("png" or "jpg", default "jpg") — Used with output_type="pil". Endpoint returns a jpg or png image.
  • partial_postprocess (bool, default False) — Used with output_type="pt". partial_postprocess=False tensor is float16 or bfloat16, without denormalization. partial_postprocess=True tensor is uint8, denormalized.
  • input_tensor_type ("binary", default "binary") — Tensor transfer type.
  • output_tensor_type ("binary", default "binary") — Tensor transfer type.
  • height (int, optional) — Required for "packed" latents.
  • width (int, optional) — Required for "packed" latents.

Hugging Face Hybrid Inference helper that allows running VAE decode remotely.
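
For example, here is a minimal sketch of a remote decode call following the "pt" with partial_postprocess=True recommendation above. The endpoint URL is a placeholder (substitute the VAE decode endpoint you deploy or are given), and the random latent stands in for real pipeline output; the scaling factor is the SD v1 value listed above.

    import torch
    from diffusers.utils import remote_decode

    # Placeholder endpoint -- substitute your own Hybrid Inference VAE decode URL.
    ENDPOINT = "https://<your-vae-decode-endpoint>.endpoints.huggingface.cloud/"

    # Stand-in for latents from an SD v1 pipeline (1, 4, 64, 64 -> 512x512 image).
    latent = torch.randn(1, 4, 64, 64, dtype=torch.float16)

    image = remote_decode(
        endpoint=ENDPOINT,
        tensor=latent,
        scaling_factor=0.18215,    # SD v1 (see the list above)
        output_type="pt",          # endpoint returns a tensor ...
        partial_postprocess=True,  # ... already denormalized to uint8
        return_type="pil",         # function wraps it in a PIL.Image.Image
    )
    image.save("decoded.jpg")

Because partial_postprocess=True here, no processor is needed and the transfer is a compact uint8 tensor at full quality.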

Remote Encode

diffusers.utils.remote_utils.remote_encode


( endpoint: str image: typing.Union[ForwardRef('torch.Tensor'), PIL.Image.Image] scaling_factor: typing.Optional[float] = None shift_factor: typing.Optional[float] = None )

Parameters

  • endpoint (str) — Endpoint for Remote Encode.
  • image (torch.Tensor or PIL.Image.Image) — Image to be encoded.
  • scaling_factor (float, optional) — When passed, scaling is applied remotely, e.g. latents * self.vae.config.scaling_factor.
    • SD v1: 0.18215
    • SD XL: 0.13025
    • Flux: 0.3611

    If None, the input must be passed with scaling already applied.
  • shift_factor (float, optional) — When passed, a shift is applied remotely, e.g. latents - self.vae.config.shift_factor.
    • Flux: 0.1159

    If None, the input must be passed with the shift already applied.

Hugging Face Hybrid Inference helper that allows running VAE encode remotely.
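
For example, a minimal sketch of a remote encode call; again the endpoint URL is a placeholder and the scaling factor is the SD v1 value listed above.

    from PIL import Image
    from diffusers.utils.remote_utils import remote_encode

    # Placeholder endpoint -- substitute your own Hybrid Inference VAE encode URL.
    ENDPOINT = "https://<your-vae-encode-endpoint>.endpoints.huggingface.cloud/"

    init_image = Image.open("input.jpg").convert("RGB")

    latent = remote_encode(
        endpoint=ENDPOINT,
        image=init_image,
        scaling_factor=0.18215,  # SD v1 (see the list above)
    )
    # latent is a torch.Tensor that can be passed to a pipeline, e.g. as latents=.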
