🤗 Optimum

🤗 Optimum is a collection of libraries that makes it easy to optimize your models and deploy them on a variety of cloud providers and hardware accelerators.

Check out the sections below to learn more about our cloud, hardware and on-prem partners. You can reach out to hardware@huggingface.co to request more information about our current and future partnerships.

Cloud Partners

AWS

Deploy your models in a few clicks on AWS with SageMaker.

GCP

Build your own AI with the latest open models from Hugging Face and the latest cloud and hardware features from Google Cloud.

Azure

Deploy machine learning models to a dedicated endpoint with the enterprise-grade infrastructure of Azure.

Cloudflare

Serverless inference for AI models.

Hardware Partners

NVIDIA

Accelerate inference with NVIDIA TensorRT-LLM on the NVIDIA platform.

AMD

Enable performance optimizations for AMD Instinct GPUs and AMD Ryzen AI NPUs.

Intel

Optimize your model to speed up inference with OpenVINO and Neural Compressor.

AWS Trainium/Inferentia

Accelerate your training and inference workflows with AWS Trainium and AWS Inferentia.

Google TPUs

Accelerate your training and inference workflows with Google TPUs.

Habana

Maximize training throughput and efficiency with Habana's Gaudi processor.

FuriosaAI

Fast and efficient inference on FuriosaAI WARBOY.

Some packages also provide hardware-agnostic features (e.g. the Intel Neural Compressor (INC) interface in Optimum Intel).

On-prem Partners

Dell

The Dell Enterprise Hub features custom, ready-to-deploy containers and scripts that facilitate the easy, secure deployment of open-source models available on Hugging Face.

Open-source integrations

🤗 Optimum also integrates with a variety of open-source frameworks to make model optimization easy.

ONNX Runtime

Apply quantization and graph optimization to accelerate training and inference of Transformers models with ONNX Runtime.

Exporters

Export your PyTorch or TensorFlow model to different formats such as ONNX and TFLite.

BetterTransformer

A one-liner integration to use PyTorch's BetterTransformer with Transformers models.

Torch FX

Create and compose custom graph transformations to optimize PyTorch Transformers models with Torch FX.
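To illustrate the mechanism these utilities build on, here is a minimal plain `torch.fx` transformation (not Optimum's own API): trace a module into a graph, rewrite one of its nodes, and recompile. The toy module and the add-to-mul rewrite are purely illustrative.

```python
# Sketch: a minimal custom Torch FX graph transformation (plain torch.fx,
# illustrating the mechanism Optimum's graph transformations build on).
import torch
from torch import fx

class AddNet(torch.nn.Module):
    def forward(self, x):
        return torch.add(x, x)

# Trace the module into an FX graph
traced = fx.symbolic_trace(AddNet())

# Rewrite every add(x, x) call into mul(x, x), then regenerate the code
for node in traced.graph.nodes:
    if node.op == "call_function" and node.target is torch.add:
        node.target = torch.mul
traced.recompile()

print(traced(torch.tensor(3.0)))
```

Real optimizations (e.g. fusing consecutive linear layers) follow the same trace-rewrite-recompile pattern, just with more involved graph matching.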
