🤗 Optimum is a collection of libraries that lets you easily deploy and optimize your models on a wide range of cloud providers and hardware accelerators.
Check out the sections below to learn more about our cloud, hardware and on-prem partners. You can reach out to hardware@huggingface.co to request more information about our current and future partnerships.
Deploy your models in a few clicks on AWS with SageMaker.
Build your own AI with the latest open models from Hugging Face and the latest cloud and hardware features from Google Cloud.
Deploy machine learning models to a dedicated endpoint with the enterprise-grade infrastructure of Azure.
Serverless inference for AI models.
Accelerate inference with NVIDIA TensorRT-LLM on the NVIDIA platform.
Enable performance optimizations for AMD Instinct GPUs and AMD Ryzen AI NPUs.
Optimize your model to speed up inference with OpenVINO and Neural Compressor.
Accelerate your training and inference workflows with AWS Trainium and AWS Inferentia.
Accelerate your training and inference workflows with Google TPUs.
Maximize training throughput and efficiency with Habana's Gaudi processor.
Fast and efficient inference on FuriosaAI WARBOY.
Some packages provide hardware-agnostic features (e.g. the Intel Neural Compressor (INC) interface in Optimum Intel).
🤗 Optimum also supports a variety of open-source frameworks to make model optimization very easy.
Apply quantization and graph optimization to accelerate training and inference of Transformers models with ONNX Runtime.
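As a rough sketch of what this looks like in practice, assuming the `optimum.onnxruntime` classes `ORTModelForSequenceClassification`, `ORTQuantizer`, and `AutoQuantizationConfig` and a placeholder checkpoint name, dynamic quantization of an exported model could look like this:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export a Transformers checkpoint to ONNX on the fly (placeholder model name).
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", export=True
)

# Apply dynamic quantization to the exported model and save the result.
quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="onnx_quantized", quantization_config=qconfig)
```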
Export your PyTorch or TensorFlow model to different formats such as ONNX and TFLite.
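The export can be driven programmatically as well as through the `optimum-cli export onnx` command; a minimal sketch, assuming `optimum.exporters.onnx.main_export` and placeholder checkpoint, task, and output directory names:

```python
from optimum.exporters.onnx import main_export

# Export a Transformers checkpoint to ONNX (model name, output directory,
# and task below are placeholders for your own values).
main_export(
    "distilbert-base-uncased-finetuned-sst-2-english",
    output="onnx_export",
    task="text-classification",
)
```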
A one-liner integration to use PyTorch's BetterTransformer with Transformers models.
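The integration really is a single call; a minimal sketch with a placeholder checkpoint name:

```python
from transformers import AutoModelForSequenceClassification
from optimum.bettertransformer import BetterTransformer

# Load a regular Transformers model (placeholder checkpoint name).
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)

# One-liner: swap supported modules for BetterTransformer fast-path kernels.
model = BetterTransformer.transform(model)
```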
Create and compose custom graph transformations to optimize PyTorch Transformers models with Torch FX.
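A minimal sketch, assuming the built-in transformations `ChangeTrueDivToMulByInverse` and `MergeLinears` from `optimum.fx.optimization` and symbolic tracing from `transformers.utils.fx` (the checkpoint name is a placeholder):

```python
from transformers import BertModel
from transformers.utils.fx import symbolic_trace
from optimum.fx.optimization import ChangeTrueDivToMulByInverse, MergeLinears, compose

# Trace the model into a torch.fx.GraphModule (placeholder checkpoint).
model = BertModel.from_pretrained("bert-base-uncased")
traced = symbolic_trace(
    model, input_names=["input_ids", "attention_mask", "token_type_ids"]
)

# Compose two graph transformations and apply them to the traced model.
transformation = compose(ChangeTrueDivToMulByInverse(), MergeLinears())
transformed_model = transformation(traced)
```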