TorchAO is an architecture optimization library for PyTorch. It provides high-performance dtypes, optimization techniques, and kernels for inference and training, and it composes with native PyTorch features such as torch.compile and FSDP. Some benchmark numbers can be found here.
Before you begin, make sure you have PyTorch 2.5 or above and TorchAO installed:
pip install -U torch torchao
Now you can quantize a model by passing a TorchAoConfig to from_pretrained(). This works for any model in any modality, as long as it supports loading with Accelerate and contains torch.nn.Linear layers.
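As a sketch of what this looks like, the snippet below quantizes a causal language model to int4 weight-only precision. The model checkpoint name and the group_size value are illustrative choices, not requirements; any supported quantization type string accepted by TorchAoConfig can be used.

```python
from transformers import AutoModelForCausalLM, TorchAoConfig

# Illustrative settings: int4 weight-only quantization with a group size of 128.
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)

# The checkpoint name here is an example; substitute any compatible model.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quantization_config,
)
```

Once loaded, the model can be used for generation as usual; the torch.nn.Linear layers are replaced with quantized equivalents under the hood.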