TorchAO is an architecture optimization library for PyTorch. It provides high-performance dtypes, optimization techniques, and kernels for inference and training, and it composes with native PyTorch features such as torch.compile and FSDP. Some benchmark numbers can be found here.
Before you begin, make sure you have PyTorch 2.5 or above and TorchAO installed:
pip install -U torch torchao
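To double-check the setup, you can print the installed versions (a quick sanity check; it assumes torchao exposes __version__, which recent releases do):

import torch
import torchao

# The examples below expect PyTorch 2.5.0 or newer
print(torch.__version__)
print(torchao.__version__)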
Now you can quantize a model by passing a TorchAoConfig to from_pretrained(). Loading pre-quantized models is supported as well (see the serialization sketch after the example below). This works for any model in any modality, as long as it supports loading with Accelerate and contains torch.nn.Linear layers.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig

model_id = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16

# int8 weight-only quantization for the transformer's linear layers
quantization_config = TorchAoConfig("int8wo")
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=dtype,
)
pipe = FluxPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=dtype,
)
pipe.to("cuda")

prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("output.png")
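To illustrate the pre-quantized loading path mentioned earlier, here is a minimal sketch. It assumes the quantized transformer from the example above is first saved to a local directory (the path is illustrative); because TorchAO tensor subclasses are generally not safetensors-compatible, the sketch saves with safe_serialization=False and loads with use_safetensors=False.

# Save the already-quantized transformer to disk (path is illustrative)
transformer.save_pretrained("flux-transformer-int8wo", safe_serialization=False)

# Later, load the pre-quantized weights directly -- no quantization_config needed
transformer = FluxTransformer2DModel.from_pretrained(
    "flux-transformer-int8wo",
    torch_dtype=dtype,
    use_safetensors=False,
)
pipe = FluxPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=dtype,
)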
TorchAO offers seamless compatibility with torch.compile, setting it apart from other quantization methods. This makes it easy to achieve significant speedups.
# In the above code, add the following after initializing the transformer
transformer = torch.compile(transformer, mode="max-autotune", fullgraph=True)
For speed/memory benchmarks on Flux/CogVideoX, please refer to the table here.
Additionally, TorchAO supports an automatic quantization API exposed through autoquant. Autoquantization determines the best quantization strategy for a model by comparing the performance of each technique on chosen input types and shapes. At the moment this is used directly with the underlying modeling components, but Diffusers will also expose an autoquant configuration option in the future.
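As a rough sketch of what that looks like today (assuming torchao.autoquant is available in your torchao version; the pipeline and generation settings are reused from the example above), autoquant wraps a compiled module, observes the input shapes at runtime, and benchmarks candidate quantized kernels to pick the fastest option per layer:

import torch
import torchao
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Let autoquant choose the quantization strategy for the compiled transformer
pipe.transformer = torchao.autoquant(
    torch.compile(pipe.transformer, mode="max-autotune")
)

# The first call exercises the model so autoquant can benchmark on real shapes
prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]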