FLUX dev Quantized Models
This repo contains quantized versions of the FLUX dev transformer for use in InvokeAI.
Contents:
transformer/base/
- Transformer in bfloat16 copied from here

transformer/bnb_nf4/
- Transformer quantized to bitsandbytes NF4 format using this script (see the sketch below)
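The conversion script itself is not reproduced here, but the following is a minimal sketch of an equivalent NF4 quantization using diffusers' bitsandbytes integration. The source repo id and output directory are assumptions, and the actual script may use InvokeAI's own quantization utilities instead.

```python
# Hedged sketch: quantize the FLUX dev transformer to bitsandbytes NF4.
# Assumes the bf16 base weights come from black-forest-labs/FLUX.1-dev;
# the output directory mirrors this repo's layout but is hypothetical.
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 4-bit weight format
    bnb_4bit_compute_dtype=torch.bfloat16,  # match the bf16 base weights
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed source of the bf16 transformer
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

transformer.save_pretrained("transformer/bnb_nf4")  # hypothetical output dir
```

Running this requires `diffusers`, `bitsandbytes`, and `accelerate` installed, plus a CUDA GPU, since bitsandbytes 4-bit quantization is CUDA-only.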