ABDALLALSWAITI committed (verified)
Commit 8cf6344 · Parent: ce07205

Upload README.md with huggingface_hub

Files changed (1): README.md (+8 -5)

README.md CHANGED
@@ -13,13 +13,16 @@ tags:
 - Flux.1-dev
 - image-generation
 - Stable Diffusion
-base_model:
-- Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0
+- quantization
+- fp8
+inference:
+  parameters:
+    torch_dtype: torch.float8_e4m3fn
 ---
 
-# FLUX.1-dev-ControlNet-Union-Pro-2.0 (fp8)
+# FLUX.1-dev-ControlNet-Union-Pro-2.0 (FP8 Quantized)
 
-This repository contains an unified ControlNet for FLUX.1-dev model released by [Shakker Labs](https://huggingface.co/Shakker-Labs). This is a direct quantization of the original model to FP8 format for optimized inference performance (not a fine-tuned version). We provide an [online demo](https://huggingface.co/spaces/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0).
+This repository contains an FP8 quantized version of the [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0) model. **This is NOT a fine-tuned model** but a direct quantization of the original BFloat16 model to FP8 format for optimized inference performance. We provide an [online demo](https://huggingface.co/spaces/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0).
 
 # FP8 Quantization
 This model has been quantized from the original BFloat16 format to FP8 format. The benefits include:
@@ -27,7 +30,7 @@ This model has been quantized from the original BFloat16 format to FP8 format. T
 - **Faster Inference**: Potential speed improvements, especially on hardware with FP8 support
 - **Minimal Quality Loss**: Carefully calibrated quantization process to preserve output quality
 
-Note: This is a direct quantization of [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0) and preserves all the functionality of the original model.
+**Important Note**: This is a direct quantization of [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0) and preserves all the functionality of the original model. No fine-tuning or additional training has been performed.
 
 # Keynotes
 In comparison with [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro),
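
The FP8 format the new README references (`torch.float8_e4m3fn`: 1 sign bit, 4 exponent bits, 3 mantissa bits, exponent bias 7, largest finite value 448) halves storage relative to BFloat16 at the cost of coarser rounding. As a rough illustration of the precision involved, here is a minimal pure-Python sketch of round-to-nearest e4m3 quantization; it is an illustration only, not the conversion code used for this checkpoint, and it ignores NaN encoding:

```python
import math

E4M3_MAX = 448.0    # largest finite value in float8_e4m3fn
E4M3_MIN_EXP = -6   # smallest normal binade (exponent bias 7)
MANTISSA_BITS = 3

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest representable float8_e4m3fn value (NaN ignored)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0.0 else 1.0
    a = min(abs(x), E4M3_MAX)                        # saturate out-of-range magnitudes
    e = max(math.floor(math.log2(a)), E4M3_MIN_EXP)  # clamp into the subnormal range
    step = 2.0 ** (e - MANTISSA_BITS)                # value spacing within this binade
    q = round(a / step) * step                       # round to nearest grid point
    return sign * min(q, E4M3_MAX)

# A few sample values and their nearest FP8 e4m3 neighbours
for v in [0.3, 1.06, 3.3, 500.0]:
    print(v, "->", quantize_e4m3(v))
```

In PyTorch itself the equivalent conversion is simply a dtype cast (e.g. `tensor.to(torch.float8_e4m3fn)`); the sketch just makes the rounding grid explicit, showing for instance that values near 1.0 are spaced 0.125 apart and that anything above 448 saturates.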