Update README.md
README.md
CHANGED
@@ -175,14 +175,6 @@ pipe.to("cuda")
 
 See `fp8_inference_example.py` for a complete example.
 
-# Pushing Model to Hugging Face Hub
-To push your FP8 quantized model to the Hugging Face Hub, use the included script:
-
-```bash
-python push_model_to_hub.py --repo_id "ABDALLALSWAITI/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8"
-```
-
-You will need to have the `huggingface_hub` library installed and be logged in with your Hugging Face credentials.
 
 # Resources
 - [InstantX/FLUX.1-dev-IP-Adapter](https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter)
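
The remaining README text points to `fp8_inference_example.py` without showing its contents. The following is a minimal sketch of what FP8-style quantized inference for a FLUX.1-dev ControlNet Union pipeline can look like; it is not the repository's script, and the model IDs, control image, prompt, and the optimum-quanto backend are assumptions made for illustration.

```python
# Minimal sketch of FP8-style quantized inference for FLUX.1-dev + ControlNet.
# NOT the repository's fp8_inference_example.py; model IDs and parameters
# below are placeholders/assumptions.
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image
from optimum.quanto import freeze, qfloat8, quantize

# Load the ControlNet and the base FLUX.1-dev pipeline in bf16 first.
controlnet = FluxControlNetModel.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0",  # assumed base repo
    torch_dtype=torch.bfloat16,
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
)

# Quantize the heaviest modules to float8 weights to reduce VRAM usage.
quantize(pipe.transformer, weights=qfloat8)
freeze(pipe.transformer)
quantize(controlnet, weights=qfloat8)
freeze(controlnet)

pipe.to("cuda")

control_image = load_image("control.png")  # placeholder conditioning image
image = pipe(
    prompt="a photo of a cat sitting on a windowsill",
    control_image=control_image,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```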
|
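The section removed above referred to a `push_model_to_hub.py` helper. As a rough sketch of what such a script typically does with the `huggingface_hub` API (the default folder path and upload settings here are assumptions, not the repository's actual implementation):

```python
# Sketch of uploading a locally saved FP8-quantized model to the Hugging Face Hub.
# Requires `pip install huggingface_hub` and prior `huggingface-cli login`
# (or an HF_TOKEN environment variable). Paths and defaults are illustrative.
import argparse

from huggingface_hub import HfApi


def main() -> None:
    parser = argparse.ArgumentParser(description="Upload a model folder to the Hugging Face Hub.")
    parser.add_argument(
        "--repo_id",
        required=True,
        help='e.g. "ABDALLALSWAITI/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8"',
    )
    parser.add_argument(
        "--folder",
        default="./fp8_model",
        help="Local directory containing the saved weights (placeholder).",
    )
    args = parser.parse_args()

    api = HfApi()
    # Create the repo if it does not exist yet, then upload the folder contents.
    api.create_repo(repo_id=args.repo_id, repo_type="model", exist_ok=True)
    api.upload_folder(folder_path=args.folder, repo_id=args.repo_id, repo_type="model")


if __name__ == "__main__":
    main()
```

Invocation would then match the removed README snippet: `python push_model_to_hub.py --repo_id "ABDALLALSWAITI/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8"`.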