pinned: true
short_description: Inspired by our 8-Step FLUX Merged/Fusion Models
---
**Update 7/9/25:** This model is now quantized and implemented in [this example space](https://huggingface.co/spaces/LPX55/Kontext-Multi_Lightning_4bit-nf4/). Preliminary VRAM usage is around 10 GB, with faster inference. Will be experimenting with different weights and schedulers to find particularly well-performing combinations.
# FLUX.1 Kontext-dev X LoRA Experimentation
Highly experimental, will update with more details later.
- 6-8 steps
- <s>Euler, SGM Uniform (previously recommended)</s> Getting mixed results now; feel free to play around and share what works.
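The settings above can be sketched with `diffusers`. This is a minimal, hedged example of loading FLUX.1 Kontext-dev with an NF4-quantized transformer (the approach the update note describes) and running it at the suggested step count. The model ID, input path, and prompt are assumptions for illustration, not taken from this repo:

```python
# Hypothetical sketch of 4-bit (NF4) FLUX.1 Kontext-dev inference with diffusers.
# Requires: diffusers >= 0.34, bitsandbytes, torch, and a CUDA GPU.

NUM_INFERENCE_STEPS = 8  # README suggests 6-8 steps

def load_pipeline():
    # Heavy imports live inside the function so the settings above can be
    # inspected without diffusers/bitsandbytes installed.
    import torch
    from diffusers import BitsAndBytesConfig, FluxKontextPipeline, FluxTransformer2DModel

    model_id = "black-forest-labs/FLUX.1-Kontext-dev"  # assumed base checkpoint
    # NF4 4-bit quantization of the transformer is what brings VRAM down
    # to roughly the ~10 GB mentioned above.
    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    transformer = FluxTransformer2DModel.from_pretrained(
        model_id,
        subfolder="transformer",
        quantization_config=quant,
        torch_dtype=torch.bfloat16,
    )
    pipe = FluxKontextPipeline.from_pretrained(
        model_id, transformer=transformer, torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trade speed for lower VRAM
    return pipe

if __name__ == "__main__":
    from diffusers.utils import load_image

    pipe = load_pipeline()
    image = load_image("input.png")  # hypothetical input image
    result = pipe(
        image=image,
        prompt="make it a watercolor painting",  # example edit instruction
        num_inference_steps=NUM_INFERENCE_STEPS,
    ).images[0]
    result.save("output.png")
```

Scheduler swaps (e.g. Euler with SGM Uniform spacing) can be tried by replacing `pipe.scheduler` before calling the pipeline; as noted above, results there are currently mixed.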