Commit 0749e05 (verified) by LPX55 · Parent: 5a88adb

Update README.md

Files changed (1): README.md (+8 −1)
README.md CHANGED
@@ -10,4 +10,11 @@ pinned: true
  short_description: Inspired by our 8-Step FLUX Merged/Fusion Models
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ **Update 7/9/25:** This model is now quantized and implemented in [this example space](https://huggingface.co/spaces/LPX55/Kontext-Multi_Lightning_4bit-nf4/). Preliminary VRAM usage is around 10 GB, with faster inference. We will keep experimenting with different weights and schedulers to find particularly well-performing combinations.
+
+ # FLUX.1 Kontext-dev X LoRA Experimentation
+
+ Highly experimental; more details to come.
+
+ - 6-8 steps
+ - <s>Euler, SGM Uniform (recommended)</s> Getting mixed results now; feel free to experiment and share.