---
title: FLUXllama
emoji: 🦀🏆🦀
colorFrom: gray
colorTo: pink
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
pinned: false
license: mit
short_description: mcp_server & FLUX 4-bit Quantization (just 8GB VRAM)
---
## English Description
### FluxLLama - NF4 Quantized FLUX.1-dev Image Generator
FluxLLama is an optimized implementation of the FLUX.1-dev model using 4-bit quantization (NF4) for efficient GPU memory usage. This application allows you to generate high-quality images from text prompts while using significantly less VRAM than the full-precision model.
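The README does not reproduce the loading code, but a minimal sketch of what NF4 loading can look like with diffusers and bitsandbytes is shown below; the model ID, dtypes, and offloading call are assumptions for illustration, not a copy of the Space's app.py.
```python
# Hedged sketch: load FLUX.1-dev with its transformer quantized to NF4.
# Assumes a recent diffusers release with bitsandbytes installed.
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # Normal Float 4-bit, as described above
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The DiT transformer is the main VRAM consumer, so it is the part worth quantizing.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps peak usage near the single-digit-GB figures quoted here
```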
#### Key Features:
- **4-bit NF4 Quantization**: Reduces the model's VRAM requirement from roughly 24GB (full precision) to about 6GB
- **Text-to-Image Generation**: Create images from detailed text descriptions
- **Image-to-Image Generation**: Transform existing images based on text prompts
- **Customizable Parameters**: Control image dimensions, guidance scale, inference steps, and seed
- **Efficient Memory Usage**: Uses bitsandbytes for optimized 4-bit operations
- **Web Interface**: Easy-to-use Gradio interface for image generation
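The UI code itself is not part of this README; a trimmed-down Gradio interface in the spirit of the feature list might look like the sketch below. The layout, labels, and the `mcp_server=True` launch flag (which matches the `mcp_server` note in the metadata and requires the `gradio[mcp]` extra) are assumptions rather than the Space's actual app.py, and `pipe` refers to the loading sketch above.
```python
# Hypothetical, trimmed-down Gradio UI around the NF4-quantized FluxPipeline (`pipe`).
import gradio as gr
import torch

def generate(prompt, width, height, guidance_scale, steps, seed):
    # Negative seed = let the pipeline pick a random one.
    generator = torch.Generator("cpu").manual_seed(int(seed)) if seed >= 0 else None
    return pipe(
        prompt=prompt,
        width=int(width),
        height=int(height),
        guidance_scale=float(guidance_scale),
        num_inference_steps=int(steps),
        generator=generator,
    ).images[0]

demo = gr.Interface(
    fn=generate,
    inputs=[
        gr.Textbox(label="Prompt"),
        gr.Slider(128, 2048, value=1024, step=64, label="Width"),
        gr.Slider(128, 2048, value=1024, step=64, label="Height"),
        gr.Slider(1.0, 5.0, value=3.5, step=0.1, label="Guidance scale"),
        gr.Slider(1, 30, value=20, step=1, label="Inference steps"),
        gr.Number(value=-1, label="Seed (-1 = random)"),
    ],
    outputs=gr.Image(label="Result"),
)

if __name__ == "__main__":
    demo.launch(mcp_server=True)  # assumption: also exposes generate() as an MCP tool
```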
#### Technical Details:
- T5-XXL encoder for text understanding
- CLIP encoder for additional text conditioning
- Custom NF4 (Normal Float 4-bit) quantization implementation
- Supports resolutions from 128x128 to 2048x2048
- Adjustable inference steps (1-30) for quality/speed tradeoff
- Guidance scale control (1.0-5.0) for prompt adherence
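These ranges map directly onto the standard diffusers text-to-image arguments. A hedged example, reusing the NF4-quantized `pipe` from the loading sketch above (the prompt and concrete values are illustrative only):
```python
# Illustrative text-to-image call; parameter names are the standard FluxPipeline
# arguments, while the values simply demonstrate the ranges listed above.
import torch

generator = torch.Generator("cpu").manual_seed(42)  # fixed seed => reproducible output
image = pipe(
    prompt="a detailed watercolor of a lighthouse at sunset, soft light",
    width=1024, height=1024,          # anywhere from 128x128 up to 2048x2048
    guidance_scale=3.5,               # 1.0-5.0; higher follows the prompt more closely
    num_inference_steps=20,           # 1-30; more steps = better quality, slower
    generator=generator,
).images[0]
image.save("flux_nf4_t2i.png")
```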
#### How to Use:
1. Enter your text prompt describing the desired image
2. Adjust width and height for your preferred resolution
3. Set guidance scale (higher = closer to prompt)
4. Choose number of inference steps (more = better quality, slower)
5. Optionally set a seed for reproducible results
6. For image-to-image mode, upload an initial image and adjust the noising strength (see the sketch after this list)
7. Click "Generate" to create your image
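For step 6, diffusers ships a separate image-to-image pipeline that can share the already-quantized components. The sketch below is an assumption about how such a mode can be wired, not the Space's exact implementation; `input.png` and the strength value are placeholders.
```python
# Hedged image-to-image sketch reusing the NF4-quantized components from `pipe`.
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

img2img = FluxImg2ImgPipeline.from_pipe(pipe)        # shares weights, no extra VRAM
init_image = load_image("input.png").resize((1024, 1024))

result = img2img(
    prompt="the same scene repainted as a moody oil painting",
    image=init_image,
    strength=0.6,                  # noising strength: higher drifts further from the input
    guidance_scale=3.5,
    num_inference_steps=20,
).images[0]
result.save("flux_nf4_i2i.png")
```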
---
## Korean Description
### FluxLLama - NF4 Quantized FLUX.1-dev Image Generator
FluxLLama is an optimized implementation of the FLUX.1-dev model that uses 4-bit NF4 quantization for efficient GPU memory usage. With this application you can generate high-quality images from text prompts while using far less VRAM than the full-precision model.
#### Key Features:
- **4-bit NF4 Quantization**: Reduces the model's VRAM requirement from ~24GB to ~6GB
- **Text-to-Image Generation**: Create images from detailed text descriptions
- **Image-to-Image Generation**: Transform existing images based on text prompts
- **Customizable Parameters**: Control image size, guidance scale, inference steps, and seed
- **Efficient Memory Usage**: Uses bitsandbytes for optimized 4-bit operations
- **Web Interface**: Easy-to-use Gradio interface for image generation
#### Technical Details:
- T5-XXL encoder for text understanding
- CLIP encoder for additional text conditioning
- Custom NF4 (Normal Float 4-bit) quantization implementation
- Supports resolutions from 128x128 to 2048x2048
- Adjustable inference steps (1-30) for the quality/speed tradeoff
- Guidance scale control (1.0-5.0) for prompt adherence
#### How to Use:
1. Enter a text prompt describing the desired image
2. Adjust width and height to your preferred resolution
3. Set the guidance scale (higher = closer to the prompt)
4. Choose the number of inference steps (more = better quality, slower)
5. Optionally set a seed for reproducible results
6. For image-to-image mode, upload an initial image and adjust the noising strength
7. Click "Generate" to create the image