You need a webcam to run this demo. 🤗

You need CUDA and Python 3.10, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU.

`TIMEOUT`: limit user session timeout
`SAFETY_CHECKER`: disable it if you want the NSFW filter off
`MAX_QUEUE_SIZE`: limit the number of users on the current app instance
`TORCH_COMPILE`: enable if you want to use torch compile for faster inference; works well on A100 GPUs

See "Setting environment variables" at the end of this README for how to pass these when launching an app.

## Install

```bash
python -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
```
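
If your default `python` is not the required 3.10, here is a minimal sketch of the same setup with an explicit interpreter (assuming `python3.10` is available on your PATH):

```bash
# create the virtual environment with an explicit Python 3.10 interpreter
python3.10 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
```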

# LCM

### Image to Image

```bash
uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
```
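
The app is served on the port passed above (7860) on all interfaces; assuming you run it on your own machine, you can open it in a browser:

```bash
# open the demo locally (use `open` on macOS, or just paste the URL into a browser)
xdg-open http://localhost:7860
```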

### Image to Image ControlNet Canny

Based on the pipeline from [taabata](https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy)

```bash
uvicorn "app-controlnet:app" --host 0.0.0.0 --port 7860 --reload
```

### Text to Image

```bash
uvicorn "app-txt2img:app" --host 0.0.0.0 --port 7860 --reload
```

# LCM + LoRA

Using LCM-LoRA gives the pipeline the superpower of doing inference in as little as 4 steps. [Learn more here](https://huggingface.co/blog/lcm_lora) or read the [technical report](https://huggingface.co/papers/2311.05556).

### Image to Image ControlNet Canny LoRA

```bash
uvicorn "app-controlnetlora:app" --host 0.0.0.0 --port 7860 --reload
```

### Text to Image

```bash
uvicorn "app-txt2imglora:app" --host 0.0.0.0 --port 7860 --reload
```
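
The `--reload` flag restarts the server whenever the code changes, which is handy while developing; for a plain run you can drop it. A sketch, using the text-to-image LoRA app as the example:

```bash
# run without auto-reload for a non-development session
uvicorn "app-txt2imglora:app" --host 0.0.0.0 --port 7860
```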

### Setting environment variables

```bash
TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
```
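
`TORCH_COMPILE` is not shown above; a minimal sketch that also enables it, assuming it accepts a boolean value the same way `SAFETY_CHECKER` does:

```bash
# assumed invocation: TORCH_COMPILE=True is an assumption based on the variable list above
TORCH_COMPILE=True TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 uvicorn "app-txt2img:app" --host 0.0.0.0 --port 7860 --reload
```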