|
<!--Copyright 2024 The HuggingFace Team. All rights reserved. |
|
|
|
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with |
|
the License. You may obtain a copy of the License at |
|
|
|
http://www.apache.org/licenses/LICENSE-2.0 |
|
|
|
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on |
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the |
|
specific language governing permissions and limitations under the License. |
|
--> |
|
|
|
# ControlNet |
|
|
|
[Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) (ControlNet) was written by Lvmin Zhang and Maneesh Agrawala.
|
|
|
This example is based on [the training example in the original ControlNet repository](https://github.com/lllyasviel/ControlNet/blob/main/docs/train.md). It trains a ControlNet to fill circles using a [small synthetic dataset](https://huggingface.co/datasets/fusing/fill50k).
|
|
|
## Installing the dependencies
|
|
|
Before running the scripts, make sure to install the library's training dependencies.
|
|
|
<Tip warning={true}> |
|
|
|
To successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the installation up to date. We update the example scripts frequently and install example-specific requirements.
|
|
|
</Tip> |
|
|
|
To do this, execute the following steps in a new virtual environment:
|
|
|
```bash |
|
git clone https://github.com/huggingface/diffusers |
|
cd diffusers |
|
pip install -e . |
|
``` |
|
|
|
Then navigate to the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/controlnet):
|
|
|
```bash |
|
cd examples/controlnet |
|
``` |
|
|
|
Now run:
|
|
|
```bash |
|
pip install -r requirements.txt |
|
``` |
|
|
|
And initialize an [๐คAccelerate](https://github.com/huggingface/accelerate/) environment with:
|
|
|
```bash |
|
accelerate config |
|
``` |
|
|
|
Or for a default ๐คAccelerate configuration without answering questions about your environment:
|
|
|
```bash |
|
accelerate config default |
|
``` |
|
|
|
Or if your environment doesn't support an interactive shell, like a notebook, you can use:
|
|
|
```python |
|
from accelerate.utils import write_basic_config |
|
|
|
write_basic_config() |
|
``` |
|
|
|
## Circle filling dataset
|
|
|
|
The original dataset is hosted in the ControlNet [repo](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip), but we re-uploaded it [here](https://huggingface.co/datasets/fusing/fill50k) so that it is compatible with ๐ค Datasets and the training script can handle the data loading.
|
|
|
Our training examples use [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) because that is what the original set of ControlNet models was trained on. However, ControlNet can be trained to augment any compatible Stable Diffusion model, such as [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) or [`stabilityai/stable-diffusion-2-1`](https://huggingface.co/stabilityai/stable-diffusion-2-1); a sketch of swapping the base checkpoint follows below.
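
For example, a minimal sketch of pointing the run at a different base checkpoint (`stabilityai/stable-diffusion-2-1` is used here purely for illustration; the full set of flags is covered in the Training section below):

```bash
# Sketch: train a ControlNet on top of a different Stable Diffusion base model.
# Any compatible model id on the Hub (or a local path) can be used; you may want to
# match --resolution to the base model's native resolution (e.g. 768 for stable-diffusion-2-1).
export MODEL_DIR="stabilityai/stable-diffusion-2-1"
export OUTPUT_DIR="path to save model"

accelerate launch train_controlnet.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --resolution=512 \
 --learning_rate=1e-5 \
 --train_batch_size=4
```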
|
|
|
To use your own dataset, take a look at the [Create a dataset for training](create_dataset) guide; a sketch of the relevant flags is shown below.
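
As a rough sketch (not part of the linked guide), the training script exposes flags for pointing at a local dataset and mapping its column names; check `python train_controlnet.py --help` to confirm the exact flag names for your version of the script. The folder path and column names below are hypothetical:

```bash
# Hypothetical layout: a local folder readable by ๐ค Datasets whose columns are named
# "image", "conditioning_image", and "text".
export MODEL_DIR="runwayml/stable-diffusion-v1-5"
export OUTPUT_DIR="path to save model"

accelerate launch train_controlnet.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --train_data_dir="path/to/your/dataset" \
 --image_column="image" \
 --conditioning_image_column="conditioning_image" \
 --caption_column="text" \
 --resolution=512 \
 --learning_rate=1e-5 \
 --train_batch_size=4
```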
|
|
|
## Training
|
|
|
Download the following images to condition our training with:
|
|
|
```sh |
|
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png |
|
|
|
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png |
|
``` |
|
|
|
Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to a directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument.
|
|
|
The training script creates and saves a `diffusion_pytorch_model.bin` file in your repository.
|
|
|
```bash |
|
export MODEL_DIR="runwayml/stable-diffusion-v1-5" |
|
export OUTPUT_DIR="path to save model" |
|
|
|
accelerate launch train_controlnet.py \ |
|
--pretrained_model_name_or_path=$MODEL_DIR \ |
|
--output_dir=$OUTPUT_DIR \ |
|
--dataset_name=fusing/fill50k \ |
|
--resolution=512 \ |
|
--learning_rate=1e-5 \ |
|
--validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ |
|
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ |
|
--train_batch_size=4 \ |
|
--push_to_hub |
|
``` |
|
|
|
This default configuration requires ~38GB VRAM.
|
|
|
By default, the training script logs the outputs to TensorBoard. Pass `--report_to wandb` to use Weights & Biases instead; an example is shown below.
|
|
|
Gradient accumulation with a smaller batch size can be used to reduce the training requirements to ~20 GB VRAM.
|
|
|
```bash |
|
export MODEL_DIR="runwayml/stable-diffusion-v1-5" |
|
export OUTPUT_DIR="path to save model" |
|
|
|
accelerate launch train_controlnet.py \ |
|
--pretrained_model_name_or_path=$MODEL_DIR \ |
|
--output_dir=$OUTPUT_DIR \ |
|
--dataset_name=fusing/fill50k \ |
|
--resolution=512 \ |
|
--learning_rate=1e-5 \ |
|
--validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ |
|
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ |
|
--train_batch_size=1 \ |
|
--gradient_accumulation_steps=4 \ |
|
--push_to_hub |
|
``` |
|
|
|
## Training with multiple GPUs
|
|
|
`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch) for running distributed training with `accelerate`. Here is an example command:
|
|
|
```bash |
|
export MODEL_DIR="runwayml/stable-diffusion-v1-5" |
|
export OUTPUT_DIR="path to save model" |
|
|
|
accelerate launch --mixed_precision="fp16" --multi_gpu train_controlnet.py \ |
|
--pretrained_model_name_or_path=$MODEL_DIR \ |
|
--output_dir=$OUTPUT_DIR \ |
|
--dataset_name=fusing/fill50k \ |
|
--resolution=512 \ |
|
--learning_rate=1e-5 \ |
|
--validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ |
|
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ |
|
--train_batch_size=4 \ |
|
--mixed_precision="fp16" \ |
|
--tracker_project_name="controlnet-demo" \ |
|
--report_to=wandb \ |
|
--push_to_hub |
|
``` |
|
|
|
## Example results
|
|
|
#### After 300 steps with batch size 8
|
|
|
| | |
|-------------------|:-------------------------:|
| | red circle with blue background |
|  |  |
| | cyan circle with brown floral background |
|  |  |
|
|
|
#### After 6000 steps with batch size 8
|
|
|
| | |
|-------------------|:-------------------------:|
| | red circle with blue background |
|  |  |
| | cyan circle with brown floral background |
|  |  |
|
|
|
## Training on a 16 GB GPU
|
|
|
Enable the following optimizations to train on a 16 GB GPU:
|
|
|
- Gradient checkpointing

- bitsandbytes's [8-bit optimizer](https://github.com/TimDettmers/bitsandbytes#requirements--installation) (take a look at the linked installation instructions if you don't have it installed)
|
|
|
Now you can launch the training script:
|
|
|
```bash |
|
export MODEL_DIR="runwayml/stable-diffusion-v1-5" |
|
export OUTPUT_DIR="path to save model" |
|
|
|
accelerate launch train_controlnet.py \ |
|
--pretrained_model_name_or_path=$MODEL_DIR \ |
|
--output_dir=$OUTPUT_DIR \ |
|
--dataset_name=fusing/fill50k \ |
|
--resolution=512 \ |
|
--learning_rate=1e-5 \ |
|
--validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ |
|
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ |
|
--train_batch_size=1 \ |
|
--gradient_accumulation_steps=4 \ |
|
--gradient_checkpointing \ |
|
--use_8bit_adam \ |
|
--push_to_hub |
|
``` |
|
|
|
## Training on a 12 GB GPU
|
|
|
Enable the following optimizations to train on a 12 GB GPU:
|
|
|
- Gradient checkpointing

- bitsandbytes's [8-bit optimizer](https://github.com/TimDettmers/bitsandbytes#requirements--installation) (take a look at the linked installation instructions if you don't have it installed)

- [xFormers](https://huggingface.co/docs/diffusers/training/optimization/xformers) (take a look at the linked installation instructions if you don't have it installed)

- Set gradients to `None`
|
|
|
```bash |
|
export MODEL_DIR="runwayml/stable-diffusion-v1-5" |
|
export OUTPUT_DIR="path to save model" |
|
|
|
accelerate launch train_controlnet.py \ |
|
--pretrained_model_name_or_path=$MODEL_DIR \ |
|
--output_dir=$OUTPUT_DIR \ |
|
--dataset_name=fusing/fill50k \ |
|
--resolution=512 \ |
|
--learning_rate=1e-5 \ |
|
--validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ |
|
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ |
|
--train_batch_size=1 \ |
|
--gradient_accumulation_steps=4 \ |
|
--gradient_checkpointing \ |
|
--use_8bit_adam \ |
|
--enable_xformers_memory_efficient_attention \ |
|
--set_grads_to_none \ |
|
--push_to_hub |
|
``` |
|
|
|
When using `enable_xformers_memory_efficient_attention`, please make sure to install `xformers` with `pip install xformers`.
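
For reference, the install step as a command (the exact wheel you need may depend on your PyTorch and CUDA versions):

```bash
# Install xFormers so that enable_xformers_memory_efficient_attention can be used.
pip install xformers
```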
|
|
|
## Training on an 8 GB GPU
|
|
|
We have not exhaustively tested DeepSpeed support for ControlNet. While the configuration does save memory, we have not confirmed that it trains successfully. You will very likely have to make changes to the configuration to get a successful training run.
|
|
|
Enable the following optimizations to train on an 8 GB GPU:
|
|
|
- Gradient checkpointing

- bitsandbytes's [8-bit optimizer](https://github.com/TimDettmers/bitsandbytes#requirements--installation) (take a look at the linked installation instructions if you don't have it installed)

- [xFormers](https://huggingface.co/docs/diffusers/training/optimization/xformers) (take a look at the linked installation instructions if you don't have it installed)

- Set gradients to `None`

- DeepSpeed stage 2 with parameter and optimizer offloading

- fp16 mixed precision
|
|
|
[DeepSpeed](https://www.deepspeed.ai/) can offload tensors from VRAM to either the CPU or NVME, which requires significantly more RAM (about 25 GB).
|
|
|
You'll have to configure your environment with `accelerate config` to enable DeepSpeed stage 2.
|
|
|
The configuration file should look like this:
|
|
|
```yaml |
|
compute_environment: LOCAL_MACHINE |
|
deepspeed_config: |
|
gradient_accumulation_steps: 4 |
|
offload_optimizer_device: cpu |
|
offload_param_device: cpu |
|
zero3_init_flag: false |
|
zero_stage: 2 |
|
distributed_type: DEEPSPEED |
|
``` |
|
|
|
<Tip>
|
|
|
See the [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more DeepSpeed configuration options.
|
|
|
</Tip>
|
|
|
Changing the default Adam optimizer to DeepSpeed's Adam, `deepspeed.ops.adam.DeepSpeedCPUAdam`, gives a substantial speedup, but it requires a CUDA toolchain with the same version as PyTorch. 8-bit optimizers do not seem to be compatible with DeepSpeed at the moment.
|
|
|
```bash |
|
export MODEL_DIR="runwayml/stable-diffusion-v1-5" |
|
export OUTPUT_DIR="path to save model" |
|
|
|
accelerate launch train_controlnet.py \ |
|
--pretrained_model_name_or_path=$MODEL_DIR \ |
|
--output_dir=$OUTPUT_DIR \ |
|
--dataset_name=fusing/fill50k \ |
|
--resolution=512 \ |
|
--validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ |
|
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ |
|
--train_batch_size=1 \ |
|
--gradient_accumulation_steps=4 \ |
|
--gradient_checkpointing \ |
|
--enable_xformers_memory_efficient_attention \ |
|
--set_grads_to_none \ |
|
--mixed_precision fp16 \ |
|
--push_to_hub |
|
``` |
|
|
|
## Inference
|
|
|
The trained model can be run with [`StableDiffusionControlNetPipeline`]. Set `base_model_path` and `controlnet_path` to the values that `--pretrained_model_name_or_path` and `--output_dir` were set to in the training script, respectively.
|
|
|
```py |
|
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler |
|
from diffusers.utils import load_image |
|
import torch |
|
|
|
base_model_path = "path to model" |
|
controlnet_path = "path to controlnet" |
|
|
|
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16) |
|
pipe = StableDiffusionControlNetPipeline.from_pretrained( |
|
base_model_path, controlnet=controlnet, torch_dtype=torch.float16 |
|
) |
|
|
|
# speed up the diffusion process with a faster scheduler and memory optimizations
|
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) |
|
# remove the following line if xformers is not installed
|
pipe.enable_xformers_memory_efficient_attention() |
|
|
|
pipe.enable_model_cpu_offload() |
|
|
|
control_image = load_image("./conditioning_image_1.png") |
|
prompt = "pale golden rod circle with old lace background" |
|
|
|
# generate the image
|
generator = torch.manual_seed(0) |
|
image = pipe(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] |
|
|
|
image.save("./output.png") |
|
``` |
|
|