from accelerate import Accelerator
from diffusers import DiffusionPipeline

# Load the pipeline with the same arguments (model, revision) that were used for training
model_id = "CompVis/stable-diffusion-v1-4"
pipeline = DiffusionPipeline.from_pretrained(model_id)

accelerator = Accelerator()

# Use text_encoder if `--train_text_encoder` was used for the initial training
unet, text_encoder = accelerator.prepare(pipeline.unet, pipeline.text_encoder)

# Restore state from a checkpoint path. You have to use the absolute path here.
accelerator.load_state("/sddata/dreambooth/daruma-v2-1/checkpoint-100")

# Rebuild the pipeline with the unwrapped models (assignment to .unet and .text_encoder should work too)
pipeline = DiffusionPipeline.from_pretrained(
    model_id,
    unet=accelerator.unwrap_model(unet),
    text_encoder=accelerator.unwrap_model(text_encoder),
)

# Perform inference, or save, or push to the hub
pipeline.save_pretrained("dreambooth-pipeline")
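From here you can also run inference with the restored pipeline before (or instead of) saving it. Below is a minimal sketch continuing from the pipeline above; the prompt, device, and output filename are illustrative and assume a CUDA GPU is available:

# Illustrative inference with the restored pipeline (assumes a CUDA GPU)
pipeline = pipeline.to("cuda")
image = pipeline("a photo of sks dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks-dog.png")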
Optimizations for different GPU sizes
Depending on your hardware, there are a few different ways to optimize DreamBooth on GPUs from 16GB to just 8GB!
xFormers
xFormers is a toolbox for optimizing Transformers, and it includes a memory-efficient attention mechanism that is used in 🧨 Diffusers. You’ll need to install xFormers and then add the following argument to your training script:
--enable_xformers_memory_efficient_attention
xFormers is not available in Flax.
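Outside of the training script, the same memory-efficient attention can be switched on for a loaded pipeline, which is also a quick way to check that your xFormers installation works. This is only a sketch; the model ID is just an example, and xFormers must be installed in the same environment (pip install xformers):

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
# Fails with an import error if xFormers is not installed correctly
pipeline.enable_xformers_memory_efficient_attention()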
Set gradients to none
Another way you can lower your memory footprint is to set the gradients to None instead of zero. However, this may change certain behaviors, so if you run into any issues, try removing this argument. Add the following argument to your training script to set the gradients to None:
--set_grads_to_none
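For context, this flag corresponds to PyTorch’s set_to_none option when clearing gradients: the gradient tensors are freed rather than filled with zeros, which saves a small amount of memory. A minimal sketch of the equivalent call in a plain training loop (the model, data, and optimizer here are placeholders):

import torch

model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

loss = model(torch.randn(4, 8)).sum()
loss.backward()
optimizer.step()
# Free the gradient tensors instead of zero-filling them
optimizer.zero_grad(set_to_none=True)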
16GB GPU
With the help of gradient checkpointing and the bitsandbytes 8-bit optimizer, it’s possible to train DreamBooth on a 16GB GPU. Make sure you have bitsandbytes installed:
pip install bitsandbytes
Then pass the --use_8bit_adam option to the training script:
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="./dog"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2 --gradient_checkpointing \
  --use_8bit_adam \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
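For reference, --use_8bit_adam and --gradient_checkpointing roughly correspond to two changes inside the training loop: the standard AdamW optimizer is swapped for the bitsandbytes 8-bit variant, and activation checkpointing is enabled on the UNet. Below is a minimal sketch of those two pieces; the learning rate and other arguments are placeholders rather than the script’s exact internals:

import bitsandbytes as bnb
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)
# Recompute activations during the backward pass to trade compute for memory
unet.enable_gradient_checkpointing()
# 8-bit AdamW keeps optimizer state in 8 bits, roughly quartering optimizer memory
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=5e-6, weight_decay=1e-2)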
12GB GPU
To run DreamBooth on a 12GB GPU, you’ll need to enable gradient checkpointing, the 8-bit optimizer, xFormers, and set the gradients to None:
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="./dog"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2 --gradient_checkpointing \
  --use_8bit_adam \
  --enable_xformers_memory_efficient_attention \
  --set_grads_to_none \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800