pip install -U -r requirements.txt
Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the pretrained_model_name_or_path argument.
Now you can launch the training script with the following command:
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" |
export INSTANCE_DIR="./dog" |
export OUTPUT_DIR="path-to-save-model" |
python train_dreambooth_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=400
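Once the run finishes, you can sanity-check the new weights by loading them back with the Flax pipeline. The snippet below is only a minimal sketch, assuming the checkpoint was written to the path-to-save-model directory used above and that your instance prompt used the sks identifier:
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

# Load the DreamBooth checkpoint saved by the training script (the OUTPUT_DIR above).
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path-to-save-model", dtype=jax.numpy.bfloat16)

# One prompt per device; shard the inputs and replicate the params for pmapped inference.
prompts = jax.device_count() * ["a photo of sks dog in a bucket"]
prompt_ids = shard(pipeline.prepare_inputs(prompts))
params = replicate(params)
rng = jax.random.split(jax.random.PRNGKey(0), jax.device_count())

images = pipeline(prompt_ids, params, rng, num_inference_steps=50, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((-1,) + images.shape[-3:])))
images[0].save("sks-dog.png")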
Finetuning with prior-preserving loss
Prior preservation is used to avoid overfitting and language-drift (check out the paper to learn more if you’re interested). For prior preservation, you use other images of the same class as part of the training process. The nice thing is that you can generate those images using the Stable Diffusion model itself! The training script will save the generated images to a local path you specify.
The authors recommend generating num_epochs * num_samples images for prior preservation. In most cases, 200-300 images work well.
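In other words, before DreamBooth training starts, the script fills the class image folder by sampling the base model with the class prompt. The following is only a simplified Python sketch of that idea (the paths, prompt, and count mirror the flags used below), not the training script's actual code:
from pathlib import Path
import torch
from diffusers import StableDiffusionPipeline

# Same location you pass via --class_data_dir.
class_dir = Path("path_to_class_images")
class_dir.mkdir(parents=True, exist_ok=True)

# Generate the prior-preservation images with the base model itself.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
for i in range(200):  # roughly matches --num_class_images=200
    image = pipe("a photo of dog").images[0]  # the --class_prompt
    image.save(class_dir / f"class_{i}.png")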
PyTorch
export MODEL_NAME="CompVis/stable-diffusion-v1-4" |
export INSTANCE_DIR="./dog" |
export CLASS_DIR="path_to_class_images" |
export OUTPUT_DIR="path_to_saved_model" |
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
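The directory written to --output_dir is a regular Diffusers checkpoint, so a quick way to try the finetuned model is to load it back with StableDiffusionPipeline. This is an illustrative sketch, assuming the path and prompt from the command above:
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth'd model saved above.
pipe = StableDiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16).to("cuda")

prompt = "a photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")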
JAX
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" |
export INSTANCE_DIR="./dog" |
export CLASS_DIR="path-to-class-images" |
export OUTPUT_DIR="path-to-save-model" |
python train_dreambooth_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --num_class_images=200 \
  --max_train_steps=800
Finetuning the text encoder and UNet
The script also allows you to finetune the text_encoder along with the unet. In our experiments (check out the Training Stable Diffusion with DreamBooth using 🧨 Diffusers post for more details), this yields much better results, especially when generating images of faces.
Training the text encoder requires additional memory and it won’t fit on a 16GB GPU. You’ll need at least 24GB VRAM to use this option.
Pass the --train_text_encoder argument to the training script to enable finetuning the text_encoder and unet:
PyTorch
export MODEL_NAME="CompVis/stable-diffusion-v1-4" |
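# Illustrative continuation only: it mirrors the earlier accelerate launch example
# with --train_text_encoder added; the directories and hyperparameter values are
# placeholders rather than this guide's own settings.
export INSTANCE_DIR="./dog"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_text_encoder \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=800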