--class_prompt="a photo of dog" \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=1 --gradient_checkpointing \
--use_8bit_adam \
--enable_xformers_memory_efficient_attention \
--set_grads_to_none \
--learning_rate=2e-6 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--num_class_images=200 \
--max_train_steps=800
8 GB GPU
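Several of the flags above trade compute for memory. The following is a minimal, framework-free sketch of what --gradient_accumulation_steps does: gradients from several micro-batches are summed before a single optimizer step, so the effective batch size is train_batch_size multiplied by the accumulation steps (and the number of GPUs). The loop below is illustrative only, not the training script's actual code.

```python
def effective_batch_size(train_batch_size, grad_accum_steps, num_gpus=1):
    # The batch size the model "sees" per optimizer step.
    return train_batch_size * grad_accum_steps * num_gpus

def train_with_accumulation(losses_per_microbatch, grad_accum_steps):
    """Toy loop: accumulate per-micro-batch 'gradients' (here, just loss
    values) and count how many optimizer steps actually happen."""
    accumulated = 0.0
    optimizer_steps = 0
    for i, loss in enumerate(losses_per_microbatch, start=1):
        accumulated += loss / grad_accum_steps  # scale so the sum is a mean
        if i % grad_accum_steps == 0:
            optimizer_steps += 1  # here a real loop would call optimizer.step()
            accumulated = 0.0     # ...and optimizer.zero_grad()
    return optimizer_steps

print(effective_batch_size(1, 1))                              # prints: 1
print(train_with_accumulation([0.5] * 8, grad_accum_steps=4))  # prints: 2
```

With the settings above (batch size 1, accumulation 1) the effective batch size stays at 1; raising the accumulation steps increases it without using more VRAM per micro-batch.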
For 8GB GPUs, you'll need the help of DeepSpeed to offload some
tensors from VRAM to either the CPU or NVMe, enabling training with less GPU memory.
Run the following command to configure your 🤗 Accelerate environment: |
accelerate config |
During configuration, confirm that you want to use DeepSpeed. Now it’s possible to train on under 8GB VRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM, about 25 GB. See the DeepSpeed documentation for more configuration options. |
You should also replace the default Adam optimizer with DeepSpeed's CPU-optimized implementation,
deepspeed.ops.adam.DeepSpeedCPUAdam, for a substantial speedup. Enabling DeepSpeedCPUAdam requires your system's CUDA toolchain version to match the one PyTorch was built with.
8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. |
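A rough back-of-envelope calculation shows why offloading the optimizer state helps so much. Adam keeps two fp32 moment buffers per parameter, and fp16 mixed precision additionally keeps an fp32 master copy of the weights. The parameter count below (~0.86B for the Stable Diffusion v1 UNet) is an assumption for illustration, and this accounts for only part of the ~25 GB of system RAM mentioned above (gradients and workspace memory add more).

```python
def adam_state_bytes(num_params):
    # exp_avg + exp_avg_sq moment buffers, each fp32 (4 bytes per parameter)
    return num_params * 2 * 4

def fp32_master_bytes(num_params):
    # fp32 master weights kept alongside the fp16 model in mixed precision
    return num_params * 4

unet_params = 860_000_000  # assumed parameter count, not measured here
gb = 1024 ** 3
offloaded = (adam_state_bytes(unet_params) + fp32_master_bytes(unet_params)) / gb
print(f"~{offloaded:.1f} GB moved from VRAM to system RAM")
# prints: ~9.6 GB moved from VRAM to system RAM
```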
Launch training with the following command: |
export MODEL_NAME="CompVis/stable-diffusion-v1-4" |
export INSTANCE_DIR="./dog" |
export CLASS_DIR="path_to_class_images" |
export OUTPUT_DIR="path_to_saved_model" |
accelerate launch train_dreambooth.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--class_data_dir=$CLASS_DIR \
--output_dir=$OUTPUT_DIR \
--with_prior_preservation --prior_loss_weight=1.0 \
--instance_prompt="a photo of sks dog" \
--class_prompt="a photo of dog" \
--resolution=512 \
--train_batch_size=1 \
--sample_batch_size=1 \
--gradient_accumulation_steps=1 --gradient_checkpointing \
--learning_rate=5e-6 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--num_class_images=200 \
--max_train_steps=800 \
--mixed_precision=fp16 |
Inference |
Once you have trained a model, specify the path to where the model is saved, and use it for inference in the StableDiffusionPipeline. Make sure your prompts include the special identifier used during training (sks in the previous examples). |
Run inference with the following code:
from diffusers import DiffusionPipeline |
import torch |
model_id = "path_to_saved_model" |
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") |
prompt = "A photo of sks dog in a bucket" |
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] |
image.save("dog-bucket.png") |
If you have accelerate>=0.16.0 installed, you may also run inference from any of the intermediate checkpoints saved during training.
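When resuming from or sampling an intermediate checkpoint, a small helper for locating the most recent one can be handy. The folder naming below (checkpoint-&lt;step&gt; under the output directory) is an assumption based on common defaults of the training scripts; treat this as a sketch, not the scripts' own logic.

```python
from pathlib import Path

def latest_checkpoint(output_dir):
    """Return the Path of the highest-numbered checkpoint-<step> folder in
    output_dir, or None if no checkpoints exist yet."""
    ckpts = sorted(
        Path(output_dir).glob("checkpoint-*"),
        key=lambda p: int(p.name.split("-")[-1]),  # numeric sort by step
    )
    return ckpts[-1] if ckpts else None
```

Sorting by the numeric step matters: a plain lexicographic sort would rank "checkpoint-50" after "checkpoint-400".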
Zero-shot Image-to-Image Translation |
Overview |
This pipeline is based on the paper Zero-shot Image-to-Image Translation (pix2pix-zero).
The abstract of the paper is the following:
Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse and high-quality images. However, it is still challenging to directly apply these models for editing real images for two reasons. First, it is hard for users to come up with a perfect text prompt that accurately describes every visual detail in the input image. Second, while existing models can introduce desirable changes in certain regions, they often dramatically alter the input content and introduce unexpected changes in unwanted regions. In this work, we propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting. We first automatically discover editing directions that reflect desired edits in the text embedding space. To preserve the general content structure after editing, we further propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process. In addition, our method does not need additional training for these edits and can directly use the existing pre-trained text-to-image diffusion model. We conduct extensive experiments and show that our method outperforms existing and concurrent works for both real and synthetic image editing. |
Resources: |
Project Page. |
Paper. |
Original Code. |
Demo. |
Tips |
The pipeline can be conditioned on real input images. Check out the code examples below to know more. |
The pipeline exposes two arguments namely source_embeds and target_embeds |
that let you control the direction of the semantic edits in the final image to be generated. Let’s say, |
you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect |
this in the pipeline, you simply have to set the embeddings related to the phrases including “cat” to |
source_embeds and “dog” to target_embeds. Refer to the code example below for more details. |
When you're using this pipeline from a prompt, specify the source concept in the prompt. Taking
the above example, a valid input prompt would be: "a high resolution painting of a cat in the style of van Gogh".
If you wanted to reverse the direction in the example above, i.e., "dog -> cat", then it's recommended to:
Swap the source_embeds and target_embeds.
Change the input prompt to include "dog".
To learn more about how the source and target embeddings are generated, refer to the original paper.
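The edit-direction idea from the tips above can be illustrated with a toy example: the direction is the difference between the mean embedding of sentences containing the target concept and the mean embedding of sentences containing the source concept. The real pipeline computes these with a text encoder; here the embeddings are stand-in lists of floats, so this is purely a sketch of the arithmetic.

```python
def mean_embedding(embeds):
    # Average a list of same-length embedding vectors element-wise.
    n = len(embeds)
    return [sum(e[i] for e in embeds) / n for i in range(len(embeds[0]))]

def edit_direction(source_embeds, target_embeds):
    # Direction in embedding space pointing from source to target concept.
    src = mean_embedding(source_embeds)
    tgt = mean_embedding(target_embeds)
    return [t - s for s, t in zip(src, tgt)]

cat = [[1.0, 0.0], [3.0, 0.0]]   # toy "cat" sentence embeddings
dog = [[2.0, 2.0], [4.0, 2.0]]   # toy "dog" sentence embeddings
print(edit_direction(cat, dog))  # prints: [1.0, 2.0]
```

Swapping the two arguments negates the direction, which is why reversing an edit ("dog -> cat") amounts to swapping source_embeds and target_embeds.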