Below, we also provide some directions on how to generate the embeddings.
Note that the quality of the outputs generated with this pipeline depends on how good the source_embeds and target_embeds are. Please refer to this discussion for some suggestions on the topic.
Available Pipelines: |
| Pipeline | Tasks | Demo |
|---|---|---|
| StableDiffusionPix2PixZeroPipeline | Text-Based Image Editing | 🤗 Space |
Usage example |
Based on an image generated with the input prompt |
import requests
import torch

from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline


def download(embedding_url, local_filepath):
    r = requests.get(embedding_url)
    with open(local_filepath, "wb") as f:
        f.write(r.content)


model_ckpt = "CompVis/stable-diffusion-v1-4"
pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    model_ckpt, conditions_input_image=False, torch_dtype=torch.float16
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.to("cuda")

prompt = "a high resolution painting of a cat in the style of van gogh"

# Pre-computed embeddings for the source ("cat") and target ("dog") concepts.
src_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/cat.pt"
target_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/dog.pt"

for url in [src_embs_url, target_embs_url]:
    download(url, url.split("/")[-1])

src_embeds = torch.load(src_embs_url.split("/")[-1])
target_embeds = torch.load(target_embs_url.split("/")[-1])

images = pipeline(
    prompt,
    source_embeds=src_embeds,
    target_embeds=target_embeds,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,
).images
images[0].save("edited_image_dog.png")
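As mentioned earlier, you can also create the source_embeds and target_embeds yourself instead of downloading the pre-computed ones. The snippet below is a minimal sketch of one way to do this with the pipeline's own CLIP text encoder: encode a small bank of sentences that mention each concept and average their embeddings. The sentence lists here are illustrative placeholders (in practice, a much larger set of automatically generated captions works better), and the snippet assumes the pipeline loaded in the example above:

import torch

# Illustrative sentence banks for the source and target concepts; in practice
# these would be generated automatically and be much larger.
source_sentences = ["a photo of a cat", "a painting of a cat", "a cat sitting on a sofa"]
target_sentences = ["a photo of a dog", "a painting of a dog", "a dog sitting on a sofa"]


@torch.no_grad()
def embed_sentences(pipe, sentences):
    # Tokenize and encode with the pipeline's CLIP text encoder, then average
    # over the sentence bank to obtain a single (1, seq_len, dim) embedding.
    inputs = pipe.tokenizer(
        sentences,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    embeds = pipe.text_encoder(inputs.input_ids.to(pipe.device))[0]
    return embeds.mean(dim=0, keepdim=True)


source_embeds = embed_sentences(pipeline, source_sentences)
target_embeds = embed_sentences(pipeline, target_sentences)

The resulting tensors can then be passed as source_embeds and target_embeds in the pipeline call above.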
Based on an input image |
When the pipeline is conditioned on an input image, we first obtain inverted noise from it using a DDIMInverseScheduler with the help of a generated caption. The inverted noise is then used to start the generation process.
First, let’s load our pipeline: |
import torch
from transformers import BlipForConditionalGeneration, BlipProcessor

from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline

# BLIP is used to automatically caption the input image.
captioner_id = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(captioner_id)
model = BlipForConditionalGeneration.from_pretrained(captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True)

sd_model_ckpt = "CompVis/stable-diffusion-v1-4"
pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    sd_model_ckpt,
    caption_generator=model,
    caption_processor=processor,
    torch_dtype=torch.float16,
    safety_checker=None,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()
Then, we load an input image for conditioning and obtain a suitable caption for it: |
import requests |
from PIL import Image |
img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" |
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512)) |
caption = pipeline.generate_caption(raw_image) |
Then we employ the generated caption and the input image to get the inverted noise: |
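Below is a minimal sketch of how this step and the final edit could look. It assumes the pipeline's invert() helper returns the inverted latents and that source_embeds and target_embeds have been prepared as in the earlier examples, so treat the exact arguments as illustrative rather than definitive:

import torch

generator = torch.manual_seed(0)

# Invert the input image into initial noise, guided by the generated caption.
inv_latents = pipeline.invert(caption, image=raw_image, generator=generator).latents

# Start generation from the inverted latents and steer the edit from the source
# concept towards the target concept.
image = pipeline(
    caption,
    source_embeds=source_embeds,
    target_embeds=target_embeds,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,
    generator=generator,
    latents=inv_latents,
    negative_prompt=caption,
).images[0]
image.save("edited_image.png")

Passing latents=inv_latents makes the generation start from the inverted noise of the input image, while the source/target embeddings steer the edit (for example, from a cat towards a dog) under cross-attention guidance.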