tensor will be generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
cross_attention_guidance_amount (float, defaults to 0.1) —
Amount of guidance needed from the reference cross-attention maps.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between
PIL: PIL.Image.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a
plain tuple.
callback (Callable, optional) —
A function that will be called every callback_steps steps during inference. The function will be
called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). A minimal usage sketch appears after this parameter list.
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function will be called. If not specified, the callback will be
called at every step.
lambda_auto_corr (float, optional, defaults to 20.0) —
Lambda parameter that controls the auto-correlation regularization.
lambda_kl (float, optional, defaults to 20.0) —
Lambda parameter that controls the Kullback–Leibler divergence regularization.
num_reg_steps (int, optional, defaults to 5) —
Number of regularization loss steps.
num_auto_corr_rolls (int, optional, defaults to 5) —
Number of auto-correlation rolls within each regularization step.
Function used to generate inverted latents given a prompt and image.
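The snippet below is a minimal sketch of how the callback and regularization arguments can be combined in a single invert call. It assumes a pipeline, caption, and raw_image prepared as in the example that follows; the log_progress helper and the chosen parameter values are illustrative and not part of the official example.
>>> def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
...     # lightweight progress logging invoked by the pipeline during inversion
...     print(f"inversion step {step} (timestep {timestep}), latents shape: {tuple(latents.shape)}")
>>> inv_latents = pipeline.invert(
...     caption,
...     image=raw_image,
...     callback=log_progress,
...     callback_steps=10,  # call log_progress every 10 inference steps
...     lambda_auto_corr=20.0,  # weight of the auto-correlation regularization
...     lambda_kl=20.0,  # weight of the Kullback–Leibler regularization
...     num_reg_steps=5,  # regularization loss steps per timestep
...     num_auto_corr_rolls=5,  # auto-correlation rolls within each regularization step
... ).latents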
Examples:
>>> import torch
>>> from transformers import BlipForConditionalGeneration, BlipProcessor
>>> from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline
>>> import requests
>>> from PIL import Image
>>> captioner_id = "Salesforce/blip-image-captioning-base"
>>> processor = BlipProcessor.from_pretrained(captioner_id)
>>> model = BlipForConditionalGeneration.from_pretrained(
... captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True
... )
>>> sd_model_ckpt = "CompVis/stable-diffusion-v1-4"
>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
... sd_model_ckpt,
... caption_generator=model,
... caption_processor=processor,
... torch_dtype=torch.float16,
... safety_checker=None,
... )
>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.enable_model_cpu_offload()
>>> img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png"
>>> raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512))
>>> # generate caption
>>> caption = pipeline.generate_caption(raw_image)
>>> # "a photography of a cat with flowers and dai dai daie - daie - daie kasaii"
>>> inv_latents = pipeline.invert(caption, image=raw_image).latents
>>> # we need to generate source and target embeds
>>> source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"]
>>> target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"]
>>> source_embeds = pipeline.get_embeds(source_prompts)
>>> target_embeds = pipeline.get_embeds(target_prompts)
>>> # the latents can then be used to edit a real image
>>> image = pipeline(