Then, compute the averaged text embeddings for the source and target captions:

```py
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# `tokenizer` and `text_encoder` are the CLIP components of the Stable Diffusion
# checkpoint; each caption is embedded separately and the results are averaged.
def embed_captions(sentences, tokenizer, text_encoder, device="cuda"):
    with torch.no_grad():
        embeddings = []
        for sent in sentences:
            text_inputs = tokenizer(
                sent,
                padding="max_length",
                max_length=tokenizer.model_max_length,
                truncation=True,
                return_tensors="pt",
            )
            text_input_ids = text_inputs.input_ids
            prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0]
            embeddings.append(prompt_embeds)
        # Average the per-caption embeddings into a single (1, seq_len, dim) tensor.
        return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0)

source_embeddings = embed_captions(source_captions, tokenizer, text_encoder)
target_embeddings = embed_captions(target_captions, tokenizer, text_encoder)
```
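The snippet above assumes `source_captions` and `target_captions` are already defined as plain lists of strings describing the source and target concepts. As a purely illustrative sketch (these captions are placeholders, not values from this guide):

```py
# Hypothetical captions for a cat -> dog edit; substitute your own.
source_captions = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"]
target_captions = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"]
```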
And you’re done! Here is a Colab Notebook that you can use to interact with the entire process. |
Now, you can use these embeddings directly while calling the pipeline: |
```py
from diffusers import DDIMScheduler

# `pipeline` is the StableDiffusionPix2PixZeroPipeline and `prompt` the edit
# prompt, both set up in the earlier steps of this guide.
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)

images = pipeline(
    prompt,
    source_embeds=source_embeddings,
    target_embeds=target_embeddings,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,
).images
images[0].save("edited_image_dog.png")
```
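The edit direction is derived from the difference between `target_embeds` and `source_embeds`, while `cross_attention_guidance_amount` controls how strongly the cross-attention maps of the original generation are preserved during editing, which is what keeps the overall image structure intact.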
StableDiffusionPix2PixZeroPipeline |
class diffusers.StableDiffusionPix2PixZeroPipeline |
( vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: Union[DDPMScheduler, DDIMScheduler, EulerAncestralDiscreteScheduler, LMSDiscreteScheduler], feature_extractor: CLIPFeatureExtractor, safety_checker: StableDiffusionSafetyChecker, inverse_scheduler: DDIMInverseScheduler, caption_generator: BlipForConditionalGeneration, caption_processor: BlipProcessor, requires_safety_checker: bool = True )
Parameters

vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.

tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.

unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.

scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, EulerAncestralDiscreteScheduler, or DDPMScheduler.

safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.

feature_extractor (CLIPFeatureExtractor) — Model that extracts features from generated images to be used as inputs for the safety_checker.

inverse_scheduler (DDIMInverseScheduler) — Scheduler used to invert an input image into latent noise before editing.

caption_generator (BlipForConditionalGeneration) — BLIP model used to generate a caption for the input image when none is provided.

caption_processor (BlipProcessor) — Processor that prepares inputs for the caption_generator.

requires_safety_checker (bool) — Whether the pipeline requires a safety checker. We recommend setting it to True if you’re using the pipeline publicly.
Pipeline for pixel-level image editing using Pix2Pix Zero. Based on Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the |
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) |
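As a rough sketch of how the pipeline might be assembled (the checkpoint names below are assumptions for illustration; extra components such as the BLIP captioner are passed to from_pretrained as keyword arguments):

```py
import torch
from transformers import BlipForConditionalGeneration, BlipProcessor
from diffusers import DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline

# Assumed checkpoint names, for illustration only.
captioner_id = "Salesforce/blip-image-captioning-base"
caption_processor = BlipProcessor.from_pretrained(captioner_id)
caption_generator = BlipForConditionalGeneration.from_pretrained(
    captioner_id, torch_dtype=torch.float16
)

pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    caption_generator=caption_generator,
    caption_processor=caption_processor,
    torch_dtype=torch.float16,
)
# The inverse scheduler is needed to invert real input images before editing.
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.to("cuda")
```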
__call__ |