negative_prompt_embeds (torch.FloatTensor, optional) –
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
cross_attention_guidance_amount (float, defaults to 0.1) –
Amount of guidance needed from the reference cross-attention maps. |
output_type (str, optional, defaults to "pil") –
The output format of the generated image. Choose between
PIL: PIL.Image.Image or np.array. |
return_dict (bool, optional, defaults to True) –
Whether or not to return a StableDiffusionPipelineOutput instead of a |
plain tuple. |
callback (Callable, optional) –
A function that will be called every callback_steps steps during inference. The function will be
called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
See the sketch after this parameter list.
callback_steps (int, optional, defaults to 1) –
The frequency at which the callback function will be called. If not specified, the callback will be |
called at every step. |
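A minimal sketch of such a callback, assuming a constructed pipeline as in the example further below; the function name and print format are illustrative:

import torch

def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # Invoked every `callback_steps` denoising steps with the current latents.
    print(f"step={step} timestep={timestep} latents shape={tuple(latents.shape)}")

# images = pipeline(prompt, ..., callback=log_progress, callback_steps=10).images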
Returns |
StableDiffusionPipelineOutput or tuple |
StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation. |
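With return_dict=False the output can be unpacked directly; a minimal sketch, assuming a pipeline and embeddings constructed as in the example below:

images, nsfw_flags = pipeline(
    prompt,
    source_embeds=src_embeds,
    target_embeds=target_embeds,
    return_dict=False,
)
# nsfw_flags is a list of bools (it may be None when the safety checker is disabled).
images[0].save("edited_image.png")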
Examples: |
>>> import requests |
>>> import torch |
>>> from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline |
>>> # Helper to fetch the pre-computed concept embeddings used below.
>>> def download(embedding_url, local_filepath):
...     r = requests.get(embedding_url)
...     with open(local_filepath, "wb") as f:
...         f.write(r.content)
>>> model_ckpt = "CompVis/stable-diffusion-v1-4" |
>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16) |
>>> # Pix2Pix Zero uses deterministic DDIM sampling.
>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.to("cuda") |
>>> prompt = "a high resolution painting of a cat in the style of van gogh"
>>> # Pre-computed embeddings for the source ("cat") and target ("dog") concepts.
>>> source_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/cat.pt"
>>> target_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/dog.pt"
>>> for url in [source_emb_url, target_emb_url]: |
... download(url, url.split("/")[-1]) |
>>> src_embeds = torch.load(source_emb_url.split("/")[-1]) |
>>> target_embeds = torch.load(target_emb_url.split("/")[-1]) |
>>> images = pipeline( |
... prompt, |
... source_embeds=src_embeds, |
... target_embeds=target_embeds, |
... num_inference_steps=50, |
... cross_attention_guidance_amount=0.15, |
... ).images |
>>> images[0].save("edited_image_dog.png") |
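The example downloads pre-computed concept embeddings. A minimal sketch of how such embeddings can be built by hand, assuming illustrative caption lists and reusing the pipeline's tokenizer and text encoder (the official recipe instead generates the captions with an auxiliary language model):

import torch

@torch.no_grad()
def embed_captions(pipeline, captions, device="cuda"):
    embeds = []
    for caption in captions:
        inputs = pipeline.tokenizer(
            caption,
            padding="max_length",
            max_length=pipeline.tokenizer.model_max_length,
            truncation=True,
            return_tensors="pt",
        )
        # Sequence output of the CLIP text encoder, shape (1, seq_len, hidden_dim).
        embeds.append(pipeline.text_encoder(inputs.input_ids.to(device))[0])
    return torch.cat(embeds, dim=0)

src_embeds = embed_captions(pipeline, ["a photo of a cat", "a cat sitting on a sofa"])
target_embeds = embed_captions(pipeline, ["a photo of a dog", "a dog sitting on a sofa"])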
construct_direction
( embs_source: Tensor, embs_target: Tensor )
Constructs the edit direction to steer the image generation process semantically. |
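Conceptually, the direction is the difference between the mean target and mean source embeddings; a minimal sketch of that idea (not necessarily the library's exact implementation):

import torch

def construct_direction(embs_source: torch.Tensor, embs_target: torch.Tensor):
    # Average each concept's caption embeddings, then take the difference
    # as the semantic edit direction (e.g. "cat" -> "dog").
    return (embs_target.mean(0) - embs_source.mean(0)).unsqueeze(0)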
enable_model_cpu_offload
( )
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared with enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model stays on the GPU until the next model runs.
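This call can replace the pipeline.to("cuda") step in the example above when GPU memory is tight (requires the accelerate package):

pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()  # models are moved to the GPU one at a time as needed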