strength (float, optional): Conceptually, indicates how much to transform the reference emb. Must be between 0 and 1. image will be used as a starting point, with more noise added to it the larger the strength. The number of denoising steps actually run depends on the amount of noise initially added (see the sketch after this parameter list).

emb (torch.FloatTensor): The image embedding.

negative_prompt (str or List[str], optional): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

num_images_per_prompt (int, optional, defaults to 1): The number of images to generate per prompt.

num_inference_steps (int, optional, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

generator (torch.Generator or List[torch.Generator], optional): One or a list of torch generator(s) to make generation deterministic.

guidance_scale (float, optional, defaults to 4.0): Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images closely linked to the text prompt, usually at the expense of lower image quality.

output_type (str, optional, defaults to "pt"): The output format of the generated image. Choose between: "np" (np.array) or "pt" (torch.Tensor).

return_dict (bool, optional, defaults to True): Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns

KandinskyPriorPipelineOutput or tuple
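A minimal sketch of how strength and num_inference_steps interact, following the usual img2img convention; the helper name and rounding below are illustrative assumptions, not the pipeline's verbatim code:

>>> # strength decides how many of the scheduled steps are actually run on the
>>> # noised reference embedding (assumed img2img-style bookkeeping).
>>> def effective_timesteps(timesteps, num_inference_steps, strength):
...     init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
...     t_start = max(num_inference_steps - init_timestep, 0)
...     return timesteps[t_start:]
>>> steps = list(range(100, 0, -1))  # stand-in for a scheduler's timestep list
>>> len(effective_timesteps(steps, num_inference_steps=100, strength=0.2))
20

With strength=0.2 and 100 scheduled steps, only the last 20 steps are denoised, so the output stays close to the reference embedding; strength=1.0 would run the full schedule.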
Function invoked when calling the pipeline for generation.

Examples:

>>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline
>>> from diffusers.utils import load_image
>>> import torch
>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")
>>> prompt = "red cat, 4k photo"
>>> img = load_image(
... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
... "/kandinsky/cat.png"
... )
>>> image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple()
>>> pipe = KandinskyV22Pipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> image = pipe(
... image_embeds=image_emb,
... negative_image_embeds=negative_image_emb,
... height=768,
... width=768,
... num_inference_steps=100,
... ).images
>>> image[0].save("cat.png")

interpolate

( images_and_prompts: List, weights: List, num_images_per_prompt: int = 1, num_inference_steps: int = 25, generator: Union = None, latents: Optional = None, negative_prior_prompt: Optional = None, negative_prompt: str = '', guidance_scale: float = 4.0, device = None ) → KandinskyPriorPipelineOutput or tuple

Parameters

images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]): The list of prompts and images to guide the image generation.

weights (List[float]): The list of weights for each condition in images_and_prompts (see the sketch after this parameter list).

num_images_per_prompt (int, optional, defaults to 1): The number of images to generate per prompt.

num_inference_steps (int, optional, defaults to 25): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

generator (torch.Generator or List[torch.Generator], optional): One or a list of torch generator(s) to make generation deterministic.

latents (torch.FloatTensor, optional): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.

negative_prior_prompt (str, optional): The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

negative_prompt (str or List[str], optional): The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

guidance_scale (float, optional, defaults to 4.0): Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images closely linked to the text prompt, usually at the expense of lower image quality.

Returns

KandinskyPriorPipelineOutput or tuple
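Conceptually, interpolate encodes each entry of images_and_prompts into an image embedding (text prompts through the prior, images through the image encoder) and combines them as a weighted sum. A minimal sketch of that combination step, using stand-in tensors rather than the pipeline's real encoders:

>>> import torch
>>> def weighted_mix(embeddings, weights):
...     # Weighted sum of per-condition embeddings; a conceptual sketch of the
...     # core of interpolate(), not its exact implementation.
...     mixed = torch.zeros_like(embeddings[0])
...     for emb, w in zip(embeddings, weights):
...         mixed = mixed + w * emb
...     return mixed
>>> embs = [torch.randn(1, 1280) for _ in range(3)]  # stand-ins for three encoded conditions
>>> weighted_mix(embs, [0.3, 0.3, 0.4]).shape  # same weights as the example below
torch.Size([1, 1280])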
Function invoked when using the prior pipeline for interpolation.

Examples:

>>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline
>>> from diffusers.utils import load_image
>>> import PIL
>>> import torch
>>> from torchvision import transforms
>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")
>>> img1 = load_image(
... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
... "/kandinsky/cat.png"
... )
>>> img2 = load_image(
... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
... "/kandinsky/starry_night.jpeg"
... )
>>> images_texts = ["a cat", img1, img2]
>>> weights = [0.3, 0.3, 0.4]
>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights)
>>> pipe = KandinskyV22Pipeline.from_pretrained(
... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> image = pipe(