expense of slower inference.
generator (torch.Generator or List[torch.Generator], optional) –
One or a list of torch generator(s) to make generation deterministic.
latents (torch.FloatTensor, optional) –
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
negative_prior_prompt (str, optional) –
The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt (str or List[str], optional) –
The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
guidance_scale (float, optional, defaults to 4.0) –
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w in equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
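For reference, equation 2 of the Imagen paper defines the guided noise prediction that w scales:

$$\tilde{\epsilon}_\theta(z_t, c) = w\,\epsilon_\theta(z_t, c) + (1 - w)\,\epsilon_\theta(z_t)$$

where $\epsilon_\theta(z_t, c)$ is the conditional noise prediction, $\epsilon_\theta(z_t)$ the unconditional one, and $w$ is guidance_scale: $w = 1$ reduces to the purely conditional model, while larger $w$ pushes samples toward the conditioning at the cost of diversity and, often, image quality.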
Returns
KandinskyPriorPipelineOutput or tuple

Function invoked when using the prior pipeline for interpolation.

Examples:

>>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
>>> from diffusers.utils import load_image
>>> import PIL
>>> import torch
>>> from torchvision import transforms

>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> img1 = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... )
>>> img2 = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/starry_night.jpeg"
... )
>>> images_texts = ["a cat", img1, img2]
>>> weights = [0.3, 0.3, 0.4]
>>> out = pipe_prior.interpolate(images_texts, weights)

>>> pipe = KandinskyV22Pipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> image = pipe(
...     image_embeds=out.image_embeds,
...     negative_image_embeds=out.negative_image_embeds,
...     height=768,
...     width=768,
...     num_inference_steps=50,
... ).images[0]

>>> image.save("starry_cat.png")
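Each entry of weights pairs with the condition at the same position in images_texts; interpolate combines the CLIP embeddings of the text and image conditions according to these weights, and the resulting image_embeds and negative_image_embeds are then passed to the decoder pipeline as in any other generation.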
KandinskyV22Pipeline

class diffusers.KandinskyV22Pipeline

( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel )

Parameters

scheduler (Union[DDIMScheduler, DDPMScheduler]) –
A scheduler to be used in combination with unet to generate image latents.
unet (UNet2DConditionModel) –
Conditional U-Net architecture to denoise the image embedding.
movq (VQModel) –
MoVQ Decoder to generate the image from the latents.

Pipeline for text-to-image generation using Kandinsky. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all pipelines (such as downloading or saving, or running on a particular device).

__call__

( image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]] negative_image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]] height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None latents: Optional[torch.FloatTensor] = None output_type: Optional[str] = 'pil' return_dict: bool = True callback_on_step_end: Optional[Callable] = None callback_on_step_end_tensor_inputs: List[str] = ['latents'] **kwargs ) → ImagePipelineOutput or tuple

Parameters

image_embeds (torch.FloatTensor or List[torch.FloatTensor]) –
The CLIP image embeddings for the text prompt, which will be used to condition the image generation.
negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) –
The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation.
height (int, optional, defaults to 512) –
The height in pixels of the generated image.
width (int, optional, defaults to 512) –
The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 100) –
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 4.0) –
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w in equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
num_images_per_prompt (int, optional, defaults to 1) –
The number of images to generate per prompt.
generator (torch.Generator or List[torch.Generator], optional) –
One or a list of torch generator(s) to make generation deterministic.
latents (torch.FloatTensor, optional) –
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
output_type (str, optional, defaults to "pil") –
The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.ndarray) or "pt" (torch.Tensor).
return_dict (bool, optional, defaults to True) –
Whether or not to return an ImagePipelineOutput instead of a plain tuple.
callback_on_step_end (Callable, optional) –
A function that is called at the end of each denoising step during inference (see the sketch after this parameter list). The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) –
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
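A minimal sketch of a step-end callback with the signature documented above; it only reads the latents tensor, which is available because "latents" is in the default callback_on_step_end_tensor_inputs. The pipe, image_emb and zero_image_emb names reuse the example below:

>>> def log_latents(pipe, step, timestep, callback_kwargs):
...     latents = callback_kwargs["latents"]
...     print(f"step {step}, timestep {timestep}: latents std {latents.std().item():.3f}")
...     return callback_kwargs  # the callback must return the (possibly modified) kwargs dict

>>> image = pipe(
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     callback_on_step_end=log_latents,
... ).images[0]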
Returns
ImagePipelineOutput or tuple

Function invoked when calling the pipeline for generation.

Examples:

>>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline
>>> import torch

>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior")
>>> pipe_prior.to("cuda")

>>> prompt = "red cat, 4k photo"
>>> out = pipe_prior(prompt)
>>> image_emb = out.image_embeds
>>> zero_image_emb = out.negative_image_embeds

>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder")
>>> pipe.to("cuda")

>>> image = pipe(
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=50,
... ).images

>>> image[0].save("cat.png")
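A minimal sketch of deterministic generation using the generator and latents parameters documented above, reusing pipe, image_emb and zero_image_emb from this example. The latent shape is an assumption (4 latent channels, 8x spatial downsampling); check the unet and movq configs for the actual values:

>>> generator = torch.Generator(device="cuda").manual_seed(42)
>>> # Pre-sampling the noise is optional; seeding the generator alone is
>>> # enough for reproducibility, since the pipeline samples latents from it.
>>> latents = torch.randn((1, 4, 768 // 8, 768 // 8), generator=generator, device="cuda")
>>> image = pipe(
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=50,
...     generator=generator,
...     latents=latents,
... ).images[0]
>>> image.save("cat_seeded.png")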
KandinskyV22CombinedPipeline

class diffusers.KandinskyV22CombinedPipeline

( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor )

Parameters

scheduler (Union[DDIMScheduler, DDPMScheduler]) –
A scheduler to be used in combination with unet to generate image latents.
unet (UNet2DConditionModel) –
Conditional U-Net architecture to denoise the image embedding.
movq (VQModel) –
MoVQ Decoder to generate the image from the latents.
prior_prior (PriorTransformer) –