negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
)
|
Returns:
    ~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple
|
Parameters

prompt (str or List[str], optional):
    The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor):
    The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor):
    The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 50):
    The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
|
guidance_scale (float, optional, defaults to 7.5):
    Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
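The guidance blend described above can be sketched in plain Python. This is an illustrative stand-in, not the pipeline's implementation: plain lists substitute for the noise-prediction tensors, and the function name is hypothetical.

```python
def apply_guidance(noise_uncond, noise_text, guidance_scale):
    """Classifier-free guidance sketch: move the unconditional noise
    prediction toward the text-conditioned one by a factor of
    guidance_scale (the `w` of equation 2 of the Imagen paper).

    Plain lists stand in for the tensors a real pipeline would use.
    """
    return [u + guidance_scale * (t - u)
            for u, t in zip(noise_uncond, noise_text)]

# With guidance_scale == 1.0 the blend reduces to the text-conditioned
# prediction; values > 1 push further in the text-conditioned direction.
blended = apply_guidance([0.0, 0.5], [1.0, 0.75], guidance_scale=7.5)
```

With guidance_scale at or below 1 the unconditional branch contributes nothing extra, which is why the docs say guidance is only "enabled" above 1.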
|
negative_prompt (str or List[str], optional):
    The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
|
num_images_per_prompt (int, optional, defaults to 1):
    The number of images to generate per prompt.
eta (float, optional, defaults to 0.0):
    Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler and is ignored for other schedulers.
generator (torch.Generator or List[torch.Generator], optional):
    One or a list of torch generator(s) to make generation deterministic.
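The role of an explicit generator can be sketched with Python's standard library. This is a conceptual stand-in: `random.Random` substitutes for `torch.Generator`, and `sample_latents` is a hypothetical helper, not a diffusers function.

```python
import random

def sample_latents(n, generator):
    """Draw n pseudo-Gaussian values from an explicit generator object.

    Conceptual stand-in for torch.randn(..., generator=generator);
    random.Random here substitutes for torch.Generator.
    """
    return [generator.gauss(0.0, 1.0) for _ in range(n)]

# Two generators seeded identically yield identical "latents" -- this
# per-call seeding is what makes generation deterministic without
# touching global random state.
a = sample_latents(4, random.Random(42))
b = sample_latents(4, random.Random(42))
assert a == b
```

Passing a generator per call, rather than seeding globally, keeps reproducibility local to one generation and lets a list of generators give each image in a batch its own reproducible noise.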
|
latents (torch.FloatTensor, optional):
    Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
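If you do supply your own latents, their shape must match what the pipeline would have sampled. A sketch of that shape computation, assuming the common Stable-Diffusion-style defaults of 4 latent channels and a VAE scale factor of 8 (the real values come from unet.config.in_channels and the pipeline's vae_scale_factor):

```python
def latents_shape(batch_size, height, width,
                  in_channels=4, vae_scale_factor=8):
    """Expected shape of user-supplied noisy latents.

    in_channels=4 and vae_scale_factor=8 are assumed defaults for
    Stable-Diffusion-style models; a real pipeline reads them from
    its UNet and VAE configs.
    """
    return (batch_size, in_channels,
            height // vae_scale_factor, width // vae_scale_factor)

# A single 512x512 image corresponds to a (1, 4, 64, 64) latent tensor.
shape = latents_shape(1, 512, 512)
```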
|
prompt_embeds (torch.FloatTensor, optional):
    Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional):
    Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
|
output_type (str, optional, defaults to "pil"):
    The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
|
return_dict (bool, optional, defaults to True):
    Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional):
    A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1):
    The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
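The interaction between callback and callback_steps can be sketched as a plain loop. Everything here is illustrative: the timestep and latents are dummies, and the real denoising loop lives inside the pipeline.

```python
def run_denoising_loop(num_inference_steps, callback=None, callback_steps=1):
    """Minimal sketch of how a pipeline invokes `callback` every
    `callback_steps` denoising steps. `timestep` and `latents` are
    placeholders, not real scheduler state."""
    latents = "latents-placeholder"
    for i in range(num_inference_steps):
        timestep = num_inference_steps - i  # stand-in for the scheduler timestep
        # ... one denoising step would update `latents` here ...
        if callback is not None and i % callback_steps == 0:
            callback(i, timestep, latents)

seen = []
run_denoising_loop(6, callback=lambda step, t, lat: seen.append(step),
                   callback_steps=2)
# `seen` now holds the steps at which the callback fired: [0, 2, 4]
```

With the default callback_steps=1 the callback fires on every step, which matches the documented default behavior; a callback is a convenient place to log progress or inspect intermediate latents.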
|
cross_attention_kwargs (dict, optional):
    A kwargs dictionary that, if specified, is passed along to the AttnProcessor as defined under self.processor in diffusers.cross_attention.
|