SemanticStableDiffusionPipeline
Parameters
text_encoder (CLIPTextModel) —
Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
tokenizer (CLIPTokenizer) —
Tokenizer of class CLIPTokenizer.
unet (UNet2DConditionModel) —
Conditional U-Net architecture to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
safety_checker (Q16SafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
feature_extractor (CLIPImageProcessor) —
Model that extracts features from generated images to be used as inputs for the safety_checker.
Pipeline for text-to-image generation with latent editing.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, or running on a particular device).
This model builds on the implementation of StableDiffusionPipeline.
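A minimal usage sketch follows; the checkpoint name and the concrete editing values are illustrative, assuming the usual diffusers from_pretrained API:

```python
import torch
from diffusers import SemanticStableDiffusionPipeline

# Load the pipeline from a Stable Diffusion checkpoint (checkpoint name is illustrative).
pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate an image while steering the latents toward the editing concept.
out = pipe(
    prompt="a photo of the face of a woman",
    guidance_scale=7.5,
    editing_prompt=["smiling, smile"],  # concept(s) to guide toward
    reverse_editing_direction=[False],  # guide toward, not away from, the concept
    edit_guidance_scale=[6.0],
    edit_warmup_steps=[10],
    edit_threshold=[0.99],
    edit_momentum_scale=0.3,
    edit_mom_beta=0.6,
)
image = out.images[0]
```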
__call__
(
prompt: typing.Union[str, typing.List[str]]
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: int = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
editing_prompt: typing.Union[str, typing.List[str], NoneType] = None
editing_prompt_embeddings: typing.Optional[torch.Tensor] = None
reverse_editing_direction: typing.Union[bool, typing.List[bool], NoneType] = False
edit_guidance_scale: typing.Union[float, typing.List[float], NoneType] = 5
edit_warmup_steps: typing.Union[int, typing.List[int], NoneType] = 10
edit_cooldown_steps: typing.Union[int, typing.List[int], NoneType] = None
edit_threshold: typing.Union[float, typing.List[float], NoneType] = 0.9
edit_momentum_scale: typing.Optional[float] = 0.1
edit_mom_beta: typing.Optional[float] = 0.4
edit_weights: typing.Optional[typing.List[float]] = None
sem_guidance: typing.Optional[typing.List[torch.Tensor]] = None
)
→ SemanticStableDiffusionPipelineOutput or tuple
Parameters
prompt (str or List[str]) —
The prompt or prompts to guide the image generation.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality. See the sketches after this parameter list for how the guidance terms are applied.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
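For reference, the classifier-free guidance rule that guidance_scale parameterizes (equation 2 of the Imagen paper, restated here for clarity):

```latex
\tilde{\epsilon}_\theta(z_t, c) = w \, \epsilon_\theta(z_t, c) + (1 - w) \, \epsilon_\theta(z_t)
```

Here w is guidance_scale, c is the text conditioning, and ε_θ is the U-Net noise prediction; w > 1 pushes the combined estimate toward the conditioned prediction.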
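And a hedged sketch of how the edit_* parameters could enter a single denoising step. This is a simplified, single-concept illustration of semantic guidance, not the pipeline's verbatim code: sega_step is a hypothetical helper, and multi-concept combination via edit_weights is omitted.

```python
import torch

def sega_step(noise_uncond, noise_text, noise_edit, momentum,
              guidance_scale=7.5, edit_guidance_scale=5.0,
              edit_threshold=0.9, edit_momentum_scale=0.1,
              edit_mom_beta=0.4, reverse_editing_direction=False):
    # Classifier-free guidance toward the main prompt.
    noise_pred = noise_uncond + guidance_scale * (noise_text - noise_uncond)

    # Direction from the unconditional estimate toward the editing concept.
    edit = (noise_edit - noise_uncond) * edit_guidance_scale
    if reverse_editing_direction:
        edit = -edit  # guide away from the concept instead

    # Keep only the largest-magnitude elements per image: edit_threshold is
    # a quantile, so higher values make the edit direction sparser.
    thresh = torch.quantile(edit.abs().flatten(1), edit_threshold, dim=1)
    edit = torch.where(edit.abs() >= thresh.view(-1, 1, 1, 1),
                       edit, torch.zeros_like(edit))

    # Momentum accumulated across steps (initialized to zeros, and already
    # built up during the edit_warmup_steps in which the term is not applied).
    edit = edit + edit_momentum_scale * momentum
    momentum = edit_mom_beta * momentum + (1 - edit_mom_beta) * edit

    # Between warmup and cooldown the edit term is added; outside that
    # window it would be skipped.
    return noise_pred + edit, momentum
```

With several editing prompts, one such term per concept would be computed and combined, edit_weights controlling their relative contributions and the per-concept warmup/cooldown windows gating when each term is active.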