Tips
The Semantic Guidance pipeline can be used with any Stable Diffusion checkpoint.
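For instance, a different checkpoint can simply be swapped in when loading the pipeline. The following is a minimal sketch; stabilityai/stable-diffusion-2-1 is used here only as one possible choice:

import torch
from diffusers import SemanticStableDiffusionPipeline

# Any Stable Diffusion checkpoint should work; stable-diffusion-2-1 is only an example.
pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")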
Run Semantic Guidance
The interface of SemanticStableDiffusionPipeline exposes several additional parameters that influence the image generation. Example usage may look like this:
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

out = pipe(
    prompt="a photo of the face of a woman",
    num_images_per_prompt=1,
    guidance_scale=7,
    editing_prompt=[
        "smiling, smile",  # Concepts to apply
        "glasses, wearing glasses",
        "curls, wavy hair, curly hair",
        "beard, full beard, mustache",
    ],
    reverse_editing_direction=[False, False, False, False],  # Direction of guidance, i.e. increase all concepts
    edit_warmup_steps=[10, 10, 10, 10],  # Warmup period for each concept
    edit_guidance_scale=[4, 5, 5, 5.4],  # Guidance scale for each concept
    edit_threshold=[
        0.99,
        0.975,
        0.925,
        0.96,
    ],  # Threshold for each concept; the percentile of the latent space that is discarded, i.e. threshold=0.99 uses only 1% of the latent dimensions
    edit_momentum_scale=0.3,  # Momentum scale that is added to the latent guidance
    edit_mom_beta=0.6,  # Momentum beta
    edit_weights=[1, 1, 1, 1],  # Weights of the individual concepts against each other (one weight per editing prompt)
)
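The call returns a SemanticStableDiffusionPipelineOutput (documented below), so the edited image can be retrieved from its images field. A minimal follow-up to the example above; the file name is arbitrary:

image = out.images[0]  # first (and only) generated image
image.save("semantic_guidance_example.png")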
For more examples, check out the Colab notebook.
SemanticStableDiffusionPipelineOutput
class diffusers.pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput
( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray], nsfw_content_detected: typing.Optional[typing.List[bool]] )
Parameters
images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or a numpy array of shape (batch_size, height, width, num_channels). The PIL images or numpy array represent the denoised images of the diffusion pipeline.
nsfw_content_detected (List[bool]) —
List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.
Output class for Stable Diffusion pipelines.
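As a brief illustration, both fields can be read directly off the returned object; this sketch assumes out is the result of the pipeline call shown above:

# Inspect the safety-checker flags, if the safety checker ran.
if out.nsfw_content_detected is not None:
    for i, flagged in enumerate(out.nsfw_content_detected):
        print(f"image {i}: nsfw={flagged}")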
SemanticStableDiffusionPipeline
class diffusers.SemanticStableDiffusionPipeline
( vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, requires_safety_checker: bool = True )
Parameters
vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
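The components listed above can also be loaded individually and passed to the constructor. The following is a sketch assuming the standard Stable Diffusion v1-5 repository layout and a DDIM scheduler; in practice, SemanticStableDiffusionPipeline.from_pretrained wires these components up automatically:

from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer

from diffusers import AutoencoderKL, DDIMScheduler, SemanticStableDiffusionPipeline, UNet2DConditionModel
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint with the standard subfolder layout

# Load each component from its subfolder in the checkpoint repository.
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
safety_checker = StableDiffusionSafetyChecker.from_pretrained(model_id, subfolder="safety_checker")
feature_extractor = CLIPImageProcessor.from_pretrained(model_id, subfolder="feature_extractor")

# Assemble the pipeline from the individual components.
pipe = SemanticStableDiffusionPipeline(
    vae=vae,
    text_encoder=text_encoder,
    tokenizer=tokenizer,
    unet=unet,
    scheduler=scheduler,
    safety_checker=safety_checker,
    feature_extractor=feature_extractor,
)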