Parameters
vqvae (VQModel) —
Vector-quantized (VQ) VAE model to encode and decode images to and from latent representations.
unet (UNet2DModel) — U-Net architecture to denoise the encoded image.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler,
EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler.
A pipeline for image super-resolution using latent diffusion.
This class inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the |
library implements for all the pipelines (such as downloading or saving, or running on a particular device).
__call__
(
image: typing.Union[torch.Tensor, PIL.Image.Image] = None
batch_size: typing.Optional[int] = 1
num_inference_steps: typing.Optional[int] = 100
eta: typing.Optional[float] = 0.0
generator: typing.Union[torch.Generator, typing.List[torch.Generator], NoneType] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
) → ImagePipelineOutput or tuple
Parameters |
image (torch.Tensor or PIL.Image.Image) —
Image or tensor representing an image batch, to be used as the starting point for the super-resolution
process.
batch_size (int, optional, defaults to 1) — |
Number of images to generate. |
num_inference_steps (int, optional, defaults to 100) — |
The number of denoising steps. More denoising steps usually lead to a higher quality image at the |
expense of slower inference. |
eta (float, optional, defaults to 0.0) — |
Corresponds to the parameter eta (η) from the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
the DDIMScheduler and is ignored in other schedulers.
generator (torch.Generator, optional) —
A torch.Generator, or a list of generators, used to make generation deterministic.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between
PIL.Image.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple.
Returns
ImagePipelineOutput or tuple
~pipelines.utils.ImagePipelineOutput if return_dict is
True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
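To make the parameters above concrete, here is a minimal usage sketch for the super-resolution pipeline. It assumes the LDMSuperResolutionPipeline class from diffusers and the CompVis/ldm-super-resolution-4x-openimages checkpoint; the input file name is a placeholder, so adapt it to your own image.

import torch
from PIL import Image
from diffusers import LDMSuperResolutionPipeline

# Load the pipeline (checkpoint name is an assumption; swap in the one you use)
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages")
pipeline = pipeline.to(device)

# Any small RGB image works as input; "low_res.png" is a hypothetical local file
low_res_img = Image.open("low_res.png").convert("RGB").resize((128, 128))

# Run the denoising loop; eta only has an effect with the DDIM scheduler
upscaled = pipeline(image=low_res_img, num_inference_steps=100, eta=1.0).images[0]
upscaled.save("upscaled.png")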
Stable Diffusion 2

Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels.

These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION's NSFW filter.

For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model, so check out its API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as few as 20 steps.

Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image:

Task | Repository
text-to-image (512x512) | stabilityai/stable-diffusion-2-base
text-to-image (768x768) | stabilityai/stable-diffusion-2
inpainting | stabilityai/stable-diffusion-2-inpainting
super-resolution | stabilityai/stable-diffusion-x4-upscaler
depth-to-image | stabilityai/stable-diffusion-2-depth

Here are some examples of how to use Stable Diffusion 2 for each task.

Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you're interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations!

Text-to-image

from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch |
repo_id = "stabilityai/stable-diffusion-2-base" |
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") |
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) |
pipe = pipe.to("cuda") |
prompt = "High quality photo of an astronaut riding a horse in space" |
image = pipe(prompt, num_inference_steps=25).images[0] |
image

Inpainting

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler |
from diffusers.utils import load_image, make_image_grid |
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" |
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" |
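The snippet above stops after defining the image URLs. A minimal continuation, following the same pattern as the text-to-image example, might look like the sketch below; the repository id matches the inpainting entry in the task table, while the prompt and the 512x512 resize are illustrative assumptions.

init_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))

# Load the inpainting checkpoint and swap in the recommended scheduler
repo_id = "stabilityai/stable-diffusion-2-inpainting"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# The masked region is filled in according to the prompt (illustrative prompt)
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)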
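Super-resolution

The task table above also lists an x4 upscaler checkpoint. A minimal sketch of driving it through diffusers is shown below; it assumes the StableDiffusionUpscalePipeline class and a low-resolution input image you supply yourself (the file name is a placeholder).

import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

# Load the x4 upscaler checkpoint from the task table
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
)
upscaler = upscaler.to("cuda")

# Any small RGB image works as input; the path is a hypothetical local file
low_res_img = load_image("low_res_cat.png").resize((128, 128))

# The upscaler is text-guided, so a short description of the image helps
prompt = "a white cat"
upscaled = upscaler(prompt=prompt, image=low_res_img).images[0]
upscaled.save("upscaled_cat.png")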