< source >
( batch_size: int = 1 num_inference_steps: int = 2000 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple
Parameters
batch_size (int, optional, defaults to 1) —
The number of images to generate.
generator (torch.Generator, optional) —
One or a list of torch generator(s) to make generation deterministic.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple.
Returns
ImagePipelineOutput or tuple
~pipelines.utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
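Passing seeded generators is what makes generation deterministic: reseeding the same way reproduces the same initial noise and therefore the same images. Below is a minimal stdlib sketch of the idea, using Python's random.Random in place of torch.Generator and a hypothetical fake_pipeline standing in for a real diffusion pipeline:

```python
import random

def fake_pipeline(batch_size, generator=None):
    # Hypothetical stand-in for a diffusion pipeline: draws "initial noise"
    # from the supplied generator (or the global RNG when none is given).
    rng = generator if generator is not None else random
    return [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(batch_size)]

# Seeding the generator identically twice yields identical outputs.
a = fake_pipeline(2, generator=random.Random(0))
b = fake_pipeline(2, generator=random.Random(0))
assert a == b
```

Passing a list of generators plays the same role per image in a batch, so each image stays reproducible independently of the batch size.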
KarrasVeScheduler
KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers.
KarrasVeScheduler
class diffusers.KarrasVeScheduler
< source >
( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 )
Parameters
sigma_min (float, defaults to 0.02) —
The minimum noise magnitude.
sigma_max (float, defaults to 100) —
The maximum noise magnitude.
s_noise (float, defaults to 1.007) —
The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, 1.011].
s_churn (float, defaults to 80) —
The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100].
s_min (float, defaults to 0.05) —
The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10].
s_max (float, defaults to 50) —
The end value of the sigma range to add noise. A reasonable range is [0.2, 80].
A stochastic scheduler tailored to variance-expanding models. This scheduler inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers, such as loading and saving.
For more details on the parameters, see Appendix E of the paper. The grid search values used to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper.
add_noise_to_input
< source >
( sample: FloatTensor sigma: float generator: Optional = None )
Parameters
sample (torch.FloatTensor) —
The input sample.
sigma (float) —
generator (torch.Generator, optional) —
A random number generator.
Explicit Langevin-like "churn" step of adding noise to the sample according to a gamma_i ≥ 0 to reach a higher noise level sigma_hat = sigma_i + gamma_i * sigma_i.
scale_model_input
< source >
( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor
Parameters
sample (torch.FloatTensor) —
The input sample.
timestep (int, optional) —
The current timestep in the diffusion chain.
Returns
torch.FloatTensor
A scaled input sample.
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps
< source >
( num_inference_steps: int device: Union = None )
Parameters
num_inference_steps (int) —
The number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device, optional) —
The device to which the timesteps should be moved. If None, the timesteps are not moved.
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
step
< source >
( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple
Parameters
model_output (torch.FloatTensor) —
The direct output from the learned diffusion model.
sigma_hat (float) —
sigma_prev (float) —
sample_hat (torch.FloatTensor) —
return_dict (bool, optional, defaults to True) —
Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple.
Returns
~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple
If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
step_correct
< source >
( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO)
Parameters
model_output (torch.FloatTensor) —
The direct output from the learned diffusion model.
sigma_hat (float) — TODO
sigma_prev (float) — TODO
sample_hat (torch.FloatTensor) — TODO
sample_prev (torch.FloatTensor) — TODO
derivative (torch.FloatTensor) — TODO
return_dict (bool, optional, defaults to True) —
Whether or not to return a DDPMSchedulerOutput or tuple.
Returns
prev_sample (TODO)
The updated sample in the diffusion chain. derivative (TODO): TODO
Corrects the predicted sample based on the model_output of the network.
KarrasVeOutput
class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput
< source >
( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None )
Parameters
prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
The computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.
derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
Derivative of the predicted original image sample (x_0).
pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
The predicted denoised sample (x_0) based on the model output from the current timestep. pred_original_sample can be used to preview progress or for guidance.
Output class for the scheduler's step function output.
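The methods above fit together as one stochastic sampling step: add_noise_to_input raises the noise level (the churn step), step takes an Euler step down to sigma_prev, and step_correct applies a second-order correction. The following is a minimal pure-Python sketch of that flow, assuming a hypothetical denoise() in place of the learned model and scalar samples in place of tensors:

```python
import math
import random

def add_noise_to_input(sample, sigma, s_churn=80.0, s_min=0.05, s_max=50.0,
                       s_noise=1.007, num_steps=50, rng=random.Random(0)):
    # Churn: raise the noise level to sigma_hat = sigma + gamma * sigma,
    # adding matching noise so the sample sits at that higher level.
    gamma = min(s_churn / num_steps, math.sqrt(2) - 1) if s_min <= sigma <= s_max else 0.0
    sigma_hat = sigma + gamma * sigma
    eps = s_noise * rng.gauss(0.0, 1.0)
    sample_hat = sample + math.sqrt(sigma_hat**2 - sigma**2) * eps
    return sample_hat, sigma_hat

def denoise(x, sigma):
    # Hypothetical stand-in for the learned model's estimate of the clean
    # sample x_0; a real pipeline would run the network here.
    return x / (1.0 + sigma)

def step(sample_hat, sigma_hat, sigma_prev):
    # Euler step from sigma_hat down to sigma_prev.
    derivative = (sample_hat - denoise(sample_hat, sigma_hat)) / sigma_hat
    return sample_hat + (sigma_prev - sigma_hat) * derivative, derivative

def step_correct(sample_hat, sigma_hat, sample_prev, sigma_prev, derivative):
    # Second-order (Heun-style) correction using the derivative at sigma_prev.
    derivative_corr = (sample_prev - denoise(sample_prev, sigma_prev)) / sigma_prev
    return sample_hat + (sigma_prev - sigma_hat) * 0.5 * (derivative + derivative_corr)

# One stochastic step from sigma = 10 down to sigma_prev = 5.
sample_hat, sigma_hat = add_noise_to_input(1.0, 10.0)
sample_prev, derivative = step(sample_hat, sigma_hat, 5.0)
sample_final = step_correct(sample_hat, sigma_hat, sample_prev, 5.0, derivative)
```

Note how the correction reuses sample_hat and sigma_hat as its starting point, averaging the two derivative estimates rather than stacking two Euler steps.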
Semantic Guidance |
Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Diffusion using Semantic Dimensions and provides strong semantic control over image generation.
Small changes to the text prompt usually result in entirely different output images. With SEGA, however, a variety of changes to the image can be controlled easily and intuitively while staying true to the original image composition.
The abstract of the paper is the following: |
Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA's effectiveness on a variety of tasks and provide evidence for its versatility and flexibility.
Overview:
Pipeline | Tasks | Colab | Demo
---|---|---|---
pipeline_semantic_stable_diffusion.py | Text-to-Image Generation | | Coming Soon
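At its core, SEGA adds one extra guidance term per edit concept on top of classifier-free guidance, keeping only the largest-magnitude elements of each semantic direction. The sketch below is a loose stdlib illustration of that combination step; all names and the thresholding rule here are illustrative, and the real pipeline additionally uses warmup steps and momentum:

```python
def sega_guidance(e_uncond, e_text, e_edits, guidance_scale=7.5,
                  edit_scale=5.0, edit_threshold=0.9, reverse=False):
    # Classifier-free guidance: move from the unconditional noise estimate
    # toward the text-conditioned one.
    out = [u + guidance_scale * (t - u) for u, t in zip(e_uncond, e_text)]
    sign = -1.0 if reverse else 1.0  # reverse pushes away from the concept
    for e_edit in e_edits:
        # Semantic direction for this edit concept.
        direction = [sign * (e - u) for e, u in zip(e_edit, e_uncond)]
        # Quantile thresholding: keep only the largest-magnitude elements
        # (crude for short vectors; shown for illustration only).
        mags = sorted(abs(d) for d in direction)
        cutoff = mags[int(edit_threshold * (len(mags) - 1))]
        out = [o + (edit_scale * d if abs(d) >= cutoff else 0.0)
               for o, d in zip(out, direction)]
    return out
```

Because each edit term is computed against the unconditional estimate and thresholded independently, multiple concepts can be steered at once without rewriting the prompt.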