enable_vae_tiling( )

Enable tiled VAE decoding.

When this option is enabled, the VAE splits the input tensor into tiles and decodes and encodes in several steps. This saves a large amount of memory and allows processing of larger images.
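The tiling idea can be sketched with a small helper that computes overlapping tile boxes for an image of a given size; the function name and overlap handling here are illustrative assumptions, not the diffusers implementation:

```python
def tile_boxes(height, width, tile=512, overlap=64):
    """Compute (top, left, bottom, right) boxes that cover a height x width
    image with square tiles of side `tile`, where each tile overlaps its
    neighbor by `overlap` pixels so the seams can be blended away."""
    stride = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), stride):
        for left in range(0, max(width - overlap, 1), stride):
            bottom = min(top + tile, height)
            right = min(left + tile, width)
            boxes.append((top, left, bottom, right))
    return boxes

# Each tile is decoded independently, so peak memory scales with the
# tile size instead of the full image size.
boxes = tile_boxes(1024, 1024, tile=512, overlap=64)
```

Because every tile fits in memory on its own, a 1024x1024 decode costs roughly the memory of a single 512x512 decode, at the price of a few extra forward passes.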
AltDiffusionImg2ImgPipeline |
class diffusers.AltDiffusionImg2ImgPipeline |
(
vae: AutoencoderKL |
text_encoder: RobertaSeriesModelWithTransformation |
tokenizer: XLMRobertaTokenizer |
unet: UNet2DConditionModel |
scheduler: KarrasDiffusionSchedulers |
safety_checker: StableDiffusionSafetyChecker |
feature_extractor: CLIPFeatureExtractor |
requires_safety_checker: bool = True |
) |
Parameters |
vae (AutoencoderKL) –
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (RobertaSeriesModelWithTransformation) –
Frozen text encoder. Unlike Stable Diffusion, which uses the text portion of CLIP (the clip-vit-large-patch14 variant), Alt Diffusion uses the XLM-Roberta-based text encoder of AltCLIP.
tokenizer (XLMRobertaTokenizer) –
Tokenizer of class |
XLMRobertaTokenizer. |
unet (UNet2DConditionModel) β Conditional U-Net architecture to denoise the encoded image latents. |
scheduler (SchedulerMixin) –
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of |
DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. |
safety_checker (StableDiffusionSafetyChecker) –
Classification module that estimates whether generated images could be considered offensive or harmful.
Please refer to the model card for details.
feature_extractor (CLIPFeatureExtractor) –
Model that extracts features from generated images to be used as inputs for the safety_checker. |
Pipeline for text-guided image to image generation using Alt Diffusion. |
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the |
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) |
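As a usage illustration (the checkpoint name BAAI/AltDiffusion-m9, the file names, and the preferred_size helper are assumptions made for this sketch, not part of the documented API):

```python
def preferred_size(width, height, multiple=8):
    # The VAE downsamples by a factor of 8, so input images are typically
    # resized so that both sides are multiples of 8 before encoding.
    return (width // multiple) * multiple, (height // multiple) * multiple

if __name__ == "__main__":
    import torch
    from PIL import Image
    from diffusers import AltDiffusionImg2ImgPipeline

    pipe = AltDiffusionImg2ImgPipeline.from_pretrained(
        "BAAI/AltDiffusion-m9", torch_dtype=torch.float16
    ).to("cuda")
    pipe.enable_vae_tiling()  # optional: tiled VAE decoding for large images

    init = Image.open("sketch.png").convert("RGB")
    init = init.resize(preferred_size(*init.size))

    # strength=0.8 re-noises the input fairly heavily, so the prompt
    # dominates; lower values stay closer to the input image.
    out = pipe(
        prompt="a fantasy landscape, oil painting",
        image=init,
        strength=0.8,
        guidance_scale=7.5,
    ).images[0]
    out.save("fantasy_landscape.png")
```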
__call__ |
(
prompt: typing.Union[str, typing.List[str]] = None |
image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None |
strength: float = 0.8 |
num_inference_steps: typing.Optional[int] = 50 |
guidance_scale: typing.Optional[float] = 7.5 |
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None |
num_images_per_prompt: typing.Optional[int] = 1 |
eta: typing.Optional[float] = 0.0 |
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None |
prompt_embeds: typing.Optional[torch.FloatTensor] = None |
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None |
output_type: typing.Optional[str] = 'pil' |
return_dict: bool = True |
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None |
callback_steps: int = 1 |
) |
→
~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple |
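The strength parameter in the signature above controls how much of the noise schedule actually runs: roughly int(num_inference_steps * strength) denoising steps are applied to the noised input image. A sketch of that mapping, mirroring the usual Stable Diffusion img2img timestep logic (an assumption about this pipeline's internals):

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    """Approximate number of denoising steps an img2img call actually runs.

    strength=1.0 discards the input image entirely (full schedule);
    strength=0.0 returns the input image essentially unchanged (no steps).
    """
    # Clamp so strength > 1.0 cannot exceed the configured schedule length.
    return min(int(num_inference_steps * strength), num_inference_steps)

# With the defaults (num_inference_steps=50, strength=0.8), 40 steps run.
```

This is why a low strength is both faster and more faithful to the input: fewer steps run, and the starting latent retains more of the original image.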
Parameters |