enable_model_cpu_offload
< source >
( gpu_id = 0 )
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
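A minimal usage sketch (the checkpoint name is illustrative, and accelerate must be installed):

```python
import torch
from diffusers import StableDiffusionPix2PixZeroPipeline

# Illustrative checkpoint; any Stable Diffusion checkpoint compatible with
# this pipeline should work.
pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, safety_checker=None
)

# Instead of pipe.to("cuda"): each whole model (unet, text_encoder, vae, ...) is
# moved to the GPU when its forward method is called and stays there until the
# next model runs.
pipe.enable_model_cpu_offload()
```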
enable_sequential_cpu_offload
< source >
( gpu_id = 0 )
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
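A sketch of this more aggressive variant, under the same assumptions as above:

```python
import torch
from diffusers import StableDiffusionPix2PixZeroPipeline

# Illustrative checkpoint, as in the previous sketch.
pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, safety_checker=None
)

# Submodule-level offload: state dicts stay on the CPU and each submodule is
# streamed to the GPU only for its own forward pass. Lowest memory footprint,
# slowest execution.
pipe.enable_sequential_cpu_offload()
```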
generate_caption
< source >
( images )
Generates a caption for a given image.
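generate_caption relies on the pipeline's caption_generator and caption_processor components (a BLIP captioner in the official examples). A sketch, where the checkpoint names and input file are illustrative:

```python
from PIL import Image
from diffusers import StableDiffusionPix2PixZeroPipeline
from transformers import BlipForConditionalGeneration, BlipProcessor

# BLIP captioning model; an illustrative choice.
captioner_id = "Salesforce/blip-image-captioning-base"
caption_processor = BlipProcessor.from_pretrained(captioner_id)
caption_generator = BlipForConditionalGeneration.from_pretrained(captioner_id)

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    caption_generator=caption_generator,
    caption_processor=caption_processor,
    safety_checker=None,
)

raw_image = Image.open("input.png").convert("RGB").resize((512, 512))  # illustrative file
caption = pipe.generate_caption(raw_image)
print(caption)
```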
invert
< source >
(
prompt: typing.Optional[str] = None
image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None
num_inference_steps: int = 50
guidance_scale: float = 1
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
cross_attention_guidance_amount: float = 0.1
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: typing.Optional[int] = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
lambda_auto_corr: float = 20.0
lambda_kl: float = 20.0
num_reg_steps: int = 5
num_auto_corr_rolls: int = 5
)
Parameters
prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
image (PIL.Image.Image, optional) —
Image, or tensor representing an image batch, to be used for conditioning.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 1) —
Guidance scale as defined in Classifier-Free Diffusion Guidance; guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
generator (torch.Generator or List[torch.Generator], optional) —
One or a list of torch generator(s) to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
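A sketch of a typical inversion call; the checkpoint, image file, and prompt are illustrative, and a caption produced by generate_caption is a common choice of prompt:

```python
from PIL import Image
from diffusers import DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", safety_checker=None
)
# Inversion runs the diffusion process backwards, so it needs an inverse scheduler.
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")  # assumes a CUDA device is available

raw_image = Image.open("input.png").convert("RGB").resize((512, 512))

inv = pipe.invert(
    prompt="a photo of a cat",  # illustrative; often the generated caption
    image=raw_image,
    num_inference_steps=50,
)
inv_latents = inv.latents  # inverted latents; can be passed back as `latents=` later
```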