of the drift and diffusion components of the sample update.

set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None )

Parameters

num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model.
sampling_eps (float, optional) — The final timestep value (overrides the value given during scheduler instantiation).
device (str or torch.device, optional) — The device to which the timesteps should be moved. If None, the timesteps are not moved.

Sets the continuous timesteps used for the diffusion chain (to be run before inference).

step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple

Parameters

model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.
generator (torch.Generator, optional) — A random number generator.
return_dict (bool, optional, defaults to True) — Whether or not to return a SdeVeOutput or tuple.

Returns

SdeVeOutput or tuple — If return_dict is True, SdeVeOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.

Correct the predicted sample based on the model_output of the network. This is often run repeatedly after making the prediction for the previous timestep.

step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple

Parameters

model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
timestep (int) — The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.
generator (torch.Generator, optional) — A random number generator.
return_dict (bool, optional, defaults to True) — Whether or not to return a SdeVeOutput or tuple.

Returns

SdeVeOutput or tuple — If return_dict is True, SdeVeOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).

SdeVeOutput

class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor )

Parameters

prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.
prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — Mean averaged prev_sample over previous timesteps.

Output class for the scheduler's step function output.
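Taken together, these methods form the predictor-corrector sampling loop of the variance-exploding SDE: at each timestep, step_correct is applied one or more times to refine the current sample, then step_pred advances it by reversing the SDE. The sketch below illustrates such a loop; the checkpoint name, the number of inference steps, and running on the default device are illustrative assumptions, not part of the API reference above.

import torch
from diffusers import ScoreSdeVeScheduler, UNet2DModel

# Assumed example checkpoint with "unet" and "scheduler" subfolders.
unet = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256", subfolder="unet")
scheduler = ScoreSdeVeScheduler.from_pretrained("google/ncsnpp-celebahq-256", subfolder="scheduler")

generator = torch.Generator().manual_seed(0)
shape = (1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size)
sample = torch.randn(shape, generator=generator) * scheduler.init_noise_sigma

num_inference_steps = 2000  # illustrative; VE sampling typically uses many steps
scheduler.set_timesteps(num_inference_steps)
scheduler.set_sigmas(num_inference_steps)

unet.eval()
with torch.no_grad():
    for i, t in enumerate(scheduler.timesteps):
        sigma_t = scheduler.sigmas[i] * torch.ones(shape[0])

        # Corrector: a few Langevin-style refinements of the current sample.
        for _ in range(scheduler.config.correct_steps):
            model_output = unet(sample, sigma_t).sample
            sample = scheduler.step_correct(model_output, sample, generator=generator).prev_sample

        # Predictor: reverse the SDE by one discrete timestep.
        model_output = unet(sample, sigma_t).sample
        output = scheduler.step_pred(model_output, t, sample, generator=generator)
        sample, sample_mean = output.prev_sample, output.prev_sample_mean

# The noise-free mean of the final step is usually taken as the generated image.
image = sample_mean.clamp(0, 1)

Note that the corrector uses step_correct (which only needs the sample and the score), while the predictor uses step_pred (which also needs the current timestep); the number of corrector iterations comes from the scheduler's correct_steps config value.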
AltDiffusion |
AltDiffusion was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, and Ledell Wu.
The abstract of the paper is the following: |
In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.
Overview: |
| Pipeline | Tasks | Colab | Demo |
|---|---|---|---|
| pipeline_alt_diffusion.py | Text-to-Image Generation | - | - |
| pipeline_alt_diffusion_img2img.py | Image-to-Image Text-Guided Generation | - | - |
Tips |
AltDiffusion is conceptually exactly the same as Stable Diffusion.
Run AltDiffusion |
AltDiffusion can be tested very easily with the AltDiffusionPipeline and AltDiffusionImg2ImgPipeline together with the "BAAI/AltDiffusion-m9" checkpoint, in exactly the same way as shown in the Conditional Image Generation Guide and the Image-to-Image Generation Guide; a minimal text-to-image sketch follows below.
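As a quick start, the following minimal sketch generates an image from a text prompt; the CUDA device, float16 weights, prompt, and output filename are illustrative assumptions.

>>> import torch
>>> from diffusers import AltDiffusionPipeline
>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> # The multilingual text encoder also accepts non-English prompts.
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
>>> image.save("astronaut_rides_horse.png")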
How to load and use different schedulers. |
The AltDiffusion pipeline uses the DDIMScheduler by default, but Diffusers provides many other schedulers that can be used with it, such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, etc.
To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: |
>>> from diffusers import AltDiffusionPipeline, EulerDiscreteScheduler |
>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9") |
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) |
>>> # or |
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("BAAI/AltDiffusion-m9", subfolder="scheduler") |
>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", scheduler=euler_scheduler) |
How to cover all use cases with a single or multiple pipelines
If you want to cover all possible use cases with a single set of pipeline components, we recommend using the components functionality to instantiate all pipelines in the most memory-efficient way:
>>> from diffusers import ( |
... AltDiffusionPipeline, |
... AltDiffusionImg2ImgPipeline, |
... ) |
>>> text2img = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9") |
>>> img2img = AltDiffusionImg2ImgPipeline(**text2img.components) |
>>> # now you can use text2img(...) and img2img(...) just like the call methods of each respective pipeline |
AltDiffusionPipelineOutput |
class diffusers.pipelines.alt_diffusion.AltDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )
Parameters |
images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size, or a NumPy array of shape (batch_size, height, width, num_channels), representing the denoised images produced by the diffusion pipeline.
nsfw_content_detected (List[bool], optional) — List of flags indicating whether the corresponding generated image may contain "not-safe-for-work" (nsfw) content, or None if safety checking could not be performed.
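As a hedged illustration of how these output fields are typically consumed (assuming a pipe constructed as in the text-to-image example above):

>>> output = pipe("a photo of an astronaut riding a horse on mars")
>>> images = output.images  # list of PIL.Image.Image (or a NumPy array, depending on output_type)
>>> nsfw_flags = output.nsfw_content_detected  # per-image safety-checker flags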