
Img2Img #8
by IndrasMirror - opened

Does SDXL-Lightning have its own Image2Image pipeline, or does it just work the same as the standard SDXL img2img pipe?

ByteDance org

The same SDXL pipeline should work, but I haven't looked into the settings of the SDXL img2img pipeline to make sure that everything is coded up as expected.

For the 1/2/4-step models, the best img2img strengths are 25%, 50%, or 75%.
For the 8-step model, the best img2img noise strengths are 12.5%, 25%, 37.5%, 50%, 62.5%, 75%, or 87.5%.
These settings match what the model was trained for.
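For reference, here is a minimal sketch of wiring the Lightning UNet into the stock diffusers SDXL img2img pipeline. The checkpoint names, the `from_config`/`load_state_dict` loading, and the "trailing" scheduler setting follow the SDXL-Lightning model card; the img2img specifics (input image, strength/step pairing) are my own untested assumption rather than an officially supported recipe.

```python
# Minimal sketch (assumption: the stock SDXL img2img pipeline works unchanged
# with the Lightning UNet). Checkpoint names follow the SDXL-Lightning repo.
import torch
from diffusers import (
    StableDiffusionXLImg2ImgPipeline,
    UNet2DConditionModel,
    EulerDiscreteScheduler,
)
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "ByteDance/SDXL-Lightning"
ckpt = "sdxl_lightning_4step_unet.safetensors"  # 4-step UNet

# Build a UNet from the base config and load the Lightning weights into it.
unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(hf_hub_download(repo, ckpt), device="cuda"))

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    base, unet=unet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Lightning expects "trailing" timestep spacing.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

init_image = load_image("input.png").resize((1024, 1024))  # hypothetical input file

# With the 4-step model, strength should be 0.25, 0.50, or 0.75 so denoising
# starts on a timestep the model was trained for (strength=0.5 runs 2 of 4 steps).
image = pipe(
    "a photo of a cat",
    image=init_image,
    num_inference_steps=4,
    strength=0.5,
    guidance_scale=0,
).images[0]
image.save("output.png")
```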


I got this error for img2img:
Traceback (most recent call last):
  File "D:\sd\test_sdxl_lighting_img.py", line 35, in <module>
    sd_model(
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl_img2img.py", line 1477, in __call__
    image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
  File "C:\Users\mi\miniconda3\lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 302, in decode
    decoded = self._decode(z).sample
  File "C:\Users\mi\miniconda3\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 273, in _decode
    dec = self.decoder(z)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\diffusers\models\autoencoders\vae.py", line 333, in forward
    sample = self.mid_block(sample, latent_embeds)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 660, in forward
    hidden_states = self.resnets[0](hidden_states, temb)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\diffusers\models\resnet.py", line 338, in forward
    hidden_states = self.nonlinearity(hidden_states)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\modules\activation.py", line 393, in forward
    return F.silu(input, inplace=self.inplace)
  File "C:\Users\mi\miniconda3\lib\site-packages\torch\nn\functional.py", line 2072, in silu
    return torch._C._nn.silu(input)
TypeError: silu(): argument 'input' (position 1) must be Tensor, not NoneType

Wow, it turns out that with my VAE, the strength needs to be equal to or larger than 0.5.
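A possibly related detail (my assumption, not confirmed from the traceback above): diffusers' img2img derives the number of denoising steps it actually runs from the strength, roughly as in the sketch below, so with a few-step Lightning checkpoint a too-small strength can round down to zero real steps.

```python
# Hedged sketch of how diffusers' img2img trims the schedule by strength
# (mirrors the get_timesteps logic in the SDXL img2img pipeline; the link to
# the silu()/NoneType error above is an assumption, not a confirmed diagnosis).
def effective_denoising_steps(num_inference_steps: int, strength: float) -> int:
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    return init_timestep  # 0 means no denoising step would actually run

# e.g. with a 2-step model: strength 0.4 -> 0 steps, strength 0.5 -> 1 step
print(effective_denoising_steps(2, 0.4), effective_denoising_steps(2, 0.5))
```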
