This is a copy of the SSD-1B model (https://huggingface.co/segmind/SSD-1B) with the UNet replaced by the LCM-distilled UNet (https://huggingface.co/latent-consistency/lcm-ssd-1b) and the scheduler config set to default to the LCM scheduler.
This lets LCM SSD-1B run as a standard DiffusionPipeline:
```python
import torch
from diffusers import DiffusionPipeline

# Load the combined model; the fp16 variant keeps memory usage down.
pipe = DiffusionPipeline.from_pretrained(
    "Vargol/lcm-ssd-1b-full-model", variant="fp16", torch_dtype=torch.float16
).to("mps")  # Apple Silicon; use "cuda" on an NVIDIA GPU

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
generator = torch.manual_seed(0)

# LCM needs only a handful of inference steps.
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]
image.save("distilled.png")
```
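
For reference, an equivalent pipeline can be assembled by hand from the two source repositories using the standard diffusers API. This is a minimal sketch, not the exact script used to build this repo; the device and dtype choices are illustrative.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler, UNet2DConditionModel

# Load the LCM-distilled UNet on its own...
unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-ssd-1b", variant="fp16", torch_dtype=torch.float16
)

# ...drop it into the SSD-1B pipeline, then switch to the LCM scheduler.
pipe = DiffusionPipeline.from_pretrained(
    "segmind/SSD-1B", unet=unet, variant="fp16", torch_dtype=torch.float16
).to("mps")  # or "cuda"
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
```

This repository simply ships the result of that combination as a single model, so the swap and scheduler change are not needed at load time.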