# 🍰 Tiny AutoEncoder for Stable Diffusion

TAESD is a very tiny autoencoder that uses the same "latent API" as Stable Diffusion's VAE. TAESD is useful for real-time previewing of the SD generation process.

This repo contains .safetensors versions of the TAESD weights.

For SDXL, use TAESDXL instead (the SD and SDXL VAEs are incompatible).
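
Because TAESD exposes the same encode/decode interface as the full SD VAE, it can also be used on its own to round-trip an image through the SD latent space. Below is a minimal sketch, not part of the original card: the input file name and CUDA device are placeholders, and `VaeImageProcessor` is just one convenient way to handle the [-1, 1] pixel convention.

```python
import torch
from diffusers import AutoencoderTiny
from diffusers.image_processor import VaeImageProcessor
from diffusers.utils import load_image

taesd = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16).to("cuda")
processor = VaeImageProcessor(vae_scale_factor=8)  # SD VAEs downsample 8x spatially

image = load_image("input.png")  # placeholder path: any RGB image
pixels = processor.preprocess(image).to("cuda", torch.float16)  # NCHW tensor in [-1, 1]

with torch.no_grad():
    latents = taesd.encode(pixels).latents  # same 4-channel latent layout as the SD VAE
    decoded = taesd.decode(latents).sample  # back to pixels in [-1, 1]

processor.postprocess(decoded)[0].save("roundtrip.png")
```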

## Using in 🧨 diffusers

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
# Swap the pipeline's full-size VAE for TAESD; both share the same latent space.
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style cheesecake topped with berries, mint, chocolate crumble"
image = pipe(prompt, num_inference_steps=50, generator=torch.Generator("cpu").manual_seed(0x7A35D)).images[0]
image.save("cheesecake.png")
```
