🧨 Diffusion Models Class - Unit 1: Unconditional Image Generation
This is a diffusion-based generative model trained for unconditional image generation, released as part of the Diffusion Models Class by Hugging Face.
This model learns to generate images from random noise using the Denoising Diffusion Probabilistic Model (DDPM) framework. It was trained on a toy dataset of cute butterfly 🦋 images.
Model Details
- Model type: Denoising Diffusion Probabilistic Model (DDPM)
- Library: 🤗 Diffusers
- Framework: PyTorch
- Training objective: predict the noise (ε) added in the forward process
- Usage: Unconditional image generation (no text prompt required)
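The ε-prediction objective listed above can be sketched in a few lines of NumPy. This is an illustrative toy, not the training code for this model: the linear β schedule follows the DDPM paper's commonly used values, and a zero predictor stands in for the trained U-Net.

```python
import numpy as np

# Linear beta schedule (illustrative values from the DDPM paper).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)  # a_bar_t, monotonically decreasing

def add_noise(x0, t, noise):
    """Forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise."""
    a_bar = alphas_cumprod[t]
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise

# One conceptual training step: noise a clean image at a random timestep,
# have the model predict the noise, and penalize with a simple MSE loss.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 32, 32))   # a "clean" image (C, H, W)
noise = rng.standard_normal(x0.shape)   # the true epsilon
t = 500
x_t = add_noise(x0, t, noise)

predicted_noise = np.zeros_like(noise)  # hypothetical stand-in for the U-Net
loss = np.mean((predicted_noise - noise) ** 2)
```

In actual training, `predicted_noise` comes from a U-Net conditioned on `x_t` and `t`, and the loss is backpropagated through the network.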
Example Usage
```python
from diffusers import DDPMPipeline

# Load the pretrained unconditional DDPM pipeline from the Hub.
pipeline = DDPMPipeline.from_pretrained("larryliu002/sd-class-butterflies-32")

# Sample a single image from random noise (no prompt needed).
image = pipeline().images[0]
image.show()  # or display(image) in a notebook
```
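Under the hood, calling the pipeline runs the DDPM reverse process: it starts from pure Gaussian noise and iteratively denoises over the timesteps. The loop below is a minimal NumPy sketch of that ancestral sampling rule, with a hypothetical zero predictor standing in for the trained U-Net and the same illustrative β schedule as the DDPM paper.

```python
import numpy as np

# Illustrative noise schedule (same linear betas as in the DDPM paper).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)

def predict_noise(x_t, t):
    # Hypothetical stand-in for the trained U-Net's epsilon prediction.
    return np.zeros_like(x_t)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 32, 32))  # start from pure Gaussian noise
for t in range(T - 1, -1, -1):
    eps = predict_noise(x, t)
    # Posterior mean: (x_t - beta_t / sqrt(1 - a_bar_t) * eps) / sqrt(alpha_t)
    mean = (x - betas[t] / np.sqrt(1.0 - alphas_cumprod[t]) * eps) / np.sqrt(alphas[t])
    z = rng.standard_normal(x.shape) if t > 0 else 0.0  # no noise at the last step
    x = mean + np.sqrt(betas[t]) * z  # using sigma_t^2 = beta_t
sample = x
```

With a real U-Net in place of `predict_noise`, `sample` would be a generated image; the Diffusers pipeline also handles scaling to pixel range and PIL conversion.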
Note: the HF Inference API does not support unconditional-image-generation models from the `diffusers` library, so this model must be run locally as shown in the example above.