🧨 Diffusion Models Class - Unit 1: Unconditional Image Generation

This is a diffusion-based generative model trained for unconditional image generation, released as part of the Diffusion Models Class by Hugging Face.

This model learns to generate images from random noise via the Denoising Diffusion Probabilistic Model (DDPM) framework. It was trained on a toy dataset of cute 🦋 (butterfly) images.


📦 Model Details

  • Model type: Denoising Diffusion Probabilistic Model (DDPM)
  • Library: 🤗 Diffusers
  • Framework: PyTorch
  • Training objective: Predict noise (Ξ΅) added in the forward process
  • Usage: Unconditional image generation (no text prompt required)
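The training objective above (predicting the noise ε added in the forward process) can be sketched in a few lines. This is a minimal NumPy illustration under the standard DDPM formulation, not the actual training code; the linear beta schedule and the zero "prediction" standing in for the UNet are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule over T timesteps, as in the original DDPM paper.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def add_noise(x0, eps, t):
    """Closed-form forward process: x_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eps."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

# A fake batch of "clean images" (2 RGB images at 32x32), sampled noise, and a timestep.
x0 = rng.standard_normal((2, 3, 32, 32))
eps = rng.standard_normal(x0.shape)
t = int(rng.integers(0, T))

xt = add_noise(x0, eps, t)

# Training minimizes the MSE between the true noise and the model's prediction;
# here a zero "prediction" stands in for the network.
eps_pred = np.zeros_like(eps)
loss = np.mean((eps - eps_pred) ** 2)
```

At each training step the model sees `xt` and `t` and is asked to recover `eps`; averaging this loss over random timesteps is the whole DDPM objective.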

📸 Example Usage

```python
from diffusers import DDPMPipeline

# Load the pretrained unconditional pipeline from the Hugging Face Hub
pipeline = DDPMPipeline.from_pretrained('larryliu002/sd-class-butterflies-32')

# Generate one image -- no prompt is needed
image = pipeline().images[0]
image.show()  # or display(image) in notebooks
```
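Internally, the pipeline starts from pure Gaussian noise and repeatedly applies the DDPM reverse update. A minimal NumPy sketch of a single denoising step (with a dummy noise prediction standing in for the trained UNet; all names here are illustrative, not the diffusers internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same linear beta schedule as in training.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_step(xt, eps_pred, t):
    """One DDPM reverse step:
    x_{t-1} = (xt - (1 - a_t)/sqrt(1 - abar_t) * eps_pred) / sqrt(a_t) + sigma_t * z
    """
    a_t, ab_t = alphas[t], alpha_bars[t]
    mean = (xt - (1.0 - a_t) / np.sqrt(1.0 - ab_t) * eps_pred) / np.sqrt(a_t)
    if t == 0:
        return mean  # no noise is added at the final step
    sigma_t = np.sqrt(betas[t])
    return mean + sigma_t * rng.standard_normal(xt.shape)

# Start from pure noise (one 3x32x32 image) and take a single step with a
# dummy zero prediction; a trained UNet would supply eps_pred at every t.
xt = rng.standard_normal((1, 3, 32, 32))
eps_pred = np.zeros_like(xt)
x_prev = reverse_step(xt, eps_pred, T - 1)
```

Running this loop from `t = T - 1` down to `t = 0` with a real model is what `pipeline()` does for you.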