bokyeong1015 committed · Commit 42d2d5d · 1 Parent(s): f48e6fc

update Demo Env and Acknowledgments

Files changed (1): docs/description.md (+5 −2)
@@ -16,11 +16,14 @@ This demo showcases a lightweight Stable Diffusion model (SDM) for general-purpose
 - For different images with the same prompt, please change _Random Seed_ in Advanced Settings (because of using the firstly sampled latent code per seed).
 
 ### Acknowledgments
-- We thank [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) for supporting this research.
+- We thank [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) for generously providing the Azure credits used during pretraining.
+- We appreciate the pioneering research on Latent/Stable Diffusion conducted by [CompVis](https://github.com/CompVis/latent-diffusion), [Runway](https://runwayml.com/), and [Stability AI](https://stability.ai/).
+- Special thanks to the contributors to [LAION](https://laion.ai/), [Diffusers](https://github.com/huggingface/diffusers), and [Gradio](https://www.gradio.app/) for their valuable support.
 - Some demo codes were borrowed from the repo of Stability AI ([stabilityai/stable-diffusion](https://huggingface.co/spaces/stabilityai/stable-diffusion)) and AK ([akhaliq/small-stable-diffusion-v0](https://huggingface.co/spaces/akhaliq/small-stable-diffusion-v0)). Thanks!
 
 ### Demo Environment
 - Regardless of machine types, our compressed model achieves speedups while preserving visually compelling results.
+- [July/27/2023] **NVIDIA T4-small** (4 vCPU · 15 GB RAM · 16GB VRAM) — 5~10 sec inference of the original SDM (for a 512×512 image with 25 denoising steps).
 - [June/30/2023] **Free CPU-basic** (2 vCPU · 16 GB RAM) — 7~10 min slow inference of the original SDM.
   - Because free CPU resources are dynamically allocated with other demos, it may take much longer, depending on the server situation.
-- [May/31/2023] **NVIDIA T4-small** (4 vCPU · 15 GB RAM · 16GB VRAM) — 5~10 sec inference of the original SDM (for a 512×512 image with 25 denoising steps).
+- [May/31/2023] **NVIDIA T4-small**
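
The _Random Seed_ note in the diff (the demo reuses the first latent code sampled per seed) can be sketched as follows. This is a minimal illustration, not the demo's actual code; it assumes the standard Stable Diffusion latent shape of 4×64×64 for a 512×512 image (VAE downsampling factor 8), and `initial_latent` is a hypothetical helper name:

```python
import torch

def initial_latent(seed: int, channels: int = 4, height: int = 64, width: int = 64) -> torch.Tensor:
    # Sample the initial latent code from a seeded generator. Because the
    # demo reuses the first latent drawn per seed, a fixed seed yields the
    # same latent and hence the same image for a given prompt; changing the
    # seed is what produces a different image.
    g = torch.Generator().manual_seed(seed)
    return torch.randn(1, channels, height, width, generator=g)

a = initial_latent(42)
b = initial_latent(42)
c = initial_latent(43)
assert torch.equal(a, b)      # same seed -> identical latent
assert not torch.equal(a, c)  # different seed -> different latent
```

In the full pipeline this deterministic latent would be passed through the 25 denoising steps mentioned above, which is why only the seed needs to change to vary the output.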