dome272 committed · Commit 739bb8d · Parent(s): 03e2e8a

Update README.md

Files changed (1): README.md (+95 -18)

README.md CHANGED
@@ -1,25 +1,102 @@
---
license: mit
---

- ## How to run

- **Note**: This is only a single prior model checkpoint and has to be run with https://huggingface.co/warp-diffusion/wuerstchen

- ```python
import torch
- from diffusers import AutoPipelineForText2Image
- from diffusers.pipelines.wuerstchen import WuerstchenPrior
-
- prior_model = WuerstchenPrior.from_pretrained("warp-diffusion/wuerstchen-prior-model-base", torch_dtype=torch.float16)
- pipe = AutoPipelineForText2Image.from_pretrained("warp-diffusion/wuerstchen", prior_prior=prior_model, torch_dtype=torch.float16).to("cuda")
-
- prompt = [
-     "An old destroyed car standing on a cliff in norway, cinematic photography",
-     "Western movie, closeup cinematic photography",
-     "Pink nike shoe commercial, closeup cinematic photography",
-     "Croatia, closeup cinematic photography",
-     "South Tyrol mountains at sunset, closeup cinematic photography",
- ]
- images = pipe(prompt, guidance_scale=8.0, width=1024, height=1024).images
- ```
---
license: mit
---
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>
+
+ ## Würstchen - Overview
+ Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
+ computational costs for both training and inference by orders of magnitude. Training on 1024x1024 images is far more expensive than training on 32x32 images. Usually, other works make
+ use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial
+ compression. This was previously unseen, because common methods already fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a
+ two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
+ A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used for current top-performing models, which
+ also allows for cheaper and faster inference.
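
To make the cost argument concrete, here is a rough back-of-the-envelope sketch of how the spatial compression factor shrinks the latent grid the diffusion model works on. The grid sizes are illustrative approximations, not figures from the paper:

```py
# Approximate latent grid sizes at a 1024x1024 training resolution.
# The compression factors are those mentioned above; exact latent shapes
# depend on the architecture and are an assumption of this sketch.
for factor in (4, 8, 16, 42):
    side = 1024 // factor
    print(f"{factor}x compression -> ~{side}x{side} latent grid ({side * side} positions)")
# At ~42x, Stage C operates on roughly 24x24 positions instead of the
# 128x128 (8x) or 256x256 (4x) grids of typical latent diffusion models.
```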
+
+ ## Würstchen - Prior
+ The Prior is what we refer to as "Stage C". It is the text-conditional model, operating in the small latent space that Stage A and Stage B encode images into. During
+ inference, its job is to generate the image latents given text. These image latents are then sent to Stages A & B, which decode them into pixel space.
+
+ ### Prior - Model - Base
+ This is the base checkpoint for the Prior (Stage C), meaning it has only been pretrained and mostly generates generic images. We recommend using the [interpolated model](https://huggingface.co/warp-ai/wuerstchen-prior-model-interpolated),
+ as it is our best checkpoint for the Prior (Stage C), having been finetuned on a curated dataset. However, this base checkpoint is the better choice if you want to finetune Würstchen
+ on your own large dataset, as the other checkpoints are already biased towards being more artistic. This checkpoint should provide a fairly neutral baseline to finetune
+ from, as long as your dataset is rather large.
+
+ **Note:** This checkpoint was also trained on multiple aspect ratios, meaning you can generate images larger than 1024x1024. Sometimes generations up to 2048x2048
+ even work. Feel free to try it out!
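
If you just want the best image quality rather than a finetuning baseline, swapping in the recommended interpolated checkpoint is a one-line change. A minimal sketch, reusing the setup from the "How to run" section below (only the repo id differs):

```py
# Load the recommended interpolated checkpoint instead of this base one.
prior = WuerstchenPrior.from_pretrained(
    "warp-ai/wuerstchen-prior-model-interpolated", torch_dtype=torch.float16
).to("cuda")
```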
+
+ ### Image Sizes
+ Würstchen was trained on image resolutions between 1024x1024 and 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048; feel free to try them out.
+ We also observed that the Prior (Stage C) adapts extremely fast to new resolutions, so finetuning it at 2048x2048 should be computationally cheap.
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/IfVsUDcP15OY-5wyLYKnQ.jpeg" width=1000>
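
As a concrete example, a non-square generation with the prior pipeline from the "How to run" section below might look like this (1024x2048 is one of the resolutions mentioned above; the prompt is illustrative):

```py
# 2:1 landscape generation; pass the desired resolution to the prior (Stage C).
prior_output = prior_pipeline(
    prompt="A snowy mountain range at dawn, cinematic photography",
    height=1024,
    width=2048,
    timesteps=DEFAULT_STAGE_C_TIMESTEPS,
    guidance_scale=4.0,
)
```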
+
+ ## How to run
+ This pipeline should be run together with https://huggingface.co/warp-ai/wuerstchen:
+
+ ```py
import torch
+ from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
+ from diffusers.pipelines.wuerstchen import WuerstchenPrior, DEFAULT_STAGE_C_TIMESTEPS
+
+ device = "cuda"
+ dtype = torch.float16
+ num_images_per_prompt = 2
+
+ # Load this checkpoint as the Prior (Stage C) and plug it into the prior pipeline.
+ prior = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-base", torch_dtype=dtype).to(device)
+ prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
+     "warp-ai/wuerstchen-prior", prior=prior, torch_dtype=dtype
+ ).to(device)
+ # The decoder pipeline bundles Stages A & B, which turn latents into pixels.
+ decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
+     "warp-ai/wuerstchen", torch_dtype=dtype
+ ).to(device)
+
+ caption = "Anthropomorphic cat dressed as a fire fighter"
+ negative_prompt = ""
+
+ # Stage C: generate compressed image latents from the text prompt.
+ prior_output = prior_pipeline(
+     prompt=caption,
+     height=1024,
+     width=1024,
+     timesteps=DEFAULT_STAGE_C_TIMESTEPS,
+     negative_prompt=negative_prompt,
+     guidance_scale=4.0,
+     num_images_per_prompt=num_images_per_prompt,
+ )
+ # Stages A & B: decode the latents into PIL images.
+ decoder_output = decoder_pipeline(
+     image_embeddings=prior_output.image_embeddings,
+     prompt=caption,
+     negative_prompt=negative_prompt,
+     num_images_per_prompt=num_images_per_prompt,
+     guidance_scale=0.0,
+     output_type="pil",
+ ).images
+ ```
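
`decoder_output` is then a plain list of PIL images, so saving them is straightforward (the file name pattern here is just an example):

```py
# Save each generated image; the path pattern is illustrative.
for i, image in enumerate(decoder_output):
    image.save(f"wuerstchen_{i}.png")
```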
+
+ ## Model Details
+ - **Developed by:** Pablo Pernias, Dominic Rampas
+ - **Model type:** Diffusion-based text-to-image generation model
+ - **Language(s):** English
+ - **License:** MIT
+ - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
+ - **Resources for more information:** [GitHub Repository](https://github.com/dome272/Wuerstchen), [Paper](https://arxiv.org/abs/2306.00637).
+ - **Cite as:**
+
+       @misc{pernias2023wuerstchen,
+             title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
+             author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
+             year={2023},
+             eprint={2306.00637},
+             archivePrefix={arXiv},
+             primaryClass={cs.CV}
+       }
+
+ ## Environmental Impact
+
+ **Würstchen v2 - Estimated Emissions**
+ Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
+
+ - **Hardware Type:** A100 PCIe 40GB
+ - **Hours used:** 24602
+ - **Cloud Provider:** AWS
+ - **Compute Region:** US-east
+ - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 2275.68 kg CO2 eq.
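
The reported figure is consistent with the calculator's simple formula. A quick check as a sketch, where the ~250 W board power of the A100 PCIe 40GB and the ~0.37 kg CO2eq/kWh grid intensity for US-east are assumptions of this sketch rather than values stated above:

```py
# Carbon emitted = power consumption x time x grid carbon intensity.
power_kw = 0.250        # assumed A100 PCIe 40GB board power, in kW
hours = 24602           # hours used, from the list above
kg_co2_per_kwh = 0.37   # assumed carbon intensity of the US-east grid
print(power_kw * hours * kg_co2_per_kwh)  # -> 2275.685, i.e. ~2275.68 kg CO2 eq.
```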