vivjay30 committed on
Commit
a912f28
·
1 Parent(s): 89d5874

fix readme

Files changed (1):
  1. README.md +11 -66
README.md CHANGED
@@ -1,66 +1,11 @@
- # Linearly Constrained Diffusion Implicit Models
- ![CDIM teaser](Teaser.jpg)
-
- ### Authors
- [Vivek Jayaram](http://www.vivekjayaram.com/), [John Thickstun](https://johnthickstun.com/), [Ira Kemelmacher-Shlizerman](https://homes.cs.washington.edu/~kemelmi/), and [Steve Seitz](https://homes.cs.washington.edu/~seitz/)
-
- ### Links
- [[Gradio Demo]](https://huggingface.co/spaces/vivjay30/cdim) [[Project Page]](https://grail.cs.washington.edu/projects/cdim/) [[Paper]](https://arxiv.org/abs/2411.00359)
-
- ### Summary
- We solve noisy linear inverse problems with diffusion models. The method is fast and addresses many problems, including inpainting, super-resolution, Gaussian deblurring, and Poisson noise.
-
-
- ## Getting started
-
- Recommended environment: Python 3.11, CUDA 12, Conda. For lower versions, please adjust the dependencies below.
-
- ### 1) Clone the repository
-
- ```
- git clone https://github.com/vivjay30/cdim
- cd cdim
- ```
-
- ### 2) Install dependencies
-
- ```
- conda create -n cdim python=3.11
- conda activate cdim
- pip install -r requirements.txt
- pip install torch==2.4.1+cu124 torchvision==0.19.1+cu124 --extra-index-url https://download.pytorch.org/whl/cu124
- ```
-
- ## Inference Examples
-
- (The underlying diffusion models are downloaded automatically on the first run.)
-
- #### CelebA-HQ Inpainting Example (T'=25 Denoising Steps)
-
- `python inference.py sample_images/celebhq/00001.jpg 25 operator_configs/box_inpainting_config.yaml noise_configs/gaussian_noise_config.yaml google/ddpm-celebahq-256`
-
- #### LSUN Churches Gaussian Deblur Example (T'=25 Denoising Steps)
- `python inference.py sample_images/lsun_church.png 25 operator_configs/gaussian_blur_config.yaml noise_configs/gaussian_noise_config.yaml google/ddpm-church-256`
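As a rough illustration of what a box-inpainting measurement operator does — this is a minimal sketch, not the repository's implementation, and the function name and arguments are hypothetical — the observation y = A(x) simply zeroes out a rectangular region of the image:

```python
# Illustrative sketch of a box-inpainting operator y = A(x).
# NOT the repository's code; names and shapes are hypothetical.

def box_inpaint(image, top, left, height, width):
    """Zero out a (height x width) box in a 2D image (list of lists of floats)."""
    out = [row[:] for row in image]          # copy so the input is untouched
    for r in range(top, top + height):
        for c in range(left, left + width):
            out[r][c] = 0.0                  # masked pixels carry no information
    return out

# A 4x4 "image" of ones with a 2x2 box removed from the center.
x = [[1.0] * 4 for _ in range(4)]
y = box_inpaint(x, top=1, left=1, height=2, width=2)
```

The inverse problem is then to recover x given y and the known mask, with the diffusion model supplying the prior over the missing region.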
-
-
- ## FFHQ and ImageNet Models
- These models are generally not as strong as the Google DDPM models, but are used for comparisons with baseline methods.
-
- From [this link](https://drive.google.com/drive/folders/1jElnRoFv7b31fG0v6pTSQkelbSX3xGZh?usp=sharing), download the checkpoints "ffhq_10m.pt" and "imagenet_256.pt" to models/
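A quick sanity check that the downloaded checkpoints landed in models/ might look like this (a minimal sketch; the helper name is ours, not part of the repository):

```python
from pathlib import Path

def missing_checkpoints(model_dir, names=("ffhq_10m.pt", "imagenet_256.pt")):
    """Return the checkpoint filenames not yet present in model_dir."""
    model_dir = Path(model_dir)
    return [n for n in names if not (model_dir / n).is_file()]

# Example: missing_checkpoints("models") lists whichever of the two
# checkpoints still needs to be downloaded.
```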
-
- #### ImageNet Super-Resolution Example
- Here we set T'=50 to show the algorithm running more slowly.
- `python inference.py sample_images/imagenet_val_00002.png 50 operator_configs/super_resolution_config.yaml noise_configs/gaussian_noise_config.yaml models/imagenet_model_config.yaml`
-
- #### FFHQ Random Inpainting (Faster)
- Here we set T'=10 to show the algorithm running faster.
- `python inference.py sample_images/ffhq_00010.png 10 operator_configs/random_inpainting_config.yaml noise_configs/gaussian_noise_config.yaml models/ffhq_model_config.yaml`
-
- #### A Note on Exact Recovery
- If you set the measurement noise to 0 in gaussian_noise_config.yaml, the recovered image should match the observation y exactly (e.g., inpainting shouldn't change observed pixels). In practice, this doesn't happen because the diffusion schedule sets $\overline{\alpha}_0 = 0.999$ for numerical stability, meaning a tiny amount of noise is injected even at t=0.
-
-
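To put a number on that residual noise: under the standard DDPM forward process $x_t = \sqrt{\overline{\alpha}_t}\,x_0 + \sqrt{1-\overline{\alpha}_t}\,\epsilon$, setting $\overline{\alpha}_0 = 0.999$ leaves noise with standard deviation $\sqrt{1 - 0.999} \approx 0.032$ mixed into the final image:

```python
import math

alpha_bar_0 = 0.999                      # final step of the diffusion schedule
signal_scale = math.sqrt(alpha_bar_0)    # coefficient on the clean image x_0
noise_std = math.sqrt(1 - alpha_bar_0)   # std of the noise still present at t=0

print(f"signal scale: {signal_scale:.4f}")        # prints 0.9995
print(f"residual noise std: {noise_std:.4f}")     # prints 0.0316
```

So even with zero measurement noise, roughly 3% of the output's scale is injected noise, which is why observed pixels are not reproduced bit-exactly.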
 
+ ---
+ title: CDIM
+ emoji: 😃
+ colorFrom: purple
+ colorTo: blue
+ sdk: gradio
+ sdk_version: 5.1.0
+ app_file: app.py
+ pinned: true
+ arxiv: https://arxiv.org/abs/2411.00359
+ ---