zhangap committed c4b30ae (verified) · Parent(s): 20e1d75

Update README.md

Files changed (1): README.md (+57 -3)

README.md (updated):

---
license: apache-2.0
pipeline_tag: image-to-image
---

# S3Diff Model Card

This model card focuses on the models associated with S3Diff, available [here](https://github.com/ArcticHare105/S3Diff).

## Model Details

- **Developed by:** Aiping Zhang
- **Model type:** Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors
- **Model Description:** This is the model used in the [paper](https://arxiv.org/abs/2409.17058).
- **Resources for more information:** [GitHub Repository](https://github.com/ArcticHare105/S3Diff).
- **Cite as:**

      @article{2024s3diff,
        author  = {Aiping Zhang and Zongsheng Yue and Renjing Pei and Wenqi Ren and Xiaochun Cao},
        title   = {Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors},
        journal = {arXiv preprint arXiv:2409.17058},
        year    = {2024},
      }

## Limitations and Bias

### Limitations

- S3Diff requires a tiled operation to generate high-resolution images, which considerably increases inference time (a rough sketch of such tiling follows this list).
- S3Diff sometimes cannot maintain 100% fidelity to the input due to its generative nature.
- S3Diff sometimes cannot generate perfect details in complex real-world scenarios.

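The tiled operation mentioned above is why inference slows down at large resolutions: the model must be run once per tile and the outputs stitched back together. Below is a minimal, hypothetical sketch of such tiling, assuming a 4x super-resolution callable named `model`; it is not the repository's actual implementation.

```python
# Hypothetical illustration of tiled inference; `model` is assumed to be a callable
# that upscales a (B, 3, h, w) tensor by `scale`. Not the S3Diff repo's actual code.
import torch

def tiled_sr(model, lr_img, tile=128, overlap=16, scale=4):
    _, _, h, w = lr_img.shape
    out = torch.zeros(lr_img.shape[0], 3, h * scale, w * scale)
    weight = torch.zeros_like(out)
    stride = tile - overlap
    for top in range(0, h, stride):
        for left in range(0, w, stride):
            bottom, right = min(top + tile, h), min(left + tile, w)
            # One forward pass per tile: this is what makes large images slow.
            sr_tile = model(lr_img[:, :, top:bottom, left:right])
            out[:, :, top * scale:bottom * scale, left * scale:right * scale] += sr_tile
            weight[:, :, top * scale:bottom * scale, left * scale:right * scale] += 1
    return out / weight  # average the overlapping regions
```
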
### Bias

While our model builds on the pre-trained SD-Turbo model, we currently do not observe obvious bias in the generated results.
We conjecture that the main reason is that our model is conditioned on low-resolution images rather than text prompts.
Such a strong condition makes the model less likely to be affected.

## Training

**Training Data**

The model developers used the following data to train the model:

- Our model is fine-tuned on [LSDIR](https://data.vision.ee.ethz.ch/yawli/index.html) plus 100K samples from the FFHQ dataset.

**Training Procedure**

S3Diff is an image super-resolution model fine-tuned from [SD-Turbo](https://huggingface.co/stabilityai/sd-turbo), further equipped with a degradation-guided LoRA and online negative prompting.

- Following SD-Turbo, images are encoded through the fixed autoencoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of f = 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4.
- The LR images are fed to the degradation estimation network, trained by [mm-realsr](https://github.com/TencentARC/MM-RealSR), to predict degradation scores.
- We only inject LoRA layers into the VAE encoder and the UNet.
- The total loss combines an L2 loss, an LPIPS loss, and a GAN loss (see the sketch after this list).

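To make the bullets above concrete, here is a minimal, hedged sketch of one training step. It assumes `torch`, `diffusers`, and `lpips` are installed; `degradation_net`, `s3diff_unet`, `discriminator`, and the loss weights are placeholders rather than the repository's actual API.

```python
# Illustrative sketch only (assumptions, not the repo's actual training code).
import torch
import torch.nn.functional as F
import lpips
from diffusers import AutoencoderKL

# Fixed SD-Turbo autoencoder: 3-channel images -> 4-channel latents at 1/8 resolution.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-turbo", subfolder="vae")
lpips_fn = lpips.LPIPS(net="vgg")

def training_step(lr_img, hr_img, degradation_net, s3diff_unet, discriminator):
    # lr_img / hr_img: (B, 3, H, W) in [-1, 1]; lr_img is assumed pre-upsampled to H x W.
    # The VAE maps them to (B, 4, H/8, W/8) latents (downsampling factor f = 8).
    z_lr = vae.encode(lr_img).latent_dist.sample() * vae.config.scaling_factor

    # Degradation scores from the estimation network guide the LoRA layers.
    deg_scores = degradation_net(lr_img)

    # One-step restoration in latent space, then decode back to pixel space.
    z_sr = s3diff_unet(z_lr, deg_scores)
    sr_img = vae.decode(z_sr / vae.config.scaling_factor).sample

    # Total loss: L2 + LPIPS + GAN; the weights here are placeholders.
    l2 = F.mse_loss(sr_img, hr_img)
    perceptual = lpips_fn(sr_img, hr_img).mean()
    gan = F.softplus(-discriminator(sr_img)).mean()  # non-saturating generator loss
    return l2 + 1.0 * perceptual + 0.1 * gan
```
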
We currently provide the following checkpoints:

- [s3diff.pkl](https://huggingface.co/Iceclear/StableSR/resolve/main/stablesr_000117.ckpt): S3Diff fine-tuned from [SD-Turbo](https://huggingface.co/stabilityai/sd-turbo) for 30k iterations.
- [de_net.pth](https://huggingface.co/Iceclear/StableSR/resolve/main/stablesr_000117.ckpt): the degradation estimation network, extracted from [mm-realsr](https://github.com/TencentARC/MM-RealSR).

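For convenience, here is a hedged example of fetching these files with `huggingface_hub`; the `repo_id` below is an assumption and should be replaced with this model repository's actual id.

```python
# Hedged example; the repo_id is an assumption, substitute the actual model repo id.
from huggingface_hub import hf_hub_download

s3diff_path = hf_hub_download(repo_id="zhangap/S3Diff", filename="s3diff.pkl")
de_net_path = hf_hub_download(repo_id="zhangap/S3Diff", filename="de_net.pth")
print(s3diff_path, de_net_path)
```
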
## Evaluation Results

See the [Paper](https://arxiv.org/abs/2409.17058) for details.