onkarsus13 commited on
Commit
95d4b52
·
verified ·
1 Parent(s): a8439b3

Update README.md

Files changed (1)
  1. README.md +22 -3
README.md CHANGED
@@ -1,3 +1,22 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ ---
+
+ # This is the official Weights Repository for ClarifyNet
+
+ Please cite our work if you use these weights:
+
+ ```bibtex
+ @article{SUSLADKAR2022102736,
+   title = {ClarifyNet: A high-pass and low-pass filtering based CNN for single image dehazing},
+   journal = {Journal of Systems Architecture},
+   volume = {132},
+   pages = {102736},
+   year = {2022},
+   issn = {1383-7621},
+   doi = {10.1016/j.sysarc.2022.102736},
+   url = {https://www.sciencedirect.com/science/article/pii/S1383762122002211},
+   author = {Onkar Susladkar and Gayatri Deshmukh and Subhrajit Nag and Ananya Mantravadi and Dhruv Makwana and Sujitha Ravichandran and Sai Chandra Teja R and Gajanan H Chavhan and C Krishna Mohan and Sparsh Mittal},
+   keywords = {Single-image dehazing, Convolutional neural network, Encoder–decoder architecture, Attention, Low-pass filter, High-pass filter},
+   abstract = {Dehazing refers to removing the haze and restoring the details from hazy images. In this paper, we propose ClarifyNet, a novel, end-to-end trainable, convolutional neural network architecture for single image dehazing. We note that a high-pass filter detects sharp edges, texture, and other fine details in the image, whereas a low-pass filter detects color and contrast information. Based on this observation, our key idea is to train ClarifyNet on ground-truth haze-free images, low-pass filtered images, and high-pass filtered images. Based on this observation, we present a shared-encoder multi-decoder model ClarifyNet which employs interconnected parallelization. While training, ground-truth haze-free images, low-pass filtered images, and high-pass filtered images undergo multi-stage filter fusion and attention. By utilizing a weighted loss function composed of SSIM loss and L1 loss, we extract and propagate complementary features. We comprehensively evaluate ClarifyNet on I-HAZE, O-HAZE, Dense-Haze, NH-HAZE, SOTS-Indoor, SOTS-Outdoor, HSTS, and Middlebury datasets. We use PSNR and SSIM metrics and compare the results with previous works. For most datasets, ClarifyNet provides the highest scores. On using EfficientNet-B6 as the backbone, ClarifyNet has 18 M parameters (model size of ∼71 MB) and a throughput of 8 frames-per-second while processing images of size 2048 × 1024.}
+ }
+ ```
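
The abstract above mentions a weighted loss combining SSIM loss and L1 loss. As a rough illustration only, here is a minimal NumPy sketch of such a combination; the single-window SSIM, the weight `alpha=0.85`, and the function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    # Global (single-window) SSIM for images normalized to [0, 1].
    # The paper's SSIM would typically be computed over local windows.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def weighted_loss(pred, target, alpha=0.85):
    # alpha is a hypothetical weight; the abstract does not state the value used.
    ssim_loss = 1.0 - ssim(pred, target)   # SSIM=1 for identical images -> loss 0
    l1_loss = np.abs(pred - target).mean()
    return alpha * ssim_loss + (1.0 - alpha) * l1_loss
```

For identical inputs the loss is zero, and it grows as the prediction drifts from the ground-truth haze-free image.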