### A Unified Interface for IQA Datasets

This repository provides a unified interface for **downloading and loading** 20 popular Image Quality Assessment (IQA) datasets, with code for both plain **Python** and **PyTorch**.

#### Citation

This repository is part of our [Bayesian IQA project](http://ivc.uwaterloo.ca/research/bayesianIQA/), where we present an overview of IQA methods from a Bayesian perspective. More detailed summaries of both IQA models and datasets can be found on this [interactive webpage](http://ivc.uwaterloo.ca/research/bayesianIQA/).

If you find our project useful, please cite our paper:
```bibtex
@article{duanmu2021biqa,
  author  = {Duanmu, Zhengfang and Liu, Wentao and Wang, Zhongling and Wang, Zhou},
  title   = {Quantifying Visual Image Quality: A Bayesian View},
  journal = {Annual Review of Vision Science},
  volume  = {7},
  number  = {1},
  pages   = {437--464},
  year    = {2021}
}
```

#### Supported Datasets

| Dataset | Distorted Images | Reference Images | MOS | DMOS |
| :---: | :---: | :---: | :---: | :---: |
| [LIVE](https://live.ece.utexas.edu/research/quality/subjective.htm) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [A57](http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=26) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [LIVE_MD](https://live.ece.utexas.edu/research/Quality/live_multidistortedimage.html) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [MDID2013](https://ieeexplore.ieee.org/document/6879255) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [CSIQ](http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=23) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [KADID-10k](http://database.mmsp-kn.de/kadid-10k-database.html) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:<sub>[(Note)](https://github.com/icbcbicc/IQA-Dataset/issues/3#issuecomment-2192649304)</sub> | |
| [TID2008](http://www.ponomarenko.info/tid2008.htm) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [TID2013](http://www.ponomarenko.info/tid2013.htm) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [CIDIQ_MOS100](https://www.ntnu.edu/web/colourlab/software) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [CIDIQ_MOS50](https://www.ntnu.edu/web/colourlab/software) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [MDID2016](https://www.sciencedirect.com/science/article/abs/pii/S0031320316301911) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [SDIVL](http://www.ivl.disco.unimib.it/activities/imagequality/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [MDIVL](http://www.ivl.disco.unimib.it/activities/imagequality/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Toyama](http://mict.eng.u-toyama.ac.jp/mictdb.html) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [PDAP-HDDS](https://sites.google.com/site/eelab907/zi-liao-ku) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [VCLFER](https://www.vcl.fer.hr/quality/vclfer.html) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [LIVE_Challenge](https://live.ece.utexas.edu/research/ChallengeDB/index.html) | :heavy_check_mark: | | :heavy_check_mark: | |
| [CID2013](https://zenodo.org/record/2647033#.YDSi73X0kUc) | :heavy_check_mark: | | :heavy_check_mark: | |
| [KonIQ-10k](http://database.mmsp-kn.de/koniq-10k-database.html) | :heavy_check_mark: | | :heavy_check_mark: | |
| [SPAQ](https://github.com/h4nwei/SPAQ) | :heavy_check_mark: | | :heavy_check_mark: | |
| [Waterloo_Exploration](https://ece.uwaterloo.ca/~k29ma/exploration/) | :heavy_check_mark: | :heavy_check_mark: | | |
| [<del>KADIS-700k</del>](http://database.mmsp-kn.de/kadid-10k-database.html) | :heavy_check_mark: <sub>(code only)</sub> | :heavy_check_mark: | | |

#### Basic Usage

0. Prerequisites
```shell
pip install wget
```

1. General Python (see [`demo.py`](demo.py))

```python
from load_dataset import load_dataset
dataset = load_dataset("LIVE")
```

2. PyTorch (see [`demo_pytorch.py`](demo_pytorch.py))

```python
from load_dataset import load_dataset_pytorch
dataset = load_dataset_pytorch("LIVE")
```

#### Advanced Usage

1. General Python (see [`demo.py`](demo.py))

```python
from load_dataset import load_dataset
dataset = load_dataset("LIVE", dataset_root="data", attributes=["dis_img_path", "dis_type", "ref_img_path", "score"], download=True)
```

2. PyTorch (see [`demo_pytorch.py`](demo_pytorch.py))

```python
from torchvision import transforms
from load_dataset import load_dataset_pytorch

transform = transforms.Compose([transforms.RandomCrop(size=64), transforms.ToTensor()])
dataset = load_dataset_pytorch("LIVE", dataset_root="data", attributes=["dis_img_path", "dis_type", "ref_img_path", "score"], download=True, transform=transform)
```
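
Once loaded, the PyTorch dataset can be fed to a standard `torch.utils.data.DataLoader` for batched training. Since a snippet here cannot download a real dataset, `ToyIQADataset` below is a hypothetical stand-in that mimics the `(image, score)` sample shape; in practice, pass the object returned by `load_dataset_pytorch` to the `DataLoader` instead.

```python
# Minimal sketch of batching an IQA dataset with a DataLoader.
# ToyIQADataset is a hypothetical stand-in for the dataset returned by
# load_dataset_pytorch; it yields (image_tensor, score) pairs.
import torch
from torch.utils.data import DataLoader, Dataset


class ToyIQADataset(Dataset):
    """Stand-in dataset: random 3x64x64 "images" with synthetic MOS scores."""

    def __init__(self, n: int = 8):
        self.images = torch.rand(n, 3, 64, 64)
        self.scores = torch.rand(n) * 100  # MOS on a hypothetical 0-100 scale

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.scores[idx]


loader = DataLoader(ToyIQADataset(), batch_size=4, shuffle=False)
for images, scores in loader:
    print(images.shape, scores.shape)  # torch.Size([4, 3, 64, 64]) torch.Size([4])
```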

#### TODO

- [ ] Add more datasets: [PaQ-2-PiQ](https://github.com/baidut/PaQ-2-PiQ), [AVA](https://github.com/mtobeiyf/ava_downloader), [PIPAL](https://www.jasongt.com/projectpages/pipal.html), [AADB](https://github.com/aimerykong/deepImageAestheticsAnalysis), [FLIVE](https://github.com/niu-haoran/FLIVE_Database/blob/master/database_prep.ipynb), [BIQ2021](https://github.com/nisarahmedrana/BIQ2021), [IVC](http://ivc.univ-nantes.fr/en/databases/Subjective_Database/)
- [ ] PyPI package
- [ ] HuggingFace dataset
- [ ] Provide more attributes
- [ ] ~~Add TensorFlow support~~
- [ ] ~~Add MATLAB support~~

#### Star History

[](https://star-history.com/#icbcbicc/IQA-Dataset&Date)