---
annotations_creators: []
language: en
size_categories:
- 1K<n<10K
task_categories:
- image-classification
task_ids: []
pretty_name: ImageNet-D
tags:
- fiftyone
- image
- image-classification
- synthetic
dataset_summary: >

  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4838
  samples.

  ## Installation


  If you haven't already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```

  ## Usage


  ```python

  import fiftyone as fo

  import fiftyone.utils.huggingface as fouh


  # Load the dataset

  # Note: other available arguments include 'max_samples', etc.

  dataset = fouh.load_from_hub("harpreetsahota/ImageNet-D")


  # Launch the App

  session = fo.launch_app(dataset)

  ```
---

# Dataset Card for ImageNet-D

![image/png](imagenet-d.gif)

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4838 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the dataset
# Note: other available arguments include 'max_samples', etc.
dataset = fouh.load_from_hub("Voxel51/ImageNet-D")

# Launch the App
session = fo.launch_app(dataset)
```
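
The `max_samples` argument mentioned above caps how much of the dataset is downloaded, which is useful for a quick first look. A minimal sketch; the `name` and `persistent` keyword arguments are standard FiftyOne loading options rather than anything specific to this dataset, and the cap of 100 is arbitrary:

```python
import fiftyone.utils.huggingface as fouh

# Download only a small preview of the dataset
preview = fouh.load_from_hub(
    "Voxel51/ImageNet-D",
    max_samples=100,            # cap the number of samples downloaded
    name="imagenet-d-preview",  # name for the local FiftyOne dataset
    persistent=True,            # keep the dataset between sessions
)

print(preview)  # prints the dataset's schema and sample count
```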

### Dataset Description

ImageNet-D is a benchmark created using diffusion models to generate realistic synthetic images with diverse backgrounds, textures, and materials[1]. The dataset contains 4,835 hard images that cause accuracy drops of up to 60% for a range of vision models, including ResNet, ViT, CLIP, LLaVa, and MiniGPT-4[1].
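
As a rough illustration of this kind of benchmarking, the sketch below runs a pretrained classifier over the samples using FiftyOne's model zoo. The zoo model name is real, but the ground-truth field name (`"ground_truth"`) is an assumption about this dataset's schema, and the model's ImageNet-1k class strings may need to be mapped onto ImageNet-D's category names before the comparison is meaningful:

```python
import fiftyone.utils.huggingface as fouh
import fiftyone.zoo as foz

dataset = fouh.load_from_hub("Voxel51/ImageNet-D", max_samples=200)

# Run a pretrained ImageNet classifier over the samples
model = foz.load_zoo_model("resnet50-imagenet-torch")
dataset.apply_model(model, label_field="resnet50")

# Compare predictions against the dataset's labels; the "ground_truth"
# field name is an assumption -- print(dataset) to confirm the schema
results = dataset.evaluate_classifications(
    "resnet50",
    gt_field="ground_truth",
    eval_key="eval_resnet50",
)
results.print_report()
```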

To create ImageNet-D, a large pool of synthetic images is generated by combining object categories with various nuisance attributes using Stable Diffusion[1]. The most challenging images, those that cause shared failures across multiple surrogate models, are selected for the final dataset[1]. Human labeling via Amazon Mechanical Turk is used for quality control to ensure the images are valid and high quality[1].
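
The "shared failures" selection described above can be pictured as keeping only the candidates that every surrogate model gets wrong. A hedged sketch; the `Candidate` structure and the surrogate interface are hypothetical stand-ins, not the authors' code:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    image_path: str  # path to a generated image
    category: str    # ground-truth category C from the generation prompt

def shared_failures(
    candidates: List[Candidate],
    surrogates: List[Callable[[str], str]],  # each maps an image path to a predicted class
) -> List[Candidate]:
    """Keep only the images that every surrogate model misclassifies."""
    return [
        c
        for c in candidates
        if all(predict(c.image_path) != c.category for predict in surrogates)
    ]
```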

Experiments show that ImageNet-D reveals significant robustness gaps in current vision models[1]. The synthetic images transfer well to unseen models, uncovering common failure modes[1]. ImageNet-D provides a more diverse and challenging test set than prior synthetic benchmarks such as ImageNet-C, ImageNet-9, and Stylized ImageNet[1].

The recipe notebook for creating this dataset can be found [here](https://colab.research.google.com/drive/1iiiXN8B36YhjtOH2PDbHevHTXH736It_?usp=sharing).

Citations:
[1] https://arxiv.org/html/2403.18775v1

- **Funded by:** KAIST; University of Michigan, Ann Arbor; McGill University; MILA
- **License:** MIT License

### Source Data

See the [original repo](https://github.com/chenshuang-zhang/imagenet_d) for details.

#### Data Collection and Processing

The ImageNet-D dataset was constructed using diffusion models to generate a large pool of realistic synthetic images covering various combinations of object categories and nuisance attributes. The key steps in the data collection and generation process were:

1. **Image generation**: The Stable Diffusion model was used to generate high-fidelity images based on user-defined text prompts specifying the desired object category (C) and nuisance attributes (N) such as background, material, and texture. The image generation is formulated as (see the sketch after this list):

   `Image(C, N) = StableDiffusion(Prompt(C, N))`

   For example, to generate an image of a backpack, the prompt might specify "a backpack in a wheat field" to control both the object category and the background nuisance.

2. **Prompt design**: A set of prompts was carefully designed to cover a matrix of object categories and nuisance attributes (see [Table 1 in the paper](https://arxiv.org/html/2403.18775v1#S3) for an overview). This allows generating images with a much broader range of category-nuisance combinations than existing test sets provide.

3. **Labeling**: Each generated image is automatically labeled with the object category (C) specified in its generation prompt. This category label serves as the ground truth for evaluating classification models on ImageNet-D. A classification is considered incorrect if the model's predicted class does not match the ground truth category.
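
A minimal sketch of the generation step in item 1, using the Hugging Face `diffusers` library. The checkpoint, prompt template, and category/nuisance lists below are illustrative assumptions, not the authors' actual pipeline; see the paper and original repo for the real settings.

```python
import torch
from diffusers import StableDiffusionPipeline

# Checkpoint chosen for illustration; the paper's exact model may differ
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def build_prompt(category: str, nuisance: str) -> str:
    """Prompt(C, N): combine an object category with a nuisance attribute."""
    return f"a {category} in a {nuisance}"

# A tiny category x nuisance matrix in the spirit of item 2 (illustrative)
categories = ["backpack", "umbrella"]
nuisances = ["wheat field", "snowy forest"]

for category in categories:
    for nuisance in nuisances:
        # Image(C, N) = StableDiffusion(Prompt(C, N))
        image = pipe(build_prompt(category, nuisance)).images[0]
        # Per item 3, the ground-truth label is the category C from the prompt
        image.save(f"{category}_{nuisance.replace(' ', '_')}.png")
```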

#### Who are the source data producers?

Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao

## Citation

**BibTeX:**

```bibtex
@article{zhang2024imagenet_d,
  author  = {Zhang, Chenshuang and Pan, Fei and Kim, Junmo and Kweon, In So and Mao, Chengzhi},
  title   = {ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object},
  journal = {CVPR},
  year    = {2024},
}
```