harpreetsahota committed on
Commit ef1f67d · verified · 1 Parent(s): 6b42f60

Upload README.md

Files changed (1)
  1. README.md +40 -134
README.md CHANGED
@@ -6,23 +6,21 @@ size_categories:
  task_categories:
  - image-classification
  task_ids: []
- pretty_name: ImageNet_D
  tags:
  - fiftyone
  - image
  - image-classification
- dataset_summary: '
-
-
-
-
- This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4835 samples.

  ## Installation

- If you haven''t already, install FiftyOne:

  ```bash
@@ -30,8 +28,6 @@ dataset_summary: '
  pip install -U fiftyone

  ```
-
-
  ## Usage

@@ -39,14 +35,14 @@ dataset_summary: '
  import fiftyone as fo

- from fiftyone.utils.huggingface import load_from_hub

  # Load the dataset

- # Note: other available arguments include ''max_samples'', etc

- dataset = load_from_hub("harpreetsahota/ImageNet-D")

  # Launch the App
@@ -54,19 +50,13 @@ dataset_summary: '
  session = fo.launch_app(dataset)

  ```
-
- '
  ---

- # Dataset Card for ImageNet_D
-
- <!-- Provide a quick summary of the dataset. -->
-
-
-
- This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4835 samples.

  ## Installation
@@ -80,145 +70,61 @@ pip install -U fiftyone

  ```python
  import fiftyone as fo
- from fiftyone.utils.huggingface import load_from_hub

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
- dataset = load_from_hub("harpreetsahota/ImageNet-D")

  # Launch the App
  session = fo.launch_app(dataset)
  ```

-
- ## Dataset Details
-
  ### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->
-
-
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** en
- - **License:** [More Information Needed]

- ### Dataset Sources [optional]

- <!-- Provide the basic links for the dataset. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses
-
- <!-- Address questions around how the dataset is intended to be used. -->
-
- ### Direct Use
-
- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
-
- ## Dataset Structure
-
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]

  ### Source Data

- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

  #### Data Collection and Processing

- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

- [More Information Needed]

- #### Who are the source data producers?

- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

- [More Information Needed]

- ### Annotations [optional]

- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
- #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
-
- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]
-
- ## Dataset Card Contact
-
- [More Information Needed]
  task_categories:
  - image-classification
  task_ids: []
+ pretty_name: ImageNet-D
  tags:
  - fiftyone
  - image
  - image-classification
+ - synthetic
+ dataset_summary: >

+ This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4838
+ samples.

  ## Installation

+ If you haven't already, install FiftyOne:

  ```bash
  pip install -U fiftyone
  ```

  ## Usage

  import fiftyone as fo
+ import fiftyone.utils.huggingface as fouh

  # Load the dataset
+ # Note: other available arguments include 'max_samples', etc
+ dataset = fouh.load_from_hub("harpreetsahota/ImageNet-D")

  # Launch the App
  session = fo.launch_app(dataset)
  ```

  ---

+ # Dataset Card for ImageNet-D

+ ![image/png](imagenet-d.gif)

+ This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4838 samples.

  ## Installation

  ```python
  import fiftyone as fo
+ import fiftyone.utils.huggingface as fouh

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
+ dataset = fouh.load_from_hub("Voxel51/ImageNet-D")

  # Launch the App
  session = fo.launch_app(dataset)
  ```
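The `max_samples` argument mentioned in the snippet is handy for a first look before downloading everything. A minimal sketch; the printed schema depends on how the dataset was exported, so inspect it rather than assuming field names:

```python
import fiftyone.utils.huggingface as fouh

# Pull a small slice of the dataset for quick inspection
preview = fouh.load_from_hub("Voxel51/ImageNet-D", max_samples=100)

# Print the dataset summary to see which label fields it exposes
print(preview)
```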
  ### Dataset Description

+ ImageNet-D is a new benchmark created using diffusion models to generate realistic synthetic images with diverse backgrounds, textures, and materials [1]. The dataset contains 4,835 hard images that cause accuracy drops of up to 60% for a range of vision models, including ResNet, ViT, CLIP, LLaVA, and MiniGPT-4 [1].

+ To create ImageNet-D, a large pool of synthetic images is generated by combining object categories with various nuisance attributes using Stable Diffusion [1]. The most challenging images, those that cause shared failures across multiple surrogate models, are selected for the final dataset [1]. Human labeling via Amazon Mechanical Turk is used for quality control to ensure the images are valid and high quality [1].

+ Experiments show that ImageNet-D reveals significant robustness gaps in current vision models [1]. The synthetic images transfer well to unseen models, uncovering common failure modes [1]. ImageNet-D provides a more diverse and challenging test set than prior synthetic benchmarks such as ImageNet-C, ImageNet-9, and Stylized-ImageNet [1].

+ The recipe notebook for creating this dataset can be found [here](https://colab.research.google.com/drive/1iiiXN8B36YhjtOH2PDbHevHTXH736It_?usp=sharing).

+ Citations:
+ [1] https://arxiv.org/html/2403.18775v1
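One quick way to see the kind of robustness gap described above is to run a zero-shot CLIP model from the FiftyOne model zoo over the dataset and score it against the stored labels. A minimal sketch, assuming the labels live in a `ground_truth` field; check `dataset.get_field_schema()` for the actual field name first:

```python
import fiftyone.zoo as foz
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/ImageNet-D")

# Assumed label field; verify with dataset.get_field_schema()
classes = dataset.distinct("ground_truth.label")

# Zero-shot CLIP classifier from the FiftyOne model zoo
model = foz.load_zoo_model(
    "clip-vit-base32-torch",
    text_prompt="A photo of a",
    classes=classes,
)
dataset.apply_model(model, label_field="clip_predictions")

# Compare predictions to the ground truth and print overall accuracy
results = dataset.evaluate_classifications(
    "clip_predictions", gt_field="ground_truth", eval_key="clip_eval"
)
print(results.metrics()["accuracy"])
```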
 
+ - **Funded by:** KAIST; University of Michigan, Ann Arbor; McGill University; MILA
+ - **License:** MIT License

  ### Source Data

+ See the [original repo](https://github.com/chenshuang-zhang/imagenet_d) for details.

  #### Data Collection and Processing

+ The ImageNet-D dataset was constructed using diffusion models to generate a large pool of realistic synthetic images covering various combinations of object categories and nuisance attributes. The key steps in the data collection and generation process were:

+ 1. **Image generation**: The Stable Diffusion model was used to generate high-fidelity images from user-defined text prompts specifying the desired object category (C) and nuisance attributes (N) such as background, material, and texture (see the sketch after this list). The image generation is formulated as:

+ `Image(C, N) = StableDiffusion(Prompt(C, N))`

+ For example, to generate an image of a backpack, the prompt might specify "a backpack in a wheat field" to control both the object category and the background nuisance.

+ 2. **Prompt design**: A set of prompts was carefully designed to cover a matrix of object categories and nuisance attributes (see [Table 1 in the paper](https://arxiv.org/html/2403.18775v1#S3) for an overview). This allows generating images with a much broader range of category-nuisance combinations than existing test sets.

+ 3. **Labeling**: Each generated image is automatically labeled with the object category (C) specified in its generation prompt. This category label serves as the ground truth for evaluating classification models on ImageNet-D. A classification is considered incorrect if the model's predicted class does not match the ground truth category.
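A minimal sketch of that generation recipe using the `diffusers` library; the specific Stable Diffusion checkpoint and the exact prompt template are assumptions, since the card does not pin either:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint; the paper only specifies "Stable Diffusion"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Image(C, N) = StableDiffusion(Prompt(C, N))
category = "backpack"          # object category C, doubles as the ground truth label
nuisance = "in a wheat field"  # background nuisance N

image = pipe(f"a {category} {nuisance}").images[0]
image.save(f"{category}_wheat_field.png")
```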
116
 
117
+ #### Who are the source data producers?
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
118
 
119
+ Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao
120
 
+ ## Citation
  **BibTeX:**
+ ```bibtex
+ @article{zhang2024imagenet_d,
+   author  = {Zhang, Chenshuang and Pan, Fei and Kim, Junmo and Kweon, In So and Mao, Chengzhi},
+   title   = {ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object},
+   journal = {CVPR},
+   year    = {2024},
+ }
+ ```