abhik1505040 committed · verified
Commit b0b8068 · 1 Parent(s): d830889

Update README.md

Files changed (1): README.md (+40 -21)
README.md CHANGED
@@ -3,21 +3,27 @@ dataset_info:
   features:
   - name: image
     dtype: image
-  - name: label
-    dtype:
-      class_label:
-        names:
-          '0': test
-          '1': train
+  - name: question
+    dtype: string
+  - name: options
+    sequence: string
+  - name: answer
+    dtype: string
+  - name: category
+    dtype: string
+  - name: id
+    dtype: int64
+  - name: source
+    dtype: string
   splits:
   - name: train
-    num_bytes: 48882
+    num_bytes: 145603
     num_examples: 4
   - name: test
-    num_bytes: 24413829
-    num_examples: 1000
-  download_size: 24468235
-  dataset_size: 24462711
+    num_bytes: 15782648
+    num_examples: 435
+  download_size: 14338600
+  dataset_size: 15928251
   configs:
   - config_name: default
     data_files:
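
With the updated schema, each record pairs the image with its multiple-choice fields. A minimal sketch of loading the dataset and inspecting one test example (printed values are illustrative, not actual dataset contents):

```python
from datasets import load_dataset

# Default config: "train" holds the 4 few-shot examples, "test" the 435 evaluation items.
dataset = load_dataset("csebuetnlp/illusionVQA-Comprehension")

example = dataset["test"][0]
print(example["question"])  # instruction text for the illusion
print(example["options"])   # list of candidate answer strings
print(example["answer"])    # the correct option string
print(example["category"])  # illusion category label
example["image"]            # PIL image, decoded by the `image` feature
```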
@@ -28,6 +34,7 @@ configs:
 task_categories:
 - image-to-text
 - visual-question-answering
+- question-answering
 language:
 - en
 size_categories:
@@ -35,8 +42,10 @@ size_categories:
 ---
 # IllusionVQA: Optical Illusion Dataset
 
-Paper Link: <br>
-Github Link: <be>
+[Project Page](https://illusionvqa.github.io/) |
+[Paper](https://arxiv.org/abs/2403.15952) |
+[Github](https://github.com/csebuetnlp/IllusionVQA/)
+
 ## TL;DR
 IllusionVQA is a dataset of optical illusions and hard-to-interpret scenes designed to test the capability of Vision Language Models in comprehension and soft localization tasks. GPT4V achieved 62.99% accuracy on comprehension and 49.7% on localization, while humans achieved 91.03% and 100% respectively.
 
@@ -69,17 +78,17 @@ def construct_mcq(options, correct_option):
 def add_row(content, data, i, with_answer=False):
     mcq, correct_option_letter = construct_mcq(data["options"], data["answer"])
     content.append({ "type": "text",
-                     "text": "Image "+str(i)+": "+data["question"]+"\n"+mcq })
+                     "text": "Image " + str(i) + ": " + data["question"] + "\n" + mcq })
     content.append({ "type": "image_url",
                      "image_url": {"url": f"data:image/jpeg;base64,{encode_image(data['image'])}",
                                    "detail": "low"}})
     if with_answer:
-        content.append({"type": "text", "text": "Answer {}: ".format(i)+correct_option_letter})
+        content.append({"type": "text", "text": "Answer {}: ".format(i) + correct_option_letter})
     else:
         content.append({"type": "text", "text": "Answer {}: ".format(i)})
     return content
 
-dataset = load_dataset("csebuetnlp/illusionVQA-Soft-Localization")
+dataset = load_dataset("csebuetnlp/illusionVQA-Comprehension")
 client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
 
 content = [{
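
The snippet calls `encode_image`, which is defined in lines of the README that the diff skips over. A plausible minimal sketch, assuming the helper base64-encodes the PIL image as a JPEG for the `data:` URL (the card's actual implementation may differ):

```python
import base64
import io

def encode_image(pil_image):
    # Serialize the PIL image to an in-memory JPEG and base64-encode it
    # so it can be inlined in a data: URL for the vision API.
    buffer = io.BytesIO()
    pil_image.convert("RGB").save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")
```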
@@ -87,13 +96,12 @@ content = [{
     "text": "You'll be given an image, an instruction and some choices. You have to select the correct one. Do not explain your reasoning. Answer with the option's letter from the given choices directly. Here are a few examples:",
 }]
 
-### Add the few examples
-i = 1
-for data in dataset["train"]:
+### Add a few examples
+for i, data in enumerate(dataset["train"], 1):
     content = add_row(content, data, i, with_answer=True)
-    i += 1
 
 content.append({"type": "text", "text": "Now you try it!"})
+
-next_idx = i
+next_idx = i + 1
 
 ### Add the test data
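
The diff elides the lines that append the test item and issue the request; given `add_row` and `next_idx` above, that section plausibly evaluates one test example at a time, roughly as follows (the model name and parameters are placeholders, not the card's actual values):

```python
# Evaluate a single test example; a full run would loop over dataset["test"],
# re-using a copy of the few-shot `content` prefix for each item.
content_with_test = add_row(list(content), dataset["test"][0], next_idx, with_answer=False)

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder model name
    messages=[{"role": "user", "content": content_with_test}],
    max_tokens=5,
)
```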
@@ -108,14 +116,25 @@ response = client.chat.completions.create(
 )
 gpt4_answer = response.choices[0].message.content
 print(gpt4_answer)
+
 ```
 
 ## License
-This dataset is made available for non-commercial research purposes only, including for evaluation of model performance. The dataset may not be used for training models. The dataset contains images collected from the internet. While permission has been obtained from some of the images' creators, permission has not yet been received from all creators. If you believe any image in this dataset is used without proper permission and you are the copyright holder, please email [email protected] to request the removal of the image from the dataset.
+This dataset is made available for non-commercial research purposes only under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). The dataset may not be used for training models. The dataset contains images collected from the internet. While permission has been obtained from some of the images' creators, permission has not yet been received from all creators. If you believe any image in this dataset is used without proper permission and you are the copyright holder, please email [email protected] to request the removal of the image from the dataset.
 
 The dataset creator makes no representations or warranties regarding the copyright status of the images in the dataset. The dataset creator shall not be held liable for any unauthorized use of copyrighted material that may be contained in the dataset.
 
 You agree to the terms and conditions specified in this license by downloading or using this dataset. If you do not agree with these terms, do not download or use the dataset.
 
+<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a>
+
 
 ### Citation
+```
+@article{shahgir2024illusionvqa,
+  title={IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models},
+  author={Haz Sameen Shahgir and Khondker Salman Sayeed and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yue Dong and Rifat Shahriyar},
+  year={2024},
+  url={https://arxiv.org/abs/2403.15952},
+}
+```