harpreetsahota committed · verified · Commit 08e7925 · 1 Parent(s): e43f4f7

Update README.md

Files changed (1): README.md (+92 −112)
README.md CHANGED
@@ -10,6 +10,9 @@ task_ids: []
 pretty_name: mind2web_multimodal
 tags:
 - fiftyone
 - image
 - image-classification
 - object-detection
@@ -48,7 +51,7 @@ dataset_summary: '
 # Note: other available arguments include ''max_samples'', etc
- dataset = load_from_hub("harpreetsahota/mind2web_multimodal_test_website")

 # Launch the App
@@ -60,12 +63,11 @@ dataset_summary: '
 '
 ---
- # Dataset Card for mind2web_multimodal
-
- <!-- Provide a quick summary of the dataset. -->
-

 This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 1019 samples.
@@ -86,141 +88,119 @@ from fiftyone.utils.huggingface import load_from_hub

 # Load the dataset
 # Note: other available arguments include 'max_samples', etc
- dataset = load_from_hub("harpreetsahota/mind2web_multimodal_test_website")

 # Launch the App
 session = fo.launch_app(dataset)
 ```
- ## Dataset Details
-
- ### Dataset Description
-
- <!-- Provide a longer summary of what this dataset is. -->
-
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** en
- - **License:** [More Information Needed]
-
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
 ## Uses
-
- <!-- Address questions around how the dataset is intended to be used. -->
-
 ### Direct Use
-
- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]

 ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]

 ## Dataset Structure
-
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]
 ## Dataset Creation
-
 ### Curation Rationale
-
- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]

 ### Source Data
-
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
 #### Data Collection and Processing
-
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]

 #### Who are the source data producers?
-
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]
-
- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
 #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]

 #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
-
- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
 ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]

 ## Dataset Card Contact
-
- [More Information Needed]
 
 pretty_name: mind2web_multimodal
 tags:
 - fiftyone
+ - visual-agents
+ - os-agents
+ - gui-grounding
 - image
 - image-classification
 - object-detection
 
 # Note: other available arguments include ''max_samples'', etc
+ dataset = load_from_hub("Voxel51/mind2web_multimodal_test_website")

 # Launch the App
 
 '
 ---

+ # Dataset Card for Multimodal Mind2Web "Cross-Website" Test Split

+ **Note**: This dataset is the test split of the Cross-Website dataset introduced in the paper.

+ ![image/gif](m2w_tw.gif)

 This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 1019 samples.
 
 # Load the dataset
 # Note: other available arguments include 'max_samples', etc
+ dataset = load_from_hub("Voxel51/mind2web_multimodal_test_website")

 # Launch the App
 session = fo.launch_app(dataset)
 ```
+ # Dataset Details for "Cross-Website" Split in Multimodal Mind2Web

+ ## Dataset Description
+ **Curated by:** The Ohio State University NLP Group (OSU-NLP-Group)
+ **Shared by:** OSU-NLP-Group on Hugging Face
+ **Language(s) (NLP):** en
+ **License:** OPEN-RAIL License (noted in the paper's Impact Statements section)

+ ## Dataset Sources
+ **Repository:** https://github.com/OSU-NLP-Group/SeeAct and https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web
+ **Paper:** "GPT-4V(ision) is a Generalist Web Agent, if Grounded" by Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su
+ **Demo:** https://osu-nlp-group.github.io/SeeAct
 ## Uses

 ### Direct Use
+ - Evaluating web agents' ability to generalize to new websites within familiar domains
+ - Testing website-level transfer capabilities of models
+ - Benchmarking adaptability to new website interfaces with similar functionality
+ - Assessing how models handle design variations within the same domain category

 ### Out-of-Scope Use
+ - Developing web agents for harmful purposes (as stated in the paper's impact statement)
+ - Automating actions that could violate website terms of service
+ - Creating agents that access users' personal profiles or perform sensitive operations without consent
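As a sketch of the kind of grounding check the evaluation uses above rely on (helper names are hypothetical, and this is not the paper's official element-accuracy implementation), one can test whether a predicted click point lands inside the ground-truth element's relative bounding box:

```python
def point_in_bbox(point, bbox):
    """Check whether a predicted (x, y) click point, in relative [0, 1]
    coordinates, falls inside a relative [x, y, w, h] bounding box."""
    px, py = point
    x, y, w, h = bbox
    return x <= px <= x + w and y <= py <= y + h

def grounding_accuracy(predictions, ground_truths):
    """Fraction of predicted click points that hit their target element."""
    hits = sum(point_in_bbox(p, b) for p, b in zip(predictions, ground_truths))
    return hits / len(ground_truths)

preds = [(0.25, 0.30), (0.90, 0.90)]
boxes = [[0.20, 0.25, 0.10, 0.10], [0.10, 0.10, 0.20, 0.20]]
print(grounding_accuracy(preds, boxes))  # 0.5
```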
 
 ## Dataset Structure
+ - Contains 142 tasks across 9 domains and 10 websites
+ - Tasks average 7.2 actions each
+ - Average of 4,653 visual tokens per task (highest among the three splits)
+ - Average of 612 HTML elements per task (most complex pages among the splits)
+ - Average of 114,358 HTML tokens per task
+ - Each example includes task descriptions, HTML structure, operations (CLICK, TYPE, SELECT), target elements with attributes, and action histories
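Concretely, each action step pairs an operation with a target element. A hypothetical helper (names and exact formatting are illustrative, loosely modeled on the card's `target_action_reprs` field) could render one step as a readable string:

```python
def format_action(tag, text, operation, value=None):
    """Render one action step as '[tag] text -> OPERATION: value'.

    Illustrative only: the dataset's own `target_action_reprs` strings
    may differ in exact formatting.
    """
    s = f"[{tag}] {text} -> {operation}"
    if value is not None:
        s += f": {value}"
    return s

print(format_action("textbox", "Search", "TYPE", "laptop"))
# [textbox] Search -> TYPE: laptop
print(format_action("button", "Submit", "CLICK"))
# [button] Submit -> CLICK
```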
+
+ ### FiftyOne Dataset Structure
+
+ **Basic Info:** 1,019 web UI screenshots with task-based annotations
+
+ **Core Fields:**
+ - `action_uid`: StringField - Unique action identifier
+ - `annotation_id`: StringField - Annotation identifier
+ - `target_action_index`: IntField - Index of the target action in the sequence
+ - `ground_truth`: EmbeddedDocumentField(Detection) - Element to interact with:
+   - `label`: Action type (TYPE, CLICK)
+   - `bounding_box`: a list of relative bounding box coordinates in [0, 1] in the format `[<top-left-x>, <top-left-y>, <width>, <height>]`
+ - `target_action_reprs`: String representation of the target action
+ - `website`: EmbeddedDocumentField(Classification) - Website name
+ - `domain`: EmbeddedDocumentField(Classification) - Website domain category
+ - `subdomain`: EmbeddedDocumentField(Classification) - Website subdomain category
+ - `task_description`: StringField - Natural language description of the task
+ - `full_sequence`: ListField(StringField) - Complete sequence of actions for the task
+ - `previous_actions`: ListField - Actions already performed in the sequence
+ - `current_action`: StringField - Action to be performed
+ - `alternative_candidates`: EmbeddedDocumentField(Detections) - Other possible elements
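Since `bounding_box` is stored in relative coordinates, drawing or cropping the target element requires the screenshot's pixel size. A minimal sketch (the helper name is illustrative, not part of the dataset tooling):

```python
def rel_bbox_to_pixels(bbox, img_width, img_height):
    """Convert a relative [x, y, w, h] box (values in [0, 1]) to pixel coords."""
    x, y, w, h = bbox
    return [
        round(x * img_width),
        round(y * img_height),
        round(w * img_width),
        round(h * img_height),
    ]

# Example: a box covering the top-left quarter of a 1920x1080 screenshot
print(rel_bbox_to_pixels([0.0, 0.0, 0.5, 0.5], 1920, 1080))  # [0, 0, 960, 540]
```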

 ## Dataset Creation
 ### Curation Rationale
+ The Cross-Website split was specifically designed to evaluate an agent's ability to generalize to new websites within domains it has encountered during training, representing a medium-difficulty generalization scenario.

 ### Source Data
 #### Data Collection and Processing
+ - Based on the original MIND2WEB dataset
+ - Each HTML document is aligned with its corresponding webpage screenshot image
+ - Underwent human verification to confirm element visibility and correct rendering for action prediction
+ - Specifically includes 10 new websites from the top-level domains represented in the training data

 #### Who are the source data producers?
+ Web screenshots and HTML were collected from 10 websites across 9 domains that were represented in the training data, but the specific websites were held out.
+ ### Annotations

 #### Annotation process
+ Each task includes annotated action sequences showing the correct steps to complete the task. These were likely captured through a tool that records user actions on websites.

 #### Who are the annotators?
+ Researchers from The Ohio State University NLP Group or hired annotators, though specific details aren't provided in the paper.

+ ### Personal and Sensitive Information
+ The dataset focuses on non-login tasks to comply with user agreements and avoid privacy issues.
 ## Bias, Risks, and Limitations
+ - This split presents a medium-difficulty generalization scenario, testing adaptation to new interfaces within familiar domains
+ - In-context learning methods show advantages over supervised fine-tuning on this split
+ - The pages in this split are the most complex in terms of HTML elements and have the highest average visual tokens
+ - Website layouts and functionality may change over time, affecting the validity of the dataset
+ - Limited to only 10 websites across 9 domains, which may not capture the full diversity of websites within those domains
+
+ ## Citation
+ ### BibTeX:
+
+ ```bibtex
+ @inproceedings{zheng2024seeact,
+   title={GPT-4V(ision) is a Generalist Web Agent, if Grounded},
+   author={Boyuan Zheng and Boyu Gou and Jihyung Kil and Huan Sun and Yu Su},
+   booktitle={Forty-first International Conference on Machine Learning},
+   year={2024},
+   url={https://openreview.net/forum?id=piecKJ2DlB},
+ }
+
+ @inproceedings{deng2023mindweb,
+   title={Mind2Web: Towards a Generalist Agent for the Web},
+   author={Xiang Deng and Yu Gu and Boyuan Zheng and Shijie Chen and Samuel Stevens and Boshi Wang and Huan Sun and Yu Su},
+   booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
+   year={2023},
+   url={https://openreview.net/forum?id=kiYqbO3wqw}
+ }
+ ```
+ ### APA:
+ Zheng, B., Gou, B., Kil, J., Sun, H., & Su, Y. (2024). GPT-4V(ision) is a Generalist Web Agent, if Grounded. arXiv preprint arXiv:2401.01614.

 ## Dataset Card Contact
+ GitHub: https://github.com/OSU-NLP-Group/SeeAct