harpreetsahota committed
Commit 6edd225 · verified · 1 Parent(s): c7d3ea4

Update README.md

Files changed (1)
  1. README.md +92 -112
README.md CHANGED
@@ -10,6 +10,9 @@ task_ids: []
  pretty_name: mind2web_multimodal_test_domain
  tags:
  - fiftyone
  - image
  - image-classification
  - object-detection
@@ -48,7 +51,7 @@ dataset_summary: '

  # Note: other available arguments include ''max_samples'', etc

- dataset = load_from_hub("harpreetsahota/mind2web_multimodal_test_domain")


  # Launch the App
@@ -60,12 +63,11 @@ dataset_summary: '
  '
  ---

- # Dataset Card for mind2web_multimodal_test_domain
-
- <!-- Provide a quick summary of the dataset. -->
-


  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4050 samples.
@@ -86,141 +88,119 @@ from fiftyone.utils.huggingface import load_from_hub

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
- dataset = load_from_hub("harpreetsahota/mind2web_multimodal_test_domain")

  # Launch the App
  session = fo.launch_app(dataset)
  ```

- ## Dataset Details
-
- ### Dataset Description
-
- <!-- Provide a longer summary of what this dataset is. -->
-
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** en
- - **License:** [More Information Needed]
-
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses
-
- <!-- Address questions around how the dataset is intended to be used. -->
-
  ### Direct Use
-
- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]

  ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]

  ## Dataset Structure

- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]

  ## Dataset Creation
-
  ### Curation Rationale
-
- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]

  ### Source Data
-
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
  #### Data Collection and Processing
-
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]

  #### Who are the source data producers?

- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]
-
- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
  #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]

  #### Who are the annotators?

- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
-
- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]

  ## Dataset Card Contact
-
- [More Information Needed]
  pretty_name: mind2web_multimodal_test_domain
  tags:
  - fiftyone
+ - visual-agents
+ - os-agents
+ - gui-grounding
  - image
  - image-classification
  - object-detection

  # Note: other available arguments include ''max_samples'', etc

+ dataset = load_from_hub("Voxel51/mind2web_multimodal_test_domain")


  # Launch the App
  '
  ---

+ # Dataset Card for the "Cross-Domain" Test Split in Multimodal Mind2Web

+ **Note**: This is the Cross-Domain test split of Multimodal Mind2Web introduced in the paper.

+ ![image/png](m2w_td.gif)


  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4050 samples.

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
+ dataset = load_from_hub("Voxel51/mind2web_multimodal_test_domain")

  # Launch the App
  session = fo.launch_app(dataset)
  ```

+ ## Dataset Description
+
+ - **Curated by:** The Ohio State University NLP Group (OSU-NLP-Group)
+ - **Shared by:** OSU-NLP-Group on Hugging Face
+ - **Language(s) (NLP):** en
+ - **License:** OPEN-RAIL
+
+ ## Dataset Sources
+
+ - **Repository:** https://github.com/OSU-NLP-Group/SeeAct and https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web
+ - **Paper:** "GPT-4V(ision) is a Generalist Web Agent, if Grounded" by Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su
+ - **Demo:** https://osu-nlp-group.github.io/SeeAct

  ## Uses

  ### Direct Use
+ - Evaluating web agents' ability to generalize to entirely new domains
+ - Testing zero-shot domain transfer capabilities of models
+ - Benchmarking the true generalist capabilities of web agents
+ - Assessing model performance in unseen web environments (see the sketch below)
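+
+ For example, a per-domain breakdown is one way to set up the domain-transfer checks above. The following is only a minimal sketch: it assumes the dataset is loaded with `load_from_hub` as shown earlier and uses the `domain` field described under Dataset Structure below; the domain name in the filter is illustrative, not guaranteed to appear in the data.
+
+ ```python
+ import fiftyone as fo
+ from fiftyone import ViewField as F
+ from fiftyone.utils.huggingface import load_from_hub
+
+ # Load the dataset (same call as in the snippet above)
+ dataset = load_from_hub("Voxel51/mind2web_multimodal_test_domain")
+
+ # See how the held-out test samples are distributed across domains
+ print(dataset.count_values("domain.label"))
+
+ # Restrict the view to a single domain for a per-domain evaluation
+ # (the label value here is illustrative; use one of the values printed above)
+ domain_view = dataset.match(F("domain.label") == "Housing")
+ session = fo.launch_app(domain_view)
+ ```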

  ### Out-of-Scope Use
+ - Developing web agents for harmful purposes (as stated in the paper's impact statement)
+ - Automating actions that could violate website terms of service
+ - Creating agents that access users' personal profiles or perform sensitive operations without consent

  ## Dataset Structure
+ - Contains 694 tasks across 13 domains and 53 websites
+ - Tasks average 5.9 actions each
+ - Average 4,314 visual tokens per task
+ - Average 494 HTML elements per task
+ - Average 91,163 HTML tokens per task
+ - Each example includes task descriptions, HTML structure, operations (CLICK, TYPE, SELECT), target elements with attributes, and action histories
+
+ ### FiftyOne Dataset Structure
+
+ **Basic Info:** 1,338 web UI screenshots with task-based annotations
+
+ **Core Fields:**
+ - `action_uid`: StringField - Unique action identifier
+ - `annotation_id`: StringField - Annotation identifier
+ - `target_action_index`: IntField - Index of target action in sequence
+ - `ground_truth`: EmbeddedDocumentField(Detection) - Element to interact with:
+   - `label`: Action type (TYPE, CLICK)
+   - `bounding_box`: A list of relative bounding box coordinates in [0, 1] in the format `[<top-left-x>, <top-left-y>, <width>, <height>]`
+ - `target_action_reprs`: String representation of target action
+ - `website`: EmbeddedDocumentField(Classification) - Website name
+ - `domain`: EmbeddedDocumentField(Classification) - Website domain category
+ - `subdomain`: EmbeddedDocumentField(Classification) - Website subdomain category
+ - `task_description`: StringField - Natural language description of the task
+ - `full_sequence`: ListField(StringField) - Complete sequence of actions for the task
+ - `previous_actions`: ListField - Actions already performed in the sequence
+ - `current_action`: StringField - Action to be performed
+ - `alternative_candidates`: EmbeddedDocumentField(Detections) - Other possible elements
 
 
 
 
149
 
150
  ## Dataset Creation
 
151
  ### Curation Rationale
152
+ The Cross-Domain split was specifically designed to evaluate an agent's ability to generalize to entirely new domains it hasn't encountered during training, representing the most challenging generalization scenario.
 
 
 
153
 
154
  ### Source Data
 
 
 
155
  #### Data Collection and Processing
156
+ - Based on the original MIND2WEB dataset
157
+ - Each HTML document is aligned with its corresponding webpage screenshot image
158
+ - Underwent human verification to confirm element visibility and correct rendering for action prediction
159
+ - Specifically includes websites from top-level domains held out from the training data
160
 
161
  #### Who are the source data producers?
162
+ Web screenshots and HTML were collected from 53 websites across 13 domains that were not represented in the training data.
163
 
164
+ ### Annotations
 
 
 
 
 
 
 
165
  #### Annotation process
166
+ Each task includes annotated action sequences showing the correct steps to complete the task. These were likely captured through a tool that records user actions on websites.
 
 
 
167
 
168
  #### Who are the annotators?
169
+ Researchers from The Ohio State University NLP Group or hired annotators, though specific details aren't provided in the paper.
170
 
171
+ ### Personal and Sensitive Information
172
+ The dataset focuses on non-login tasks to comply with user agreements and avoid privacy issues.
 
 
 
 
 
 
 
173
 
174
  ## Bias, Risks, and Limitations
+ - This split presents the most challenging generalization scenario as it tests performance on entirely unfamiliar domains
+ - In-context learning methods with large models show better performance than supervised fine-tuning on this split
+ - The gap between SEEACT with oracle grounding and other methods is largest on this split (a 23.2% difference in step success rate)
+ - Website layouts and functionality may change over time, affecting the validity of the dataset
+ - Limited to the specific domains included; may not fully represent all possible web domains
+
+ ## Citation
+
+ ### BibTeX:
+
+ ```bibtex
+ @inproceedings{zheng2024seeact,
+   title={GPT-4V(ision) is a Generalist Web Agent, if Grounded},
+   author={Boyuan Zheng and Boyu Gou and Jihyung Kil and Huan Sun and Yu Su},
+   booktitle={Forty-first International Conference on Machine Learning},
+   year={2024},
+   url={https://openreview.net/forum?id=piecKJ2DlB},
+ }
+
+ @inproceedings{deng2023mindweb,
+   title={Mind2Web: Towards a Generalist Agent for the Web},
+   author={Xiang Deng and Yu Gu and Boyuan Zheng and Shijie Chen and Samuel Stevens and Boshi Wang and Huan Sun and Yu Su},
+   booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
+   year={2023},
+   url={https://openreview.net/forum?id=kiYqbO3wqw}
+ }
+ ```
+
+ ### APA:
+ Zheng, B., Gou, B., Kil, J., Sun, H., & Su, Y. (2024). GPT-4V(ision) is a Generalist Web Agent, if Grounded. arXiv preprint arXiv:2401.01614.

  ## Dataset Card Contact
+ GitHub: https://github.com/OSU-NLP-Group/SeeAct