---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- image-to-text
pretty_name: RICO Screen Annotations
tags:
- screens
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: screen_id
dtype: string
- name: screen_annotation
dtype: string
- name: file_name
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 1684182938.288
num_examples: 15548
- name: valid
num_bytes: 240141824.938
num_examples: 2311
- name: test
num_bytes: 452100376.53
num_examples: 4217
download_size: 1880458708
dataset_size: 2376425139.756
---
# Dataset Card for RICO Screen Annotations
This is a standardization of Google's Screen Annotation dataset on a subset of RICO screens, as described in their ScreenAI paper.
Unlike the original, this version transforms the integer-based bounding boxes into floating-point bounding boxes with two decimal places of precision.
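As a rough illustration of that transformation, the sketch below rescales integer pixel coordinates into floats rounded to two decimals. The [0, 1] normalization and the 1440×2560 RICO screen size are assumptions for illustration, not details confirmed by this card.

```python
def to_float_bbox(bbox, width=1440, height=2560):
    """Convert an integer pixel bbox (x_min, y_min, x_max, y_max) into
    floating-point coordinates rounded to two decimal places.

    NOTE: the [0, 1] normalization and the 1440x2560 RICO screen size
    are assumptions for illustration, not confirmed by this card.
    """
    x_min, y_min, x_max, y_max = bbox
    return (
        round(x_min / width, 2),
        round(y_min / height, 2),
        round(x_max / width, 2),
        round(y_max / height, 2),
    )

print(to_float_bbox((72, 128, 1368, 256)))  # -> (0.05, 0.05, 0.95, 0.1)
```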
## Dataset Details
### Dataset Description
This is an image-to-text annotation format first described in Google's ScreenAI paper.
The idea is to standardize an expected text output that is reasonable for a model to follow,
fusing together tasks such as element detection, referring expression generation/recognition, and element classification.
- **Curated by:** Google Research
- **Language(s) (NLP):** English
- **License:** CC-BY-4.0
### Dataset Sources
- **Repository:** [google-research/screen_annotation](https://github.com/google-research-datasets/screen_annotation/tree/main)
- **Paper:** [ScreenAI](https://arxiv.org/abs/2402.04615)
## Uses
### Direct Use
Pre-training of multimodal models to better understand screens.
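For example, the splits can be loaded with the 🤗 `datasets` library. The repository id below is a placeholder; substitute this dataset's actual path on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual Hub path.
ds = load_dataset("rootsautomation/RICO-ScreenAnnotation")

train, valid, test = ds["train"], ds["valid"], ds["test"]
print(len(train), len(valid), len(test))  # expected: 15548 2311 4217
```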
## Dataset Structure
- `screen_id`: Screen ID in the RICO dataset
- `screen_annotation`: Target output string
- `file_name`: File name of the screenshot
- `image`: The RICO screenshot
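Continuing the loading sketch above (which defines `train`), a single example can be inspected like this; the slice is just to keep the preview short:

```python
example = train[0]
print(example["screen_id"], example["file_name"])
print(example["screen_annotation"][:200])  # preview of the target string
example["image"].show()  # decoded as a PIL.Image; opens the screenshot
```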
## Dataset Creation
### Curation Rationale
> The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The mobile screenshots are directly taken from the publicly available Rico dataset. The annotations are in text format, and contain information on the UI elements present on the screen: their type, their location, the text they contain or a short description. This dataset has been introduced in the paper ScreenAI: A Vision-Language Model for UI and Infographics Understanding and can be used to improve the screen understanding capabilities of multimodal (image+text) models.
## Citation
**BibTeX:**
```
@misc{baechler2024screenai,
      title={ScreenAI: A Vision-Language Model for UI and Infographics Understanding},
      author={Gilles Baechler and Srinivas Sunkara and Maria Wang and Fedir Zubach and Hassan Mansoor and Vincent Etter and Victor Cărbune and Jason Lin and Jindong Chen and Abhanshu Sharma},
      year={2024},
      eprint={2402.04615},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Dataset Card Authors
Hunter Heidenreich, Roots Automation
## Dataset Card Contact
hunter "dot" heidenreich AT rootsautomation `DOT` com