arXiv:2108.03353

Screen2Words: Automatic Mobile UI Summarization with Multimodal Learning

Published on Aug 7, 2021
Authors:

Abstract

Mobile User Interface Summarization generates succinct language descriptions of mobile screens for conveying important contents and functionalities of the screen, which can be useful for many language-based application scenarios. We present Screen2Words, a novel screen summarization approach that automatically encapsulates essential information of a UI screen into a coherent language phrase. Summarizing mobile screens requires a holistic understanding of the multi-modal data of mobile UIs, including text, image, structure, as well as UI semantics, motivating our multi-modal learning approach. We collected and analyzed a large-scale screen summarization dataset annotated by human workers. Our dataset contains more than 112k language summaries across ~22k unique UI screens. We then experimented with a set of deep models with different configurations. Our evaluation of these models with both automatic accuracy metrics and human rating shows that our approach can generate high-quality summaries for mobile screens. We demonstrate potential use cases of Screen2Words and open-source our dataset and model to lay the foundations for further bridging language and user interfaces.
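
To make the multi-modal idea concrete, below is a minimal sketch of a screen-to-summary model in the spirit of the abstract: each UI element is represented by text, image, and structural features, fused and encoded with a Transformer, then decoded into a short summary phrase. This is not the authors' released model; the PyTorch framing, feature dimensions, shared token vocabulary, and fusion-by-addition scheme are all illustrative assumptions.

```python
# Hypothetical multimodal screen summarizer sketch (not the Screen2Words release).
import torch
import torch.nn as nn


class ScreenSummarizer(nn.Module):
    def __init__(self, vocab_size=10_000, d_model=256, n_types=32, max_elements=64):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)   # element text / summary tokens (shared vocab assumed)
        self.image_proj = nn.Linear(2048, d_model)              # pooled visual features per element (e.g. from a CNN)
        self.type_embed = nn.Embedding(n_types, d_model)        # UI element type: button, image, text field, ...
        self.pos_embed = nn.Embedding(max_elements, d_model)    # element position in the flattened view hierarchy
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, text_ids, image_feats, type_ids, summary_ids):
        # text_ids:    (B, E)       one representative token id per UI element
        # image_feats: (B, E, 2048) visual features cropped per element
        # type_ids:    (B, E)       categorical element type
        # summary_ids: (B, T)       target summary tokens (teacher forcing)
        B, E = text_ids.shape
        pos = torch.arange(E, device=text_ids.device).unsqueeze(0).expand(B, E)
        # Fuse the modalities per element by summing their embeddings.
        fused = (self.token_embed(text_ids) + self.image_proj(image_feats)
                 + self.type_embed(type_ids) + self.pos_embed(pos))
        memory = self.encoder(fused)                             # multimodal screen encoding
        T = summary_ids.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=summary_ids.device), diagonal=1)
        hidden = self.decoder(self.token_embed(summary_ids), memory, tgt_mask=causal)
        return self.out(hidden)                                  # (B, T, vocab_size) logits


# Smoke test with random inputs: a batch of 2 screens, 16 elements each,
# decoding 8 summary tokens.
model = ScreenSummarizer()
logits = model(torch.randint(0, 10_000, (2, 16)),
               torch.randn(2, 16, 2048),
               torch.randint(0, 32, (2, 16)),
               torch.randint(0, 10_000, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 10000])
```

A real system would likely tokenize all element text rather than use one token per element, and could weight the modalities with learned attention instead of simple addition; the sketch only shows how text, image, and structure can feed a single encoder-decoder summarizer.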
