Datasets:

Modalities: Image, Text
Formats: parquet
ArXiv: 2404.06369
Tags: code
Libraries: Datasets, Dask
License: cc-by-4.0
starmage520 committed on
Commit a91ef07 · verified · 1 Parent(s): 6364f42

Update README.md

Files changed (1)
  1. README.md +18 -23
README.md CHANGED
@@ -1,26 +1,21 @@
- ---
- license: mit
- size_categories:
- - 100M<n<1B
- task_categories:
- - image-to-text
- pretty_name: vision2ui
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/*.parquet
- tags:
- - code
- ---
- VISION2UI: A Real-World Dataset with Layout for Code Generation from UI Designs
- > Automatically generating UI code from webpage design visions can significantly alleviate the burden of developers, enabling beginner developers or designers to directly generate Web pages from design diagrams. Currently, prior research has accomplished the objective of generating UI code from rudimentary design visions or sketches through designing deep neural networks. Inspired by the groundbreaking advancements achieved by Multimodal Large Language Models (MLLMs), the automatic generation of UI code from high-fidelity design images is now emerging as a viable possibility. Nevertheless, our investigation reveals that existing MLLMs are hampered by the scarcity of authentic, high-quality, and large-scale datasets, leading to unsatisfactory performance in automated UI code generation. To mitigate this gap, we present a novel dataset, termed VISION2UI, extracted from real-world scenarios, augmented with comprehensive layout information, tailored specifically for finetuning MLLMs in UI code generation. Specifically, this dataset is derived through a series of operations, encompassing collecting, cleaning, and filtering of the open-source Common Crawl dataset. In order to uphold its quality, a neural scorer trained on labeled samples is utilized to refine the data, retaining higher-quality instances. Ultimately, this process yields a dataset comprising 2,000 (Much more is coming soon) parallel samples encompassing design visions and UI code.


  The paper can be accessed at:
- https://arxiv.org/abs/2404.06369
-
- <span style="color:green;">
- The complete dataset, containing approximately 3.1 million data points, is currently being uploaded in blocks. <br />
- This process may last for about a week, during which the volume of available data will continue to increase.
- </span>

+ ---
+ license: cc-by-4.0
+ size_categories:
+ - n>1T
+ task_categories:
+ - image-to-text
+ pretty_name: vision2ui
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/*.parquet
+ tags:
+ - code
+ ---
+ Vision2UI: A Real-World Dataset for Code Generation from UI Designs with Layouts
+ > Automatically generating webpage code from User Interface (UI) design images can significantly reduce the workload of front-end developers, and Multimodal Large Language Models (MLLMs) have demonstrated promising potential in this area. However, our investigation reveals that existing MLLMs are limited by the lack of authentic, high-quality, and large-scale datasets, leading to suboptimal performance in automated UI code generation. To mitigate this gap, we introduce a novel dataset, Vision2UI, derived from real-world scenarios and enriched with comprehensive layout information, specifically designed to finetune MLLMs for UI code generation. This dataset is created through a meticulous process involving the collection, cleaning, and refining of the open-source Common Crawl dataset. To ensure high quality, a neural scorer trained on manually annotated samples is employed to refine the data, retaining only the highest-quality instances. As a result, we obtain a high-quality dataset comprising over three million parallel samples that include UI design images, webpage code, and layout information. To validate the effectiveness of our proposed dataset, we establish a benchmark and introduce a baseline model based on the Vision Transformer (ViT), named UICoder. Additionally, we introduce a new metric, TreeBLEU, designed to evaluate the structural similarity between generated webpages and their corresponding ground truth in source code. Experimental results demonstrate that our dataset significantly improves the capability of MLLMs in learning code generation from UI design images. The code and dataset are publicly available at our project homepage: https://vision2ui.github.io.


  The paper can be accessed at:
+ https://arxiv.org/abs/2404.06369
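For quick reference, the updated card's default config points the train split at data/*.parquet, so the data can be read with the Datasets library listed in the page metadata. Below is a minimal loading sketch; the repository id is a placeholder and should be replaced with the dataset's actual Hub id.

```python
# Minimal sketch, not part of the dataset card: reads the default config
# (train split backed by data/*.parquet) with the Hugging Face Datasets library.
from datasets import load_dataset

# Placeholder repo id; use the dataset's actual Hub id.
REPO_ID = "starmage520/vision2ui"

# streaming=True avoids downloading the full multi-million-sample dataset up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

sample = next(iter(ds))
print(sample.keys())  # inspect the available fields (design image, webpage code, layout, ...)
```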
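The updated card also mentions TreeBLEU, a metric for the structural similarity between a generated webpage and its ground truth. The card does not spell out the formula, so the sketch below assumes one plausible reading: the fraction of the reference DOM's 1-height subtrees (a parent tag plus its ordered child tags) that reappear in the generated DOM. It is for illustration only, and the names `_TreeBuilder` and `tree_similarity` are hypothetical, not the authors' implementation.

```python
# Hypothetical illustration only: assumes a TreeBLEU-style score over 1-height
# DOM subtrees, i.e. (parent_tag, (child_tags...)) pairs, using only the stdlib.
from html.parser import HTMLParser


class _TreeBuilder(HTMLParser):
    """Collects (parent_tag, (child_tags...)) pairs from an HTML document."""

    def __init__(self):
        super().__init__()
        self.stack = []      # currently open tags, each paired with its child-tag list
        self.subtrees = []   # collected 1-height subtrees

    def handle_starttag(self, tag, attrs):
        if self.stack:
            self.stack[-1][1].append(tag)   # record this tag as a child of the open parent
        self.stack.append((tag, []))

    def handle_endtag(self, tag):
        # Close elements until the matching start tag is found (tolerates sloppy HTML).
        while self.stack:
            parent, children = self.stack.pop()
            self.subtrees.append((parent, tuple(children)))
            if parent == tag:
                break


def tree_similarity(generated_html: str, reference_html: str) -> float:
    """Fraction of the reference's unique 1-height subtrees that also occur in the
    generated page (0.0 to 1.0). An assumed reading of TreeBLEU, not the paper's code."""
    gen, ref = _TreeBuilder(), _TreeBuilder()
    gen.feed(generated_html)
    ref.feed(reference_html)
    ref_set = set(ref.subtrees)
    if not ref_set:
        return 0.0
    return len(ref_set & set(gen.subtrees)) / len(ref_set)


# Identical structures score 1.0; missing or extra elements lower the score.
print(tree_similarity("<div><p></p><a></a></div>", "<div><p></p><a></a></div>"))
```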