Update README.md
README.md CHANGED
@@ -1,26 +1,21 @@
----
-license:
-size_categories:
--
-task_categories:
-- image-to-text
-pretty_name: vision2ui
-configs:
-
-
-
-
-tags:
-- code
----
-
-> Automatically generating
+---
+license: cc-by-4.0
+size_categories:
+- n>1T
+task_categories:
+- image-to-text
+pretty_name: vision2ui
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/*.parquet
+tags:
+- code
+---
+Vision2UI: A Real-World Dataset for Code Generation from UI Designs with Layouts
+> Automatically generating webpage code from User Interface (UI) design images can significantly reduce the workload of front-end developers, and Multimodal Large Language Models (MLLMs) have demonstrated promising potential in this area. However, our investigation reveals that existing MLLMs are limited by the lack of authentic, high-quality, and large-scale datasets, leading to suboptimal performance in automated UI code generation. To mitigate this gap, we introduce a novel dataset, Vision2UI, derived from real-world scenarios and enriched with comprehensive layout information, specifically designed to finetune MLLMs for UI code generation. This dataset is created through a meticulous process involving the collection, cleaning, and refining of the open-source Common Crawl dataset. To ensure high quality, a neural scorer trained on manually annotated samples is employed to refine the data, retaining only the highest-quality instances. As a result, we obtain a high-quality dataset comprising over three million parallel samples that include UI design images, webpage code, and layout information. To validate the effectiveness of our proposed dataset, we establish a benchmark and introduce a baseline model based on the Vision Transformer (ViT), named UICoder. Additionally, we introduce a new metric, TreeBLEU, designed to evaluate the structural similarity between generated webpages and their corresponding ground truth in source code. Experimental results demonstrate that our dataset significantly improves the capability of MLLMs in learning code generation from UI design images. The code and dataset are publicly available at our project homepage: https://vision2ui.github.io.
 
 
 The paper can be accessed at:
-https://arxiv.org/abs/2404.06369
-
-<span style="color:green;">
-The complete dataset, containing approximately 3.1 million data points, is currently being uploaded in blocks. <br />
-This process may last for about a week, during which the volume of available data will continue to increase.
-</span>
+https://arxiv.org/abs/2404.06369
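
The updated `configs` block above declares a single `default` config whose `train` split is read from `data/*.parquet`. As a rough illustration of what that metadata enables, the sketch below streams the split with the `datasets` library. The repository id used here is only a placeholder, since the card itself does not state the Hub path, and streaming is chosen because the full corpus is reported at roughly 3.1 million samples.

```python
# Minimal sketch, assuming a placeholder repository id; the card above only
# defines the config/split layout, not the Hub path.
from itertools import islice

from datasets import load_dataset  # pip install datasets

ds = load_dataset(
    "ORG_NAME/vision2ui",  # hypothetical repo id -- replace with the actual dataset id
    name="default",        # config_name declared in the YAML front matter
    split="train",         # the only split declared (path: data/*.parquet)
    streaming=True,        # iterate lazily instead of downloading ~3.1M rows up front
)

# Peek at the first few records without materializing the whole split.
for example in islice(ds, 3):
    print(sorted(example.keys()))
```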
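The abstract also mentions TreeBLEU, a metric for the structural similarity between a generated webpage and its ground truth. The paper pins down the exact definition; the sketch below is only one plausible reading of such a tree-based score, assuming it compares 1-height subtrees (a parent tag together with its ordered child tags) of the two HTML tag trees, and should not be taken as the paper's formulation.

```python
# Illustrative sketch only (stdlib html.parser): one plausible reading of a
# tree-structure similarity score over HTML tag trees. The actual TreeBLEU
# definition is given in the paper and may differ from this assumption.
from html.parser import HTMLParser

VOID_TAGS = {"br", "hr", "img", "input", "meta", "link", "source"}


class TagTreeBuilder(HTMLParser):
    """Builds a nested (tag, children) tree from the tag structure of an HTML string."""

    def __init__(self):
        super().__init__()
        self.root = ("root", [])
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = (tag, [])
        self.stack[-1][1].append(node)
        if tag not in VOID_TAGS:  # void elements never receive children
            self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1 and self.stack[-1][0] == tag:
            self.stack.pop()


def one_height_subtrees(node):
    """Collect every (parent tag, ordered child tags) pair in the tree."""
    tag, children = node
    found = [(tag, tuple(child[0] for child in children))] if children else []
    for child in children:
        found.extend(one_height_subtrees(child))
    return found


def tree_similarity(candidate_html, reference_html):
    """Fraction of the reference's 1-height subtrees that also occur in the candidate."""
    cand_builder, ref_builder = TagTreeBuilder(), TagTreeBuilder()
    cand_builder.feed(candidate_html)
    ref_builder.feed(reference_html)
    cand = set(one_height_subtrees(cand_builder.root))
    ref = set(one_height_subtrees(ref_builder.root))
    return len(cand & ref) / len(ref) if ref else 0.0


# Identical tag structure scores 1.0; a missing <span> lowers the score to 0.5.
print(tree_similarity("<div><p></p><span></span></div>",
                      "<div><p></p><span></span></div>"))
print(tree_similarity("<div><p></p></div>",
                      "<div><p></p><span></span></div>"))
```

Whether subtrees are counted as a set or a multiset, and whether child order matters, are details the paper specifies; this sketch treats them as an ordered set purely for illustration.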