Modalities: Image, Text · Formats: parquet · Tags: code · Libraries: Datasets, Dask
starmage520 committed f53cd4a (verified · 1 parent: 12a30cb)

Update README.md

Files changed (1): README.md (+6 −6)
README.md CHANGED
@@ -4,7 +4,7 @@ size_categories:
 - n>1T
 task_categories:
 - image-to-text
-pretty_name: vision2ui (WebCode2M)
+pretty_name: WebCode2M
 configs:
 - config_name: default
 data_files:
@@ -13,10 +13,9 @@ configs:
 tags:
 - code
 ---
-Vision2UI: A Real-World Dataset for Code Generation from Webpage Designs with Layouts
-
-(This dataset is also called **WebCode2M**.)
-> Automatically generating webpage code from User Interface (UI) design images can significantly reduce the workload of front-end developers, and Multimodal Large Language Models (MLLMs) have demonstrated promising potential in this area. However, our investigation reveals that existing MLLMs are limited by the lack of authentic, high-quality, and large-scale datasets, leading to suboptimal performance in automated UI code generation. To mitigate this gap, we introduce a novel dataset, Vision2UI, derived from real-world scenarios and enriched with comprehensive layout information, specifically designed to finetune MLLMs for UI code generation. This dataset is created through a meticulous process involving the collection, cleaning, and refining of the open-source Common Crawl dataset. To ensure high quality, a neural scorer trained on manually annotated samples is employed to refine the data, retaining only the highest-quality instances. As a result, we obtain a high-quality dataset comprising over three million parallel samples that include UI design images, webpage code, and layout information. To validate the effectiveness of our proposed dataset, we establish a benchmark and introduce a baseline model based on the Vision Transformer (ViT), named UICoder. Additionally, we introduce a new metric, TreeBLEU, designed to evaluate the structural similarity between generated webpages and their corresponding ground truth in source code. Experimental results demonstrate that our dataset significantly improves the capability of MLLMs in learning code generation from UI design images.
+[WebCode2M: A Real-World Dataset for Code Generation from Webpage Designs with Layouts](https://arxiv.org/pdf/2404.06369)
+(This dataset is also called **Vision2UI**.)
+> Automatically generating webpage code from webpage designs can significantly reduce the workload of front-end developers, and recent Multimodal Large Language Models (MLLMs) have shown promising potential in this area. However, our investigation reveals that most existing MLLMs are constrained by the absence of high-quality, large-scale, real-world datasets, resulting in inadequate performance in automated webpage code generation. To fill this gap, this paper introduces WebCode2M, a new dataset comprising 2.56 million instances, each containing a design image along with the corresponding webpage code and layout details. Sourced from real-world web resources, WebCode2M offers a rich and valuable dataset for webpage code generation across a variety of applications. The dataset quality is ensured by a scoring model that filters out instances with aesthetic deficiencies or other incomplete elements. To validate the effectiveness of WebCode2M, we introduce a baseline model based on the Vision Transformer (ViT), named WebCoder, and establish a benchmark for fair comparison. Additionally, we introduce a new metric, TreeBLEU, to measure the structural hierarchy recall. The benchmarking results demonstrate that our dataset significantly improves the ability of MLLMs to generate code from webpage designs, confirming its effectiveness and usability for future applications in front-end design tools. Finally, we highlight several practical challenges introduced by our dataset, calling for further research.
 
 
 Features:
@@ -29,4 +28,5 @@ Features:
 - `score`: the score is obtained by the neural scorer proposed in the paper.
 - `hash`: the hash code of the image object.
 
-**Warning**: This dataset is sourced from the internet and, despite filtering efforts, may still contain a small amount of inappropriate content, such as explicit material or violence. Users should exercise caution.
+**Warning**: This dataset is sourced from the internet and, despite filtering efforts, may still contain a small amount of inappropriate content, such as explicit material or violence. Users should exercise caution.
+An enhanced version—with most inappropriate content filtered out—is available at **[xcodemind/webcode2m_purified](https://huggingface.co/datasets/xcodemind/webcode2m_purified)**.
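The updated abstract introduces TreeBLEU as a measure of "structural hierarchy recall" between a generated page and its reference. The paper's exact formulation is not reproduced on this page; as a rough illustration only, the sketch below assumes TreeBLEU counts the fraction of 1-depth subtrees (a tag together with its ordered child tags) of the reference DOM that also occur in the generated DOM. The name `tree_bleu` and the subtree definition are assumptions for this sketch, not the paper's definition.

```python
from html.parser import HTMLParser


class TreeBuilder(HTMLParser):
    """Builds a simple (tag, children) tree from HTML markup."""

    def __init__(self):
        super().__init__()
        self.root = ("root", [])
        self._stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = (tag, [])
        self._stack[-1][1].append(node)
        self._stack.append(node)

    def handle_endtag(self, tag):
        if len(self._stack) > 1:
            self._stack.pop()


def subtrees(node, acc):
    """Collect all 1-depth subtrees as (parent_tag, ordered_child_tags)."""
    tag, children = node
    if children:
        acc.add((tag, tuple(c[0] for c in children)))
        for child in children:
            subtrees(child, acc)
    return acc


def tree_bleu(generated_html, reference_html):
    """Fraction of the reference's 1-depth subtrees found in the generation."""
    def parse(html):
        builder = TreeBuilder()
        builder.feed(html)
        return builder.root

    gen = subtrees(parse(generated_html), set())
    ref = subtrees(parse(reference_html), set())
    if not ref:
        return 1.0 if not gen else 0.0
    return len(gen & ref) / len(ref)
```

For example, `tree_bleu("<div></div>", "<div><p></p><span></span></div>")` yields 0.5 under this sketch: the generation recovers the `root→div` subtree but misses `div→(p, span)`.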