---
license: cc-by-4.0
size_categories:
- n>1T
task_categories:
- image-to-text
pretty_name: WebCode2M
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.parquet
tags:
- code
---

[WebCode2M: A Real-World Dataset for Code Generation from Webpage Designs with Layouts](https://arxiv.org/pdf/2404.06369)

(This dataset is also called **Vision2UI**.)

> Automatically generating webpage code from webpage designs can significantly reduce the workload of front-end developers, and recent Multimodal Large Language Models (MLLMs) have shown promising potential in this area. However, our investigation reveals that most existing MLLMs are constrained by the absence of high-quality, large-scale, real-world datasets, resulting in inadequate performance in automated webpage code generation. To fill this gap, this paper introduces WebCode2M, a new dataset comprising 2.56 million instances, each containing a design image along with the corresponding webpage code and layout details. Sourced from real-world web resources, WebCode2M offers a rich and valuable dataset for webpage code generation across a variety of applications. The dataset quality is ensured by a scoring model that filters out instances with aesthetic deficiencies or other incomplete elements. To validate the effectiveness of WebCode2M, we introduce a baseline model based on the Vision Transformer (ViT), named WebCoder, and establish a benchmark for fair comparison. Additionally, we introduce a new metric, TreeBLEU, to measure structural hierarchy recall. The benchmarking results demonstrate that our dataset significantly improves the ability of MLLMs to generate code from webpage designs, confirming its effectiveness and usability for future applications in front-end design tools. Finally, we highlight several practical challenges introduced by our dataset, calling for further research.

Features:
- `image`: the screenshot of the webpage.
- `bbox`: the layout information, i.e., the bounding boxes (bbox) of all elements in the webpage, including their size, position, and hierarchy.
- `text`: the webpage code, including the HTML/CSS source.
- `scale`: the scale of the screenshot, in the format [width, height].
- `lang`: the main language of the text content displayed on the rendered page (excluding the HTML/CSS code). It is detected by a widely used [model](https://huggingface.co/papluca/xlm-roberta-base-language-detection) on Hugging Face, which achieved very high accuracy on its evaluation set. Currently, the following 20 languages are supported: Arabic (ar), Bulgarian (bg), German (de), Modern Greek (el), English (en), Spanish (es), French (fr), Hindi (hi), Italian (it), Japanese (ja), Dutch (nl), Polish (pl), Portuguese (pt), Russian (ru), Swahili (sw), Thai (th), Turkish (tr), Urdu (ur), Vietnamese (vi), and Chinese (zh).
- `tokens`: the token counts of the HTML and CSS code, in the format [CSS length, HTML length]. The tokens are produced by the [GPT-2 tokenizer](https://huggingface.co/openai-community/gpt2).
- `score`: the quality score assigned by the neural scorer proposed in the paper.
- `hash`: the hash code of the image object.

**Warning**: This dataset is sourced from the internet and, despite filtering efforts, may still contain a small amount of inappropriate content, such as explicit material or violence. Users should exercise caution. An enhanced version, with most inappropriate content filtered out, is available at **[xcodemind/webcode2m_purified](https://huggingface.co/datasets/xcodemind/webcode2m_purified)**.
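The metadata fields above (`lang`, `score`, `tokens`) can be used to filter the corpus down to a working subset. A minimal sketch, assuming records are plain dicts keyed by the field names in this card; the `keep_record` helper and its thresholds are illustrative, not part of the dataset:

```python
# Hypothetical filter over WebCode2M records. Field names (`lang`,
# `score`, `tokens`) follow this dataset card; the repo id and all
# thresholds below are illustrative assumptions.

def keep_record(rec, min_score=0.5, langs=("en",), max_html_tokens=4096):
    """Return True if a record passes simple language/quality filters."""
    css_tokens, html_tokens = rec["tokens"]  # format: [CSS length, HTML length]
    return (
        rec["lang"] in langs
        and rec["score"] >= min_score
        and html_tokens <= max_html_tokens
    )

# In practice records would be streamed from the hub, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("xcodemind/webcode2m", split="train", streaming=True)
#   filtered = (r for r in ds if keep_record(r))
sample = {"lang": "en", "score": 0.8, "tokens": [1200, 3000]}
print(keep_record(sample))
```

Streaming mode is advisable here because the corpus is far too large to materialize locally; the generator expression keeps filtering lazy.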