|
---
dataset_info:
- config_name: css
  features:
  - name: structure
    dtype: string
  - name: text
    dtype: string
  - name: image
    dtype: image
  - name: download_url
    dtype: string
  - name: instance_name
    dtype: string
  - name: date
    dtype: string
  - name: additional_info
    dtype: string
  - name: date_scrapped
    dtype: string
  - name: file_filters
    dtype: string
  - name: compilation_info
    dtype: string
  - name: rendering_filters
    dtype: string
  - name: assets
    sequence: string
  - name: category
    dtype: string
  - name: uuid
    dtype: string
  - name: length
    dtype: string
  - name: difficulty
    dtype: string
  splits:
  - name: validation
    num_bytes: 815105541.0
    num_examples: 300
  download_size: 809865478
  dataset_size: 815105541.0
- config_name: html
  features:
  - name: structure
    dtype: string
  - name: text
    dtype: string
  - name: image
    dtype: image
  - name: download_url
    dtype: string
  - name: instance_name
    dtype: string
  - name: date
    dtype: string
  - name: additional_info
    dtype: string
  - name: date_scrapped
    dtype: string
  - name: file_filters
    dtype: string
  - name: compilation_info
    dtype: string
  - name: rendering_filters
    dtype: string
  - name: assets
    sequence: string
  - name: category
    dtype: string
  - name: uuid
    dtype: string
  - name: length
    dtype: string
  - name: difficulty
    dtype: string
  splits:
  - name: validation
    num_bytes: 263470560.0
    num_examples: 300
  download_size: 257833986
  dataset_size: 263470560.0
- config_name: javascript
  features:
  - name: structure
    dtype: string
  - name: text
    dtype: string
  - name: image
    dtype: image
  - name: download_url
    dtype: string
  - name: instance_name
    dtype: string
  - name: date
    dtype: string
  - name: additional_info
    dtype: string
  - name: date_scrapped
    dtype: string
  - name: file_filters
    dtype: string
  - name: compilation_info
    dtype: string
  - name: rendering_filters
    dtype: string
  - name: assets
    sequence: string
  - name: category
    dtype: string
  - name: uuid
    dtype: string
  - name: length
    dtype: string
  - name: difficulty
    dtype: string
  splits:
  - name: validation
    num_bytes: 279510653.0
    num_examples: 300
  download_size: 273214540
  dataset_size: 279510653.0
- config_name: wild
  features:
  - name: image
    dtype: image
  - name: additional_info
    dtype: string
  - name: assets
    sequence: string
  - name: category
    dtype: string
  - name: uuid
    dtype: string
  - name: difficulty
    dtype: string
  splits:
  - name: validation
    num_bytes: 335841.0
    num_examples: 2
  download_size: 333134
  dataset_size: 335841.0
- config_name: wild_legacy
  features:
  - name: structure
    dtype: string
  - name: image
    dtype: image
  - name: url
    dtype: string
  - name: instance_name
    dtype: string
  - name: date_scrapped
    dtype: string
  - name: uuid
    dtype: string
  - name: category
    dtype: string
  - name: additional_info
    dtype: string
  - name: assets
    sequence: string
  - name: difficulty
    dtype: string
  splits:
  - name: validation
    num_bytes: 99236852.0
    num_examples: 50
  download_size: 99142716
  dataset_size: 99236852.0
configs:
- config_name: css
  data_files:
  - split: validation
    path: css/validation-*
- config_name: html
  data_files:
  - split: validation
    path: html/validation-*
- config_name: javascript
  data_files:
  - split: validation
    path: javascript/validation-*
- config_name: wild
  data_files:
  - split: validation
    path: wild/validation-*
- config_name: wild_legacy
  data_files:
  - split: validation
    path: wild_legacy/validation-*
---
|
|
|
# Image2Struct - Webpage |
|
[Paper](TODO) | [Website](https://crfm.stanford.edu/helm/image2structure/latest/) | Datasets ([Webpages](https://huggingface.co/datasets/stanford-crfm/i2s-webpage), [Latex](https://huggingface.co/datasets/stanford-crfm/i2s-latex), [Music sheets](https://huggingface.co/datasets/stanford-crfm/i2s-musicsheet)) | [Leaderboard](https://crfm.stanford.edu/helm/image2structure/latest/#/leaderboard) | [HELM repo](https://github.com/stanford-crfm/helm) | [Image2Struct repo](https://github.com/stanford-crfm/image2structure) |
|
|
|
**License:** [Apache License](http://www.apache.org/licenses/) Version 2.0, January 2004 |
|
|
|
|
|
## Dataset description |
|
Image2Struct is a benchmark for evaluating vision-language models on the practical task of extracting structured information from images.

This subdataset focuses on webpages. The model is given an image of the expected output along with the following prompt:
|
``` |
|
Please generate the source code to generate a webpage that looks like this image as much as feasibly possible.
You should output a json object associating each file name with its content.

Here is a simple example of the expected structure (that does not correspond to the image).
In this example, 3 files are created: index.html, style.css and script.js.
[
    {
        "filename": "index.html",
        "content": "<!DOCTYPE html>\\n<html>\\n<head>\\n<title>Title of the document</title>\\n</head>\\n<body>\\n\\n<p>Content of the document......</p>\\n\\n</body>\\n</html>"
    },
    {
        "filename": "style.css",
        "content": "body {\\n background-color: lightblue;\\n}\\nh1 {\\n color: white;\\n text-align: center;\\n}"
    },
    {
        "filename": "script.js",
        "content": "document.getElementById(\\"demo\\").innerHTML = \\"Hello JavaScript!\\";"
    }
]
You do not have to create files with the same names. Create as many files as you need, you can even use directories if necessary,
they will be created for you automatically. Try to write some realistic code keeping in mind that it should
look like the image as much as feasibly possible.
|
``` |
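To illustrate how a response in this format might be consumed, here is a minimal sketch (not part of the official evaluation pipeline) that parses a JSON reply following the structure above and writes each file to disk. The `response` string and the `output/` directory are hypothetical placeholders.

```python
import json
from pathlib import Path

# Hypothetical model reply following the format described in the prompt above.
response = """[
  {"filename": "index.html", "content": "<!DOCTYPE html><html><body><p>Hello</p></body></html>"},
  {"filename": "style.css", "content": "body { background-color: lightblue; }"}
]"""

output_dir = Path("output")  # assumed destination directory
for entry in json.loads(response):
    path = output_dir / entry["filename"]
    path.parent.mkdir(parents=True, exist_ok=True)  # subdirectories are created as needed
    path.write_text(entry["content"])
```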
|
|
|
The dataset is divided into 4 categories. Three of them are collected automatically using the [Image2Struct repo](https://github.com/stanford-crfm/image2structure): the webpages are scraped from GitHub Pages (.github.io) and split into 3 groups according to the main language of the repository:

* html

* css

* javascript



The last category, **wild**, was collected by taking screenshots of popular websites; the full list is available at the end of this document.
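The available configurations can also be listed programmatically with the `datasets` library; a minimal sketch using the repository name from this card:

```python
import datasets

# Lists the configurations of this dataset: css, html, javascript, wild, wild_legacy.
configs = datasets.get_dataset_config_names("stanford-crfm/i2s-webpage")
print(configs)
```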
|
|
|
|
|
## Uses |
|
|
|
To load the `html` subset of the dataset (i.e. the instances sent to the model under evaluation) in Python:
|
|
|
```python |
|
import datasets |
|
dataset = datasets.load_dataset("stanford-crfm/i2s-webpage", "html", split="validation")
|
``` |
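Once loaded, each instance exposes the fields listed in the metadata above. A minimal sketch of inspecting the first example follows; the field names are those of the `html` config, and `example.png` is an arbitrary output path.

```python
import datasets

dataset = datasets.load_dataset("stanford-crfm/i2s-webpage", "html", split="validation")
example = dataset[0]

# Each instance carries the fields declared in the dataset metadata.
print(example["category"], example["difficulty"], example["instance_name"])

# The "image" field is decoded as a PIL image and can be saved to disk.
example["image"].save("example.png")
```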
|
|
|
|
|
To evaluate a model on Image2Webpage (html) using [HELM](https://github.com/stanford-crfm/helm/), run the following commands:
|
|
|
```sh |
|
pip install crfm-helm |
|
helm-run --run-entries image2webpage:subset=html,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10 |
|
``` |
|
|
|
You can also run the evaluation on a specific `subset` and `difficulty` only:
|
```sh |
|
helm-run --run-entries image2webpage:subset=html,difficulty=hard,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10 |
|
``` |
|
|
|
For more information on running Image2Struct using [HELM](https://github.com/stanford-crfm/helm/), refer to the [HELM documentation](https://crfm-helm.readthedocs.io/) and the article on [reproducing leaderboards](https://crfm-helm.readthedocs.io/en/latest/reproducing_leaderboards/). |
|
|
|
## Citation |
|
|
|
**BibTeX:** |
|
|
|
```tex |
|
@misc{roberts2024image2struct, |
|
title={Image2Struct: A Benchmark for Evaluating Vision-Language Models in Extracting Structured Information from Images}, |
|
author={Josselin Somerville Roberts and Tony Lee and Chi Heem Wong and Michihiro Yasunaga and Yifan Mai and Percy Liang}, |
|
year={2024}, |
|
eprint={TBD}, |
|
archivePrefix={arXiv}, |
|
primaryClass={TBD} |
|
} |
|
``` |
|
|
|
## List of websites used for wild subset |
|
``` |
|
[ |
|
"https://www.nytimes.com", |
|
"https://www.bbc.com", |
|
"https://www.wikipedia.org", |
|
"https://www.github.com", |
|
"https://www.reddit.com", |
|
"https://www.twitter.com", |
|
"https://www.facebook.com", |
|
"https://www.instagram.com", |
|
"https://www.linkedin.com", |
|
"https://www.youtube.com", |
|
"https://www.amazon.com", |
|
"https://www.apple.com", |
|
"https://www.microsoft.com", |
|
"https://www.ibm.com", |
|
"https://www.google.com", |
|
"https://www.yahoo.com", |
|
"https://www.bing.com", |
|
"https://www.duckduckgo.com", |
|
"https://www.netflix.com", |
|
"https://www.hulu.com", |
|
"https://www.disneyplus.com", |
|
"https://www.imdb.com", |
|
"https://www.metacritic.com", |
|
"https://www.rottentomatoes.com", |
|
"https://www.nationalgeographic.com", |
|
"https://www.nasa.gov", |
|
"https://www.cnn.com", |
|
"https://www.foxnews.com", |
|
"https://www.bloomberg.com", |
|
"https://www.cnbc.com", |
|
"https://www.forbes.com", |
|
"https://www.businessinsider.com", |
|
"https://www.techcrunch.com", |
|
"https://www.engadget.com", |
|
"https://www.arstechnica.com", |
|
"https://www.lifehacker.com", |
|
"https://www.theguardian.com", |
|
"https://www.independent.co.uk", |
|
"https://www.buzzfeed.com", |
|
"https://www.vox.com", |
|
"https://www.theverge.com", |
|
"https://www.wired.com", |
|
"https://www.polygon.com", |
|
"https://www.gamespot.com", |
|
"https://www.kotaku.com", |
|
"https://www.twitch.tv", |
|
"https://www.netflix.com", |
|
"https://www.hbo.com", |
|
"https://www.showtime.com", |
|
"https://www.cbs.com", |
|
"https://www.abc.com", |
|
"https://www.nbc.com", |
|
"https://www.criterion.com", |
|
"https://www.imdb.com", |
|
"https://www.rottentomatoes.com", |
|
"https://www.metacritic.com", |
|
"https://www.pitchfork.com", |
|
"https://www.billboard.com", |
|
"https://www.rollingstone.com", |
|
"https://www.npr.org", |
|
"https://www.bbc.co.uk", |
|
"https://www.thetimes.co.uk", |
|
"https://www.telegraph.co.uk", |
|
"https://www.guardian.co.uk", |
|
"https://www.independent.co.uk", |
|
"https://www.economist.com", |
|
"https://www.ft.com", |
|
"https://www.wsj.com", |
|
"https://www.nature.com", |
|
"https://www.scientificamerican.com", |
|
"https://www.newscientist.com", |
|
"https://www.sciencedaily.com", |
|
"https://www.space.com", |
|
"https://www.livescience.com", |
|
"https://www.popsci.com", |
|
"https://www.healthline.com", |
|
"https://www.webmd.com", |
|
"https://www.mayoclinic.org", |
|
"https://www.nih.gov", |
|
"https://www.cdc.gov", |
|
"https://www.who.int", |
|
"https://www.un.org", |
|
"https://www.nationalgeographic.com", |
|
"https://www.worldreallife.org", |
|
"https://www.greenpeace.org", |
|
"https://www.nrdc.org", |
|
"https://www.sierraclub.org", |
|
"https://www.amnesty.org", |
|
"https://www.hrw.org", |
|
"https://www.icrc.org", |
|
"https://www.redcross.org", |
|
"https://www.unicef.org", |
|
"https://www.savethechildren.org", |
|
"https://www.doctorswithoutborders.org", |
|
"https://www.wikimedia.org", |
|
"https://www.archive.org", |
|
"https://www.opendemocracy.net", |
|
"https://www.projectgutenberg.org", |
|
"https://www.khanacademy.org", |
|
"https://www.codecademy.com", |
|
] |
|
``` |