JosselinSom committed on
Commit f6435a9 · verified · 1 Parent(s): 5f8a937

Update README.md

Files changed (1)
  1. README.md +188 -32
README.md CHANGED
@@ -120,34 +120,6 @@ dataset_info:
      num_examples: 300
    download_size: 273214540
    dataset_size: 279510653.0
- - config_name: real
-   features:
-   - name: structure
-     dtype: string
-   - name: image
-     dtype: image
-   - name: url
-     dtype: string
-   - name: instance_name
-     dtype: string
-   - name: date_scrapped
-     dtype: string
-   - name: uuid
-     dtype: string
-   - name: category
-     dtype: string
-   - name: additional_info
-     dtype: string
-   - name: assets
-     sequence: string
-   - name: difficulty
-     dtype: string
-   splits:
-   - name: validation
-     num_bytes: 99236502.0
-     num_examples: 50
-   download_size: 99146849
-   dataset_size: 99236502.0
  - config_name: wild
    features:
    - name: structure
@@ -189,12 +161,196 @@ configs:
    data_files:
    - split: validation
      path: javascript/validation-*
- - config_name: real
-   data_files:
-   - split: validation
-     path: real/validation-*
  - config_name: wild
    data_files:
    - split: validation
      path: wild/validation-*
  ---

# Image2Struct - Webpage
[Paper](TODO) | [Website](https://crfm.stanford.edu/helm/image2structure/latest/) | Datasets ([Webpages](https://huggingface.co/datasets/stanford-crfm/i2s-webpage), [Latex](https://huggingface.co/datasets/stanford-crfm/i2s-latex), [Music sheets](https://huggingface.co/datasets/stanford-crfm/i2s-musicsheet)) | [Leaderboard](https://crfm.stanford.edu/helm/image2structure/latest/#/leaderboard) | [HELM repo](https://github.com/stanford-crfm/helm) | [Image2Struct repo](https://github.com/stanford-crfm/image2structure)

**License:** [Apache License](http://www.apache.org/licenses/) Version 2.0, January 2004


## Dataset description
Image2Struct is a benchmark for evaluating vision-language models on the practical task of extracting structured information from images.
This subdataset focuses on webpages. The model is given an image of the expected output along with the following prompt:
```
Please generate the source code to generate a webpage that looks like this image as much as feasibly possible.
You should output a json object associating each file name with its content.

Here is a simple example of the expected structure (that does not correspond to the image).
In this example, 3 files are created: index.html, style.css and script.js.
[
    {
        "filename": "index.html",
        "content": "<!DOCTYPE html>\\n<html>\\n<head>\\n<title>Title of the document</title>\\n</head>\\n<body>\\n\\n<p>Content of the document......</p>\\n\\n</body>\\n</html>"
    },
    {
        "filename": "style.css",
        "content": "body {\\n background-color: lightblue;\\n}\\nh1 {\\n color: white;\\n text-align: center;\\n}"
    },
    {
        "filename": "script.js",
        "content": "document.getElementById(\\"demo\\").innerHTML = \\"Hello JavaScript!\\";"
    }
]
You do not have to create files with the same names. Create as many files as you need, you can even use directories if necessary,
they will be created for you automatically. Try to write some realistic code keeping in mind that it should
look like the image as much as feasibly possible.
```
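
A model answer is therefore expected to be a JSON array in the format requested above. As a minimal sketch (not part of the dataset or the HELM evaluation code) of turning such an answer into files on disk, with an illustrative helper name and output directory:

```python
import json
from pathlib import Path


def write_generated_files(model_output: str, out_dir: str = "generated_page") -> None:
    """Write the files described in a model answer to disk.

    Assumes `model_output` is a JSON array of {"filename": ..., "content": ...}
    objects, as requested by the prompt above.
    """
    for entry in json.loads(model_output):
        path = Path(out_dir) / entry["filename"]
        path.parent.mkdir(parents=True, exist_ok=True)  # nested directories are allowed
        path.write_text(entry["content"], encoding="utf-8")
```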

The dataset is divided into 4 categories. 3 of them are collected automatically using the [Image2Struct repo](https://github.com/stanford-crfm/image2structure): the webpages were collected from GitHub Pages (.github.io) and are split into 3 groups according to the main language of the repository:
* html
* css
* javascript

The last category, **wild**, was collected by taking screenshots of popular websites. The full list is available at the end of this document.

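These four categories correspond to the dataset's configurations. If the `datasets` library is installed, they can be listed programmatically (a small sketch, not required for evaluation):

```python
import datasets

# Expected to list the configurations described above: html, css, javascript and wild.
print(datasets.get_dataset_config_names("stanford-crfm/i2s-webpage"))
```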

## Uses

To load the `html` subset of the dataset (the instances that are sent to the model under evaluation) in Python:

```python
import datasets

dataset = datasets.load_dataset("stanford-crfm/i2s-webpage", "html", split="validation")
```
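
Each instance exposes the fields declared in the dataset metadata above (`structure`, `image`, `url`, `instance_name`, `date_scrapped`, `uuid`, `category`, `additional_info`, `assets`, `difficulty`). A minimal sketch of inspecting one instance, assuming the `html` configuration shares the fields shown for the other configurations:

```python
import datasets

ds = datasets.load_dataset("stanford-crfm/i2s-webpage", "html", split="validation")
example = ds[0]

# The target screenshot is decoded as a PIL image; the remaining fields are metadata strings.
example["image"].save("target.png")
print(example["url"], example["category"], example["difficulty"])
```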


To evaluate a model on Image2Webpage using [HELM](https://github.com/stanford-crfm/helm/), run the following commands:

```sh
pip install crfm-helm
helm-run --run-entries image2webpage:subset=html,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```

You can also run the evaluation for only a specific `subset` and `difficulty`:
```sh
helm-run --run-entries image2webpage:subset=html,difficulty=hard,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```
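
The same `difficulty` selection can also be done locally on the dataset itself; a minimal sketch, assuming `difficulty` takes values such as `"hard"` as in the command above:

```python
import datasets

ds = datasets.load_dataset("stanford-crfm/i2s-webpage", "html", split="validation")

# Keep only the instances labeled with the requested difficulty.
hard_only = ds.filter(lambda example: example["difficulty"] == "hard")
print(f"{len(hard_only)} hard instances out of {len(ds)}")
```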

For more information on running Image2Struct using [HELM](https://github.com/stanford-crfm/helm/), refer to the [HELM documentation](https://crfm-helm.readthedocs.io/) and the article on [reproducing leaderboards](https://crfm-helm.readthedocs.io/en/latest/reproducing_leaderboards/).

## Citation

**BibTeX:**

```tex
@misc{roberts2024image2struct,
    title={Image2Struct: A Benchmark for Evaluating Vision-Language Models in Extracting Structured Information from Images},
    author={Josselin Somerville Roberts and Tony Lee and Chi Heem Wong and Michihiro Yasunaga and Yifan Mai and Percy Liang},
    year={2024},
    eprint={TBD},
    archivePrefix={arXiv},
    primaryClass={TBD}
}
```

## List of websites used for wild subset
```
[
  "https://www.nytimes.com",
  "https://www.bbc.com",
  "https://www.wikipedia.org",
  "https://www.github.com",
  "https://www.reddit.com",
  "https://www.twitter.com",
  "https://www.facebook.com",
  "https://www.instagram.com",
  "https://www.linkedin.com",
  "https://www.youtube.com",
  "https://www.amazon.com",
  "https://www.apple.com",
  "https://www.microsoft.com",
  "https://www.ibm.com",
  "https://www.google.com",
  "https://www.yahoo.com",
  "https://www.bing.com",
  "https://www.duckduckgo.com",
  "https://www.netflix.com",
  "https://www.hulu.com",
  "https://www.disneyplus.com",
  "https://www.imdb.com",
  "https://www.metacritic.com",
  "https://www.rottentomatoes.com",
  "https://www.nationalgeographic.com",
  "https://www.nasa.gov",
  "https://www.cnn.com",
  "https://www.foxnews.com",
  "https://www.bloomberg.com",
  "https://www.cnbc.com",
  "https://www.forbes.com",
  "https://www.businessinsider.com",
  "https://www.techcrunch.com",
  "https://www.engadget.com",
  "https://www.arstechnica.com",
  "https://www.lifehacker.com",
  "https://www.theguardian.com",
  "https://www.independent.co.uk",
  "https://www.buzzfeed.com",
  "https://www.vox.com",
  "https://www.theverge.com",
  "https://www.wired.com",
  "https://www.polygon.com",
  "https://www.gamespot.com",
  "https://www.kotaku.com",
  "https://www.twitch.tv",
  "https://www.netflix.com",
  "https://www.hbo.com",
  "https://www.showtime.com",
  "https://www.cbs.com",
  "https://www.abc.com",
  "https://www.nbc.com",
  "https://www.criterion.com",
  "https://www.imdb.com",
  "https://www.rottentomatoes.com",
  "https://www.metacritic.com",
  "https://www.pitchfork.com",
  "https://www.billboard.com",
  "https://www.rollingstone.com",
  "https://www.npr.org",
  "https://www.bbc.co.uk",
  "https://www.thetimes.co.uk",
  "https://www.telegraph.co.uk",
  "https://www.guardian.co.uk",
  "https://www.independent.co.uk",
  "https://www.economist.com",
  "https://www.ft.com",
  "https://www.wsj.com",
  "https://www.nature.com",
  "https://www.scientificamerican.com",
  "https://www.newscientist.com",
  "https://www.sciencedaily.com",
  "https://www.space.com",
  "https://www.livescience.com",
  "https://www.popsci.com",
  "https://www.healthline.com",
  "https://www.webmd.com",
  "https://www.mayoclinic.org",
  "https://www.nih.gov",
  "https://www.cdc.gov",
  "https://www.who.int",
  "https://www.un.org",
  "https://www.nationalgeographic.com",
  "https://www.worldreallife.org",
  "https://www.greenpeace.org",
  "https://www.nrdc.org",
  "https://www.sierraclub.org",
  "https://www.amnesty.org",
  "https://www.hrw.org",
  "https://www.icrc.org",
  "https://www.redcross.org",
  "https://www.unicef.org",
  "https://www.savethechildren.org",
  "https://www.doctorswithoutborders.org",
  "https://www.wikimedia.org",
  "https://www.archive.org",
  "https://www.opendemocracy.net",
  "https://www.projectgutenberg.org",
  "https://www.khanacademy.org",
  "https://www.codecademy.com"
]
```