Modalities: Image, Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
marc-thibault-h committed on
Commit a5afbba · verified · 1 Parent(s): 2dbb590

Update README.md

Files changed (1): README.md (+12, -6)
README.md CHANGED
@@ -59,18 +59,18 @@ This dataset is intended for benchmarking multimodal models on their ability to
 
 The dataset contains 1,639 samples divided into three key groups:
 
-1. **`agentbrowse` (36%)**: Pages encountered by agents during web retrieval tasks on Web Voyager
+1. **`agentbrowse` (36%)**: Pages encountered by the RunnerH agent while solving web retrieval tasks from [WebVoyager](https://arxiv.org/abs/2401.13919)
 2. **`humanbrowse` (31.8%)**: Pages and elements interacted with by humans performing everyday tasks (e-shopping, trip planning, personal organization)
 3. **`calendars` (32.2%)**: A specialized subset focusing on calendar interfaces, a known challenge for UI understanding models
 
 Each sample consists of:
 - **`image`**: A screenshot of a web page
 - **`instruction`**: A natural language instruction describing the desired action
-- **`bbox`**: Precise coordinates of the bounding box (relative to the image dimensions) that identify the correct click target
+- **`bbox`**: Coordinates of the bounding box (relative to the image dimensions) that identify the correct click target
 - **`bucket`**: One of `agentbrowse`, `humanbrowse`, `calendars`: group this row belongs to
 
 The dataset includes several challenging scenarios:
-- Disambiguation between similar elements (e.g., "the login button in the middle")
+- Disambiguation between similar elements (e.g., "the login button in the middle", "the login button in the top-right")
 - Cases where OCR is insufficient because the visible text isn't the interactive element
 - Navigation requiring understanding of relative spatial relationships between information and interaction points
 
@@ -78,10 +78,14 @@ The dataset includes several challenging scenarios:
 
 ### Curation Rationale
 
-Pixel Navigator focuses on realism by capturing authentic interactions: real actions undertaken by humans and real actions undertaken by agents.
+Pixel Navigator focuses on realism by capturing authentic interactions: actions taken by humans and agents.
 The records of Pixel Navigator are English-language, desktop-size screenshots of websites. Each record points to an element outlined by a rectangular bounding box and an intent corresponding to it. In particular, the dataset focuses on providing bounding boxes and intents that are not ambiguous, thus increasing the trustworthiness of the evaluation of a VLM on this data.
-This focus on genuine interaction patterns makes our benchmark a superior evaluation tool that will move the needle for agent development. The calendar segment specifically targets known failure points in current systems, demonstrating H Company's commitment to creating targeted benchmarks around challenging areas.
-By identifying and focusing on these difficult cases, H Company aims to unlock new capabilities in VLMs and agents, driving progress in the field through carefully designed evaluation challenges.
+
+
+The calendar segment specifically targets known failure points in current systems, demonstrating H Company's commitment to creating targeted benchmarks around challenging areas.
+
+With this new benchmark, H Company aims to unlock new capabilities in VLMs and stimulate the progress of web agents.
+
 
 ### Annotations
 
@@ -104,3 +108,5 @@ All labels were hand-written or hand-reviewed. Intents were rewritten when neede
 
 
 
+
+
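Since `bbox` stores coordinates relative to the image dimensions, evaluating a model on this card typically means scaling the box to pixels and checking whether a predicted click lands inside it. A minimal sketch, assuming `bbox` is serialized as `(x_min, y_min, x_max, y_max)` with values in `[0, 1]` (the dataset's actual field layout may differ):

```python
def click_hit(bbox_rel, click_xy, image_size):
    """Return True if a predicted click (in pixels) falls inside a
    relative bounding box.

    Assumes bbox_rel = (x_min, y_min, x_max, y_max) in [0, 1];
    this layout is an assumption, not confirmed by the card.
    """
    width, height = image_size
    # Scale the relative box to pixel coordinates.
    x0, y0 = bbox_rel[0] * width, bbox_rel[1] * height
    x1, y1 = bbox_rel[2] * width, bbox_rel[3] * height
    cx, cy = click_xy
    return x0 <= cx <= x1 and y0 <= cy <= y1
```

Accuracy over the benchmark would then be the fraction of samples where the model's click passes this test.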
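The bucket percentages (36% / 31.8% / 32.2%) imply approximate per-group sample counts out of the 1,639 total. A quick sanity check (counts are rounded from the stated percentages; the true per-bucket splits may differ by a row):

```python
TOTAL = 1639  # total samples stated on the card
shares = {"agentbrowse": 0.36, "humanbrowse": 0.318, "calendars": 0.322}

# Approximate per-bucket counts implied by the stated percentages.
counts = {name: round(share * TOTAL) for name, share in shares.items()}
```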