---
language:
  - en
license: apache-2.0
task_categories:
  - text-generation
  - image-to-text
dataset_info:
  features:
    - name: image
      dtype: image
    - name: instruction
      dtype: string
    - name: bbox
      sequence: float64
    - name: bucket
      dtype: string
  splits:
    - name: test
      num_bytes: 334903619
      num_examples: 1639
  download_size: 334903619
  dataset_size: 334903619
configs:
  - config_name: default
    data_files:
      - split: test
        path: test*
---

# Pixel-Navigator: A Multimodal Localization Benchmark for Web-Navigation Models

We introduce Pixel-Navigator, a high-quality benchmark dataset for evaluating the navigation and localization capabilities of multimodal models and agents in web environments. Pixel-Navigator features 1,639 English-language web screenshots paired with precisely annotated natural-language instructions and pixel-level click targets, in the same format as the widely used ScreenSpot benchmark.
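For orientation, here is a minimal sketch of loading the benchmark with the Hugging Face `datasets` library and inspecting one record. The repository id is a placeholder, not the dataset's confirmed path, and the `bbox` ordering shown in the comments is an assumption:

```python
from datasets import load_dataset

# "h-company/pixel-navigator" is a placeholder repo id; substitute the
# dataset's actual Hugging Face path.
ds = load_dataset("h-company/pixel-navigator", split="test")

sample = ds[0]
print(sample["instruction"])  # natural-language instruction describing the action
print(sample["bbox"])         # relative coordinates, assumed [x_min, y_min, x_max, y_max]
print(sample["bucket"])       # agentbrowse / humanbrowse / calendars
sample["image"].save("example.png")  # the image feature decodes to a PIL.Image
```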

## Design Goals and Use Case

Pixel-Navigator is designed to measure and advance the ability of AI systems to understand web interfaces, interpret user instructions, and take accurate actions within digital environments. The dataset contains three distinct groups of web screenshots that capture a range of real-world navigation scenarios, from agent-based web retrieval to human tasks like online shopping and calendar management.

On a more technical level, this benchmark is intended for assessing multimodal models on their ability to navigate web interfaces, evaluating AI agents' understanding of UI elements and their functions, and testing models' ability to ground natural-language instructions to specific interactive elements.

## Technical Details: High-Quality Annotations and Natural-Language Instructions

A key strength of this benchmark is its meticulous annotation: all bounding boxes correspond precisely to HTML element boundaries, ensuring rigorous evaluation of model performance. Each screenshot is paired with natural-language instructions that simulate realistic navigation requests, requiring models not only to understand UI elements but also to interpret contextual relationships between visual elements.

## Dataset Structure

The dataset contains 1,639 samples divided into three key groups:

1. `agentbrowse` (36%): Pages encountered by the Runner H agent while solving web-retrieval tasks from WebVoyager
2. `humanbrowse` (31.8%): Pages and elements interacted with by humans performing everyday tasks (e-shopping, trip planning, personal organization)
3. `calendars` (32.2%): A specialized subset focusing on calendar interfaces, a known challenge for UI-understanding models

Each sample consists of:

- `image`: A screenshot of a web page
- `instruction`: A natural-language instruction describing the desired action
- `bbox`: Coordinates of the bounding box (relative to the image dimensions) that identify the correct click target, such as an input field or a button
- `bucket`: The group this sample belongs to, one of `agentbrowse`, `humanbrowse`, or `calendars`
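
Because `bbox` values are stored relative to the image dimensions, evaluation code typically rescales them to pixels first. A minimal sketch, assuming the sequence is ordered `[x_min, y_min, x_max, y_max]` (an assumption to verify against actual records):

```python
def bbox_to_pixels(bbox, width, height):
    """Convert a relative bounding box to pixel coordinates.

    Assumes the float sequence is ordered [x_min, y_min, x_max, y_max];
    verify this ordering against the dataset before relying on it.
    """
    x_min, y_min, x_max, y_max = bbox
    return (x_min * width, y_min * height, x_max * width, y_max * height)

# Usage with a record loaded via the sketch above:
# w, h = sample["image"].size  # PIL.Image size is (width, height)
# print(bbox_to_pixels(sample["bbox"], w, h))
```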

The dataset includes several challenging scenarios:

- Disambiguation between similar elements (e.g., "the login button in the middle" vs. "the login button in the top-right")
- Cases where OCR is insufficient because the visible text isn't the interactive element
- Navigation requiring understanding of relative spatial relationships between information and interaction points
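
ScreenSpot-style benchmarks are typically scored by checking whether a model's predicted click point falls inside the ground-truth bounding box. Below is a sketch of that per-bucket scoring under the same coordinate assumptions as above; `predict_click` is a hypothetical stand-in for the model being evaluated:

```python
from collections import defaultdict

def point_in_bbox(x, y, bbox):
    """True if the click point (x, y) lies inside the box; both are assumed
    to use the same relative [x_min, y_min, x_max, y_max] convention."""
    x_min, y_min, x_max, y_max = bbox
    return x_min <= x <= x_max and y_min <= y <= y_max

def per_bucket_accuracy(dataset, predict_click):
    """Score a model per bucket. `predict_click(image, instruction) -> (x, y)`
    is a hypothetical interface returning a click point relative to image size."""
    hits, totals = defaultdict(int), defaultdict(int)
    for sample in dataset:
        x, y = predict_click(sample["image"], sample["instruction"])
        totals[sample["bucket"]] += 1
        hits[sample["bucket"]] += point_in_bbox(x, y, sample["bbox"])
    return {bucket: hits[bucket] / totals[bucket] for bucket in totals}
```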

## Dataset Creation

### Curation Rationale

Pixel-Navigator focuses on realism by capturing authentic interactions: actions actually taken by humans and agents. Its records are English-language, desktop-sized screenshots of 100+ websites. Each record pairs an element, outlined by a rectangular bounding box, with a corresponding intent. In particular, the dataset focuses on providing unambiguous bounding boxes and intents, which makes evaluating a VLM on this data more trustworthy.

The calendar segment specifically targets known failure points in current systems, demonstrating H Company's commitment to creating targeted benchmarks around challenging areas.

With this new benchmark, H Company aims to unlock new capabilities in VLMs and stimulate the progress of web agents.

## Examples

### UI Understanding

## Results of Popular Models

*INSERT TABLE HERE

## Annotations

Annotations were created by UI experts with specialized knowledge of web interfaces. Each screenshot was paired with a natural-language instruction describing an intended action and a bounding box precisely matching HTML element boundaries. All labels were hand-written or hand-reviewed. Instructions were rewritten when needed so that they contain only unambiguous intents rather than visual descriptions. Screenshots were manually reviewed to avoid any personal information, with any identifiable data removed or anonymized.

## Dataset Details

- Curated by: H Company
- Language: English
- License: Apache 2.0

## Dataset Sources

- Repository: [Hugging Face Repository URL]
- Paper: [Coming soon]

## Citation

BibTeX:

```bibtex
@dataset{hcompany2025uinavigate,
  author    = {H Company Research Team},
  title     = {Pixel-Navigator: A Benchmark Dataset for Web Navigation and Localization},
  year      = {2025},
  publisher = {H Company},
}
```

## Dataset Card Contact

[email protected]