---
license: cc-by-nc-4.0
---
Directional Guidance
This dataset provides a benchmark for evaluating how well Vision-Language Models (VLMs) can guide users to adjust an image so that a relevant question can be better answered.
Dataset Description
The Directional Guidance dataset focuses on Visual Question Answering (VQA) tasks in which a model must judge whether the visual information is sufficient and, if the image lacks necessary details, guide the user on where to reposition the camera. It addresses a distinctive challenge for VLMs by benchmarking their ability to detect information gaps and suggest directional guidance for reframing, which is particularly useful for visually impaired individuals who may struggle to capture well-framed images.
- Curated by: Researchers at the University of California, Santa Cruz
- Language(s) (NLP): Primarily English
- License: CC BY-NC 4.0
Dataset Sources
- Repository: Directional Guidance on GitHub
- Paper: Right This Way: Can VLMs Guide Us to See More to Answer Questions?
- Developed based on: the VizWiz VQA dataset
Dataset Structure
The dataset includes images paired with questions and directional guidance labels. Each sample is annotated to indicate whether a directional adjustment (left, right, up, or down) is needed, whether the image can be left unchanged, or whether the question cannot be answered by any reframing action. The dataset comprises manually labeled real-world samples, reflecting authentic VQA scenarios encountered by visually impaired users.
The dataset consists of two main components:
- Images: Located in the `/images/` folder, which contains the images used for the Directional Guidance task.
- Annotations: Stored in JSON files, the annotations record each sample's `image_id`, original question, and Directional Guidance label. The JSON structure is a list of dictionaries, with each dictionary containing information about an image-question pair.

Each dictionary entry in the JSON file includes:
- `image`: the filename of the image (e.g., `VizWiz_test_00000969.jpg`)
- `question`: the question associated with the image (e.g., `Question: Is this table lamp on or off?`)
- `answer`: the directional guidance label for adjusting the camera, or an indication that no adjustment is needed (e.g., `leave it unchanged`).
Example JSON entry:
{ "image": "VizWiz_test_00000969.jpg", "question": "Question: Is this table lamp on or off?", "answer": "leave it unchanged" }
There are six classes of Directional Guidance labels: `left`, `right`, `up`, `down`, `leave it unchanged`, and `none of the other options`.
We provide two versions of the annotations:
- `Directional_Guidance_annotation_v1.json`: the original annotation file used in our paper for the experimental results.
- `Directional_Guidance_annotation_v2(recommended).json` (newest version): this version has undergone additional validation with more detailed checks, resulting in minor revisions that improve annotation accuracy.
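As a quick-start reference, the sketch below shows one way to load the recommended annotation file and pair each entry with its image. The folder layout follows the description above; the use of Pillow and the `DATA_ROOT` path are illustrative assumptions, not requirements of the dataset.

```python
import json
from pathlib import Path

from PIL import Image  # assumption: any image library can be used here

DATA_ROOT = Path(".")  # adjust to wherever the dataset is stored
ANNOTATION_FILE = DATA_ROOT / "Directional_Guidance_annotation_v2(recommended).json"

# The annotation file is a list of dicts with "image", "question", and "answer" keys.
with open(ANNOTATION_FILE, encoding="utf-8") as f:
    samples = json.load(f)

for sample in samples[:3]:
    image = Image.open(DATA_ROOT / "images" / sample["image"])
    # The question field carries a literal "Question: " prefix, as in the example above.
    question = sample["question"].removeprefix("Question: ")
    print(sample["image"], question, "->", sample["answer"], image.size)
```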
Uses
Direct Use
The Directional Guidance dataset is intended for evaluating VLMs in tasks requiring directional guidance. It is especially suited for applications that assist visually impaired individuals with interactive VQA, helping them capture better-framed images and obtain accurate answers.
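To make this concrete, here is a minimal evaluation sketch that scores a model's free-form replies against the six guidance labels using a naive substring match. The `ask_vlm` function is a hypothetical placeholder for whichever VLM is being benchmarked, and the matching heuristic is an illustrative assumption, not the evaluation protocol used in the paper.

```python
# Minimal evaluation sketch; `ask_vlm(image_filename, question)` is a hypothetical
# stand-in for the model under test and should return a free-form text reply.
LABELS = [
    "leave it unchanged",
    "none of the other options",
    "left",
    "right",
    "up",
    "down",
]

def normalize(reply: str) -> str:
    """Map a free-form reply to one of the six guidance labels (naive heuristic)."""
    reply = reply.lower()
    for label in LABELS:  # longer phrases are checked first to limit spurious matches
        if label in reply:
            return label
    return "none of the other options"

def guidance_accuracy(samples, ask_vlm) -> float:
    """Exact-match accuracy of predicted guidance labels against the annotations."""
    correct = sum(
        normalize(ask_vlm(s["image"], s["question"])) == s["answer"] for s in samples
    )
    return correct / len(samples)
```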
Out-of-Scope Use
The dataset is not recommended for general object detection or use cases outside directional guidance, as it focuses specifically on framing adjustments for question-answering.
Dataset Creation
Curation Rationale
The dataset was created to fill a gap in VLM capabilities, focusing on instances where the model needs to indicate reframing actions. By guiding users to improve image framing, the dataset contributes to more accessible VQA systems.
Source Data
The data is derived from the VizWiz dataset, which contains visual questions from visually impaired individuals. In this dataset, ill-framed images were manually annotated to provide directional guidance, simulating realistic VQA scenarios where camera adjustments are necessary.
Data Collection and Processing
A team of human annotators reviewed the VizWiz dataset to identify images that could benefit from reframing. The annotations specify the most promising directional guidance. Manual validation and quality checks were conducted to ensure consistency. More details can be found in the paper.
Who are the source data producers?
The images and questions originate from visually impaired users participating in the VizWiz project. Annotation was carried out by a team of trained annotators with oversight for quality assurance.
Annotations
Annotation Process
Annotations specify the directional guidance required (left, right, up, down, leave it unchanged, or none of the other options).
Who are the annotators?
Annotations were provided by trained human annotators, who validated and reviewed each sample to ensure consistency and accuracy.
Bias, Risks, and Limitations
This dataset may contain biases related to visual accessibility and image quality, as it primarily focuses on challenges faced by visually impaired individuals. Limitations include potential misclassifications due to the ambiguity of some questions or subjective judgments about framing needs.
Recommendations
Users are advised to use this dataset with awareness of its scope and limitations.
Citation
Right This Way: Can VLMs Guide Us to See More to Answer Questions? (NeurIPS 2024)
arXiv: https://arxiv.org/abs/2411.00394
BibTeX:
```bibtex
@article{liu2024right,
  title={Right this way: Can VLMs Guide Us to See More to Answer Questions?},
  author={Liu, Li and Yang, Diji and Zhong, Sijia and Tholeti, Kalyana Suma Sree and Ding, Lei and Zhang, Yi and Gilpin, Leilani H},
  journal={arXiv preprint arXiv:2411.00394},
  year={2024}
}
```
Dataset Card Contact
For further information and dataset-related queries, please contact the corresponding author, Prof. Leilani H. Gilpin at [email protected], or Li Liu at [email protected].