# FindingDory: A Benchmark to Evaluate Memory in Embodied Agents

Karmesh Yadav*, Yusuf Ali*, Gunshi Gupta, Yarin Gal, Zsolt Kira

Current vision-language models (VLMs) struggle with long-term memory in embodied tasks. To address this, we introduce FindingDory, a benchmark in Habitat that evaluates memory-based reasoning across 60 long-horizon tasks.
In this repo, we release the FindingDory Video Dataset. Each video contains images collected from a robot's egocentric view as it navigates realistic indoor environments and interacts with objects. This dataset was used to train and evaluate the high-level SFT agent in the FindingDory benchmark.
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory")
```
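Once loaded, individual episodes can be inspected by split and index. A minimal sketch (field names follow the table below; the train/validation split names are assumed from the Notes section):

```python
# Inspect one validation episode; field names match the Dataset Structure table.
episode = dataset["validation"][0]
print(episode["question"])  # question posed to the agent
print(episode["answer"])    # ground-truth answer: a list of image indices
print(episode["video"])     # relative path of the video clip
```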
## Dataset Structure

| Field name | Description |
|---|---|
| ep_id | Episode ID. |
| video | Relative path of the video clip. |
| question | Question posed to the agent based on the episode. |
| answer | Ground-truth answer, stored as a list of image indices. |
| task_id | Identifier indicating which task template the episode belongs to (string). |
| high_level_category | High-level task category label (options: Single-Goal Spatial Tasks, Single-Goal Temporal Tasks, Multi-Goal Tasks). |
| low_level_category | Fine-grained task category label (example categories: Interaction-Order, Room Visitation, etc.). |
| num_interactions | Number of objects the robot interacts with during experience collection. |
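The category labels above can be used to slice the dataset with the standard `datasets` filtering API. A minimal sketch, assuming the label strings appear verbatim in the data:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("yali30/findingdory", split="validation")

# Keep only episodes from one high-level category (label string assumed verbatim).
spatial = dataset.filter(
    lambda ep: ep["high_level_category"] == "Single-Goal Spatial Tasks"
)
print(f"{len(spatial)} spatial episodes")

# Count episodes per fine-grained category.
print(Counter(dataset["low_level_category"]).most_common())
```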
## Notes

- The validation split contains 60 tasks. The training split contains only 55 tasks because the 5 "Object Attributes" tasks are withheld from the training set (the sketch below shows a quick check).
- A subsampled version of the dataset (96 frames per episode) is available here.
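The split composition can be sanity-checked by counting distinct task templates per split. A minimal sketch, assuming `task_id` distinguishes the task templates as described in the table above:

```python
from datasets import load_dataset

# Expect 55 unique tasks in train and 60 in validation.
for split in ("train", "validation"):
    ds = load_dataset("yali30/findingdory", split=split)
    print(split, "->", len(set(ds["task_id"])), "unique tasks")
```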
## 📄 Citation

```bibtex
@article{yadav2025findingdory,
  title   = {FindingDory: A Benchmark to Evaluate Memory in Embodied Agents},
  author  = {Yadav, Karmesh and Ali, Yusuf and Gupta, Gunshi and Gal, Yarin and Kira, Zsolt},
  journal = {arXiv preprint arXiv:2506.15635},
  year    = {2025}
}
```