---
dataset_info:
  features:
    - name: ep_id
      dtype: string
    - name: video
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: task_id
      dtype: string
    - name: high_level_category
      dtype: string
    - name: low_level_category
      dtype: string
    - name: num_interactions
      dtype: int64
  splits:
    - name: train
      num_bytes: 107506980
      num_examples: 79213
    - name: validation
      num_bytes: 9653447
      num_examples: 5870
  download_size: 14758637
  dataset_size: 117160427
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
license: apache-2.0
task_categories:
  - question-answering
language:
  - en
tags:
  - robotics
  - embodied-ai
pretty_name: findingdory
size_categories:
  - 10K<n<100K
---

# FindingDory: A Benchmark to Evaluate Memory in Embodied Agents

Karmesh Yadav*, Yusuf Ali*, Gunshi Gupta, Yarin Gal, Zsolt Kira

Current vision-language models (VLMs) struggle with long-term memory in embodied tasks. To address this, we introduce FindingDory, a benchmark in Habitat that evaluates memory-based reasoning across 60 long-horizon tasks.

In this repo, we release the FindingDory Video Dataset. Each video consists of egocentric frames collected as a robot navigates realistic indoor environments and interacts with objects. This dataset was used to train and evaluate the high-level SFT agent in the FindingDory benchmark.

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory")
```
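Each example exposes the fields described in the next section. A minimal sketch for inspecting one row (printed values depend on the episode):

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory")

# Look at the first validation example; keys match the schema below.
sample = dataset["validation"][0]
print(sample["question"])  # question posed to the agent
print(sample["answer"])    # ground-truth answer (indices of video frames)
print(sample["video"])     # relative path of the video clip
```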

## Dataset Structure

| Field name | Description |
|---|---|
| `ep_id` | Episode ID. |
| `video` | Relative path of the video clip. |
| `question` | Question posed to the agent based on the episode. |
| `answer` | Ground-truth answer, stored as a list of image indices. |
| `task_id` | Identifier indicating which task template the episode belongs to (string). |
| `high_level_category` | High-level task category label (options: Single-Goal Spatial Tasks, Single-Goal Temporal Tasks, Multi-Goal Tasks). |
| `low_level_category` | Fine-grained task category label (e.g., Interaction-Order, Room Visitation). |
| `num_interactions` | Number of objects the robot interacts with during experience collection. |
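Note that `answer` has dtype `string` in the schema while semantically being a list of image indices, so it likely needs decoding before use. A minimal sketch, assuming the list is serialized as a Python/JSON-style literal such as `"[3, 17, 42]"` (the exact serialization is an assumption; adjust if it differs):

```python
import ast

def parse_answer(answer_str: str) -> list[int]:
    """Decode an `answer` string like "[3, 17, 42]" into a list of ints.

    Assumes a list-literal serialization; this is an assumption, not a
    documented format -- adapt if the field uses another encoding.
    """
    return [int(i) for i in ast.literal_eval(answer_str)]

print(parse_answer("[3, 17, 42]"))  # -> [3, 17, 42]
```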

Notes:

- The validation split contains all 60 tasks. The training split contains only 55 tasks, because the 5 “Object Attributes” tasks are withheld from training. To evaluate on a subset of tasks, the category fields can be filtered, as shown in the sketch after this list.
- A subsampled version of the dataset (96 frames per episode) is available here.
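The standard `datasets` filtering API works on the category fields; for instance (category string taken from the table above):

```python
from datasets import load_dataset

dataset = load_dataset("yali30/findingdory")

# Keep only multi-goal episodes from the validation split.
multi_goal = dataset["validation"].filter(
    lambda ex: ex["high_level_category"] == "Multi-Goal Tasks"
)
print(f"{len(multi_goal)} multi-goal validation examples")
```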

## 📄 Citation

```bibtex
@article{yadav2025findingdory,
  title     = {FindingDory: A Benchmark to Evaluate Memory in Embodied Agents},
  author    = {Yadav, Karmesh and Ali, Yusuf and Gupta, Gunshi and Gal, Yarin and Kira, Zsolt},
  journal   = {arXiv preprint arXiv:2506.15635},
  year      = {2025}
}
```