---
license: mit
language:
- en
tags:
- Visual Navigation
- Proxy Map
- Waypoint
- Reinforcement Learning
- Contrastive Learning
- Intuitive Robot Motion Intent Visualization
---
# LAVN Dataset
Accepted to [HRI2025 Short Contributions](https://humanrobotinteraction.org/2025/short-contributions/)

Preprint: [arxiv.org/pdf/2308.16682](https://arxiv.org/pdf/2308.16682)
### Dataset Organization
After downloading and unzipping the zip files, please reorganize them into the following structure:
```
LAVN
|--src
   |--makeData_virtual.py
   |--makeData_real.py
   ...
|--Virtual
   |--Gibson
      |--traj_<SCENE_ID>
         |--worker_graph.json
         |--rgb_<FRAME_ID>.jpg
         |--depth_<FRAME_ID>.jpg
      |--traj_Ackermanville
         |--worker_graph.json
         |--rgb_00001.jpg
         |--rgb_00002.jpg
         ...
         |--depth_00001.jpg
         |--depth_00002.jpg
         ...
      ...
   |--Matterport
      |--traj_<SCENE_ID>
         |--worker_graph.json
         |--rgb_<FRAME_ID>.jpg
         |--depth_<FRAME_ID>.jpg
      |--traj_00000-kfPV7w3FaU5
         |--worker_graph.json
         |--rgb_00001.jpg
         |--rgb_00002.jpg
         ...
         |--depth_00001.jpg
         |--depth_00002.jpg
         ...
      ...
|--Real
   |--Campus
      |--worker_graph.json
      |--traj_480p_<SCENE_ID>
         |--rgb_<FRAME_ID>.jpg
      |--traj_480p_scene00
         |--rgb_00001.jpg
```
where the main landmark annotation scripts ```makeData_virtual.py``` and ```makeData_real.py``` are in the (1) ```src``` folder, while the (2) ```Virtual``` and (3) ```Real``` folders store trajectories collected in simulation and in the real world, respectively. Each trajectory's data is organized in the following format:
```
|--traj_<SCENE_ID>
   |--worker_graph.json
   |--rgb_<FRAME_ID>.jpg
   |--depth_<FRAME_ID>.jpg
```
where ```<SCENE_ID>``` exactly matches the original scene ID in [Gibson](https://github.com/StanfordVL/GibsonEnv/blob/master/gibson/data/README.md) and [Matterport](https://aihabitat.org/datasets/hm3d/) as run by the photo-realistic simulator [Habitat](https://github.com/facebookresearch/habitat-sim). Images are saved in either ```.jpg``` or ```.png``` format. Note that ```rgb``` images are the main visual representation, while ```depth``` is auxiliary visual information captured only in the virtual environments. Real-world RGB images are downsampled to ```640 × 480``` resolution, denoted by ```480p``` in the trajectory folder name.
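For example, the following is a minimal sketch (using only the Python standard library; the helper name ```load_frame_pairs``` and the example path are illustrative) of how the naming convention above can be used to pair the RGB and depth frames of one trajectory:
```python
from pathlib import Path

def load_frame_pairs(traj_dir):
    """Pair rgb_<FRAME_ID> and depth_<FRAME_ID> images in one trajectory folder."""
    traj_dir = Path(traj_dir)
    pairs = []
    # Depth may be saved as .jpg or .png, so match on the frame id only.
    for rgb_path in sorted(traj_dir.glob("rgb_*")):
        frame_id = rgb_path.stem.split("_", 1)[1]  # e.g. "00001"
        depth_candidates = list(traj_dir.glob(f"depth_{frame_id}.*"))
        # Real-world trajectories have no depth, so the depth entry may be None.
        depth_path = depth_candidates[0] if depth_candidates else None
        pairs.append((rgb_path, depth_path))
    return pairs

# Example usage with the virtual Gibson trajectory shown in the tree above.
pairs = load_frame_pairs("LAVN/Virtual/Gibson/traj_Ackermanville")
print(f"{len(pairs)} frames, first pair: {pairs[0] if pairs else None}")
```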
```worker_graph.json``` stores the metadata as a Python dictionary saved in ```json``` format with the following structure:
```
{"node<NODE_ID>":
    {"img_path": "./human_click_dataset/traj_<SCENE_ID>/rgb_<FRAME_ID>.jpg",
     "depth_path": "./human_click_dataset/traj_<SCENE_ID>/depth_<FRAME_ID>.png",
     "location": [<LOC_X>, <LOC_Y>, <LOC_Z>],
     "orientation": <ORIENT>,
     "click_point": [<COOR_X>, <COOR_Y>],
     "reason": ""},
 ...
 "node0":
    {"img_path": "./human_click_dataset/traj_00101-n8AnEznQQpv/rgb_00002.jpg",
     "depth_path": "./human_click_dataset/traj_00101-n8AnEznQQpv/depth_00002.jpg",
     "location": [0.7419548034667969, -2.079209327697754, -0.5635206699371338],
     "orientation": 0.2617993967423121,
     "click_point": [270, 214],
     "reason": ""},
 ...
 "edges": ...,
 "goal_location": null,
 "start_location": [<LOC_X>, <LOC_Y>, <LOC_Z>],
 "landmarks": [[[<COOR_X>, <COOR_Y>], <FRAME_ID>], ...],
 "actions": ["ACTION_NAME", "turn_right", "move_forward", "turn_right", ...],
 "env_name": <SCENE_ID>
}
```
where ```[<LOC_X>, <LOC_Y>, <LOC_Z>]``` is the 3-axis location vector, ```<ORIENT>``` is the orientation (recorded only in simulation), and ```[<COOR_X>, <COOR_Y>]``` are the image coordinates of a landmark. ```ACTION_NAME``` stores the action the robot takes from the current frame to the next frame.
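As a quick illustration, here is a minimal sketch (standard-library Python only; the function name ```load_worker_graph``` and the example trajectory path are illustrative) of reading this metadata:
```python
import json
from pathlib import Path

def load_worker_graph(traj_dir):
    """Read a trajectory's worker_graph.json and split per-frame nodes from trajectory-level fields."""
    with open(Path(traj_dir) / "worker_graph.json") as f:
        graph = json.load(f)
    # Per-frame entries are keyed "node<NODE_ID>"; the remaining keys are trajectory-level fields.
    nodes = {k: v for k, v in graph.items() if k.startswith("node")}
    landmarks = graph.get("landmarks", [])  # [[[<COOR_X>, <COOR_Y>], <FRAME_ID>], ...]
    actions = graph.get("actions", [])      # e.g. "move_forward", "turn_right"
    return nodes, landmarks, actions

nodes, landmarks, actions = load_worker_graph("LAVN/Virtual/Matterport/traj_00000-kfPV7w3FaU5")
print(len(nodes), "nodes,", len(landmarks), "landmarks,", len(actions), "actions")
print(nodes["node0"]["click_point"], nodes["node0"]["location"])
```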
### Dataset Usage
The visual navigation task can be formulated as various types of problems, including but not limited to:
**1. Supervised Learning** by mapping visual observations (```RGBD```) to waypoints (image coordinates). A developer can design a vision network whose input (```X```) is ```RGBD``` and whose output (```Y```) is an image coordinate, specified by ```img_path```, ```depth_path```, and the click point ```[<COOR_X>, <COOR_Y>]``` in the ```worker_graph.json``` file of the dataset. The loss function can be designed to minimize the discrepancy between the predicted image coordinate (```Y_pred```) and the ground truth (```Y```), e.g. ```loss = ||Y_pred − Y||```. ```Y_pred``` can then be translated directly into a robot's moving action, e.g. ```Y_pred``` in the center or top region of the image means moving forward, while the ```left/right``` regions represent turning left or right.
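Below is a minimal sketch of this formulation; PyTorch, the 4-channel RGBD input, the tiny network, and the L2 loss are illustrative assumptions rather than a prescribed baseline:
```python
import torch
import torch.nn as nn

# Tiny CNN mapping a 4-channel RGBD image to a 2-D click point (waypoint) in pixels.
class WaypointNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # predicts [<COOR_X>, <COOR_Y>]

    def forward(self, x):
        return self.head(self.backbone(x))

model = WaypointNet()
X = torch.randn(8, 4, 480, 640)             # batch of RGBD frames (depth stacked as the 4th channel)
Y = torch.randint(0, 480, (8, 2)).float()   # ground-truth click points from worker_graph.json
Y_pred = model(X)
loss = torch.norm(Y_pred - Y, dim=1).mean() # loss = ||Y_pred - Y||
loss.backward()
```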
**2. Map Representation Learning** in the latent space of a neural network. One can train this latent space to represent the proximity of two observations via contrastive learning. The objective is to learn a function ```h()``` that predicts the distance between two observations ```X1``` and ```X2```: ```dist = h(X1, X2)```. Note that ```h()``` can be a cosine- or distance-based function, depending on the design choice. Nearby nodes (a node includes the information at one timestep, such as the ```RGBD``` data and image coordinates) can serve as positive samples, while nodes farther apart can be treated as negative samples. A landmark is a sparse and distinct object or scene in the dataset that facilitates a more structured and global connection between nodes, which further assists navigation in more complex or longer trajectories.
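A minimal sketch of this contrastive setup is shown below; PyTorch, the encoder architecture, the cosine-based ```h()```, and the InfoNCE-style loss are illustrative assumptions:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared encoder embedding an RGBD observation into a latent "map" space.
encoder = nn.Sequential(
    nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)

def h(x1, x2):
    """Proximity of two observations as cosine similarity of their embeddings."""
    z1, z2 = F.normalize(encoder(x1), dim=1), F.normalize(encoder(x2), dim=1)
    return (z1 * z2).sum(dim=1)

anchor   = torch.randn(8, 4, 480, 640)  # node at time t
positive = torch.randn(8, 4, 480, 640)  # nearby node (e.g. t+1)
negative = torch.randn(8, 4, 480, 640)  # node far away in the trajectory
temperature = 0.1
logits = torch.stack([h(anchor, positive), h(anchor, negative)], dim=1) / temperature
loss = F.cross_entropy(logits, torch.zeros(8, dtype=torch.long))  # the positive pair is class 0
loss.backward()
```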
### Long-Term Maintenance Plan
We will follow a long-term maintenance plan to ensure accessibility and quality for future research:
**Data Standards**: Data formats will be checked regularly with scripts to validate data consistency (see the sketch after this list).
**Data Cleaning**: Data that is in an incorrect format, is missing, or contains invalid values will be removed.
**Scheduled Updates**: We set up a monthly schedule for data updates.
**Storage Solutions**: HuggingFace, with DOI (doi:10.57967/hf/2386), is provided as a public repository for online storage. A second copy will be stored on a private cloud server, while a third copy will be stored on a local drive.
**Data Backup**: Once one of the copies in the aforementioned storage locations is detected to be inaccessible, it will be restored from one of the other two copies immediately.
**Documentation**: Our documentation will be updated regularly to reflect feedback from users.
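As an illustration of the kind of consistency check mentioned under **Data Standards**, here is a minimal sketch (the exact checks and required keys are assumptions, not our actual maintenance scripts):
```python
import json
from pathlib import Path

# Trajectory-level keys we expect in worker_graph.json (assumed from the format above).
REQUIRED_KEYS = {"start_location", "landmarks", "actions", "env_name"}

def validate_trajectory(traj_dir):
    """Return a list of problems found in one trajectory folder (empty list means it looks consistent)."""
    traj_dir = Path(traj_dir)
    graph_path = traj_dir / "worker_graph.json"
    if not graph_path.exists():
        return [f"{traj_dir}: missing worker_graph.json"]
    graph = json.loads(graph_path.read_text())
    problems = [f"{traj_dir}: missing key '{key}'" for key in REQUIRED_KEYS if key not in graph]
    if not any(traj_dir.glob("rgb_*")):
        problems.append(f"{traj_dir}: no rgb frames found")
    return problems

# Example: validate every virtual trajectory folder under the reorganized LAVN root.
for traj in Path("LAVN/Virtual").glob("*/traj_*"):
    for problem in validate_trajectory(traj):
        print(problem)
```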
### Citation
```
@article{johnson2024landmark,
  title={A Landmark-Aware Visual Navigation Dataset},
  author={Johnson, Faith and Cao, Bryan Bo and Dana, Kristin and Jain, Shubham and Ashok, Ashwin},
  journal={arXiv preprint arXiv:2402.14281},
  year={2024}
}
```
```
@misc{visnavdataset_lavn,
  author = {visnavdataset},
  title = {LAVN Dataset},
  year = 2025,
  doi = {10.57967/hf/2386},
  url = {https://huggingface.co/datasets/visnavdataset/lavn},
  note = {Accessed: 2025-02-07}
}
```
Note: please update the accessed date accordingly.