Modalities:
Image
Languages:
English
Tags:
Visual Navigation
Proxy Map
Waypoint
Reinforcement Learning
Contrastive Learning
Intuitive Robot Motion Intent Visualization
Add Dataset Usage
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
---

# LAVN Dataset
-###
+### Dataset Organization

After downloading and unzipping the zip files, please reorganize the files in the following structure:
```
@@ -96,6 +96,28 @@ where ```<SCENE_ID>``` matches exactly the original one in [Gibson](https://gith
```
where ```[<LOC_X>, <LOC_Y>, <LOC_Z>]``` is the 3-axis location vector and ```<ORIENT>``` is the orientation (available only in simulation). ```[<COOR_X>, <COOR_Y>]``` are the image coordinates of landmarks. ```ACTION_NAME``` stores the action the robot takes from the current frame to the next frame.
+
+### Dataset Usage
+The visual navigation task can be formulated as various types of problems, including but not limited to:
+
+**1. Supervised Learning** by mapping visual observations (```RGBD```) to waypoints (image coordinates). A developer can
+design a vision network whose input (```X```) is ```RGBD``` and output (```Y```) is an image coordinate, specified by ```img_path```, ```depth_path```,
+and the click point ```[<COOR_X>, <COOR_Y>]``` in the worker ```graph.json``` file in the dataset. The loss function can
+be designed to minimize the discrepancy between the predicted image coordinate (```Y_pred```) and the ground truth (```Y```), e.g.
+```loss = ||Y_pred − Y||```. ```Y_pred``` can then be translated directly into a robot's moving action: ```Y_pred``` in the center or
+top region of the image means moving forward, while the ```left/right``` regions mean turning left or right.
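
As an illustration of this supervised formulation, here is a minimal PyTorch sketch (not part of the dataset's tooling): a small network regresses a normalized waypoint coordinate from an ```RGBD``` frame, is trained with ```loss = ||Y_pred − Y||```, and the prediction is mapped to a discrete action by image region. The architecture, the coordinate normalization, and the region thresholds are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' reference code):
# regress a waypoint image coordinate from an RGB-D observation,
# then translate the prediction into a discrete robot action.
import torch
import torch.nn as nn

class WaypointNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # 4-channel input: RGB + depth
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2)              # predicts (coor_x, coor_y), normalized to [0, 1]

    def forward(self, rgbd):
        return torch.sigmoid(self.head(self.backbone(rgbd)))

def coord_to_action(xy_norm):
    """Map a normalized waypoint to a discrete action (thresholds are assumptions)."""
    x, _y = xy_norm.tolist()
    if x < 1 / 3:
        return "turn_left"
    if x > 2 / 3:
        return "turn_right"
    return "move_forward"                         # waypoint in the center/top band -> go straight

model = WaypointNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for RGBD frames (X) and clicked waypoints (Y).
X = torch.rand(8, 4, 128, 128)                    # RGB and depth stacked along channels
Y = torch.rand(8, 2)                              # ground-truth [<COOR_X>, <COOR_Y>], normalized

Y_pred = model(X)
loss = torch.norm(Y_pred - Y, dim=1).mean()       # loss = ||Y_pred - Y||
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss={loss.item():.3f}, action={coord_to_action(Y_pred[0].detach())}")
```

The same loop applies when ```X``` and ```Y``` are loaded from the worker ```graph.json``` files (via ```img_path```, ```depth_path```, and the click points) instead of the dummy tensors used here.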
+
+
+**2. Map Representation Learning** in the latent space of a neural network. One can train this latent space to represent two
+observations' proximity via contrastive learning. The objective is to learn a function ```h()``` that predicts the distance between two
+observations (```X1```) and (```X2```): ```dist = h(X1, X2)```. Note that ```h()``` can be a cosine- or distance-based function, depending on
+the design choice. Nearby nodes (a node includes the information at a timestep, such as the ```RGBD``` data and image
+coordinates) can serve as positive samples, while farther nodes can be treated as negative samples. A landmark is a sparse and distinct object or scene
+in the dataset that provides a more structured, global connection between nodes, which further assists navigation in
+more complex or longer trajectories.
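
Below is a minimal sketch of this contrastive formulation, assuming an InfoNCE-style objective with cosine similarity as ```h()```; nearby nodes serve as positives and distant nodes as negatives. The encoder, the temperature, and the positive/negative sampling shown here are placeholders, not the dataset's reference implementation.

```python
# Minimal sketch (illustrative only): contrastive learning of a latent proxy map
# in which temporally nearby nodes are positives and distant nodes are negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(                 # RGB-D node observation -> embedding
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, rgbd):
        return F.normalize(self.net(rgbd), dim=-1)  # unit-norm embedding

def h(z1, z2):
    """Proximity score between two encoded observations (cosine similarity here)."""
    return (z1 * z2).sum(-1)

encoder = NodeEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Dummy nodes standing in for anchor frames, a nearby frame (positive), and a far frame (negative).
anchor   = torch.rand(8, 4, 128, 128)
positive = torch.rand(8, 4, 128, 128)
negative = torch.rand(8, 4, 128, 128)

z_a, z_p, z_n = encoder(anchor), encoder(positive), encoder(negative)
tau = 0.1                                          # temperature (assumed value)
logits = torch.stack([h(z_a, z_p), h(z_a, z_n)], dim=1) / tau
labels = torch.zeros(8, dtype=torch.long)          # index 0 = the positive pair
loss = F.cross_entropy(logits, labels)             # InfoNCE-style objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"contrastive loss = {loss.item():.3f}")
```

A distance-based ```h()``` (e.g. negative L2 distance between embeddings) can be swapped in without changing the rest of the training loop.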
+
+
+
### Long-Term Maintenance Plan
We will conduct a long-term maintenance plan to ensure the accessibility and quality for future research: