Modalities: Image
Languages: English
Tags: Visual Navigation, Proxy Map, Waypoint, Reinforcement Learning, Contrastive Learning, Intuitive Robot Motion Intent Visualization
# LAVN Dataset

### Data Organization

After downloading and unzipping the zip files, please reorganize the files into the following structure:

```
LAVN
|--src
   |--makeData_virtual.py
   |--makeData_real.py
   ...
|--Virtual
   |--Gibson
      |--traj_<SCENE_ID>
         |--worker_graph.json
         |--rgb_<FRAME_ID>.jpg
         |--depth_<FRAME_ID>.jpg
      |--traj_Ackermanville
         |--worker_graph.json
         |--rgb_00001.jpg
         |--rgb_00002.jpg
         ...
         |--depth_00001.jpg
         |--depth_00002.jpg
         ...
      ...
   |--Matterport
      |--traj_<SCENE_ID>
         |--worker_graph.json
         |--rgb_<FRAME_ID>.jpg
         |--depth_<FRAME_ID>.jpg
      |--traj_00000-kfPV7w3FaU5
         |--worker_graph.json
         |--rgb_00001.jpg
         |--rgb_00002.jpg
         ...
         |--depth_00001.jpg
         |--depth_00002.jpg
         ...
      ...
|--Real
   |--Campus
      |--worker_graph.json
      |--traj_480p_<SCENE_ID>
         |--rgb_<FRAME_ID>.jpg
      |--traj_480p_scene00
         |--rgb_00001.jpg
```
where the main landmark annotation scripts ```makeData_virtual.py``` and ```makeData_real.py``` are in folder (1) ```src```, while (2) ```Virtual``` and (3) ```Real``` store trajectories collected in simulation and in the real world, respectively. Each trajectory's data is organized in the following format:
```
|--traj_<SCENE_ID>
   |--worker_graph.json
   |--rgb_<FRAME_ID>.jpg
   |--depth_<FRAME_ID>.jpg
```
where ```<SCENE_ID>``` exactly matches the original scene ID in [Gibson](https://github.com/StanfordVL/GibsonEnv/blob/master/gibson/data/README.md) and [Matterport](https://aihabitat.org/datasets/hm3d/), both run with the photo-realistic simulator [Habitat](https://github.com/facebookresearch/habitat-sim). Images are saved in either ```.jpg``` or ```.png``` format. Note that ```rgb``` images are the primary visual representation, while ```depth``` images provide auxiliary visual information captured only in the virtual environments.
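For illustration, here is a minimal sketch of loading one frame's images with Pillow. The helper name ```load_frame``` and the example trajectory path are our own assumptions, not part of the dataset's scripts:

```
import os
from PIL import Image

def load_frame(traj_dir, frame_id):
    """Sketch: return (rgb, depth) for one frame; depth may be absent (Real trajectories)."""
    rgb = Image.open(os.path.join(traj_dir, f"rgb_{frame_id:05d}.jpg"))
    depth = None
    # Depth images may be saved as .jpg or .png, and exist only for virtual scenes.
    for ext in (".jpg", ".png"):
        path = os.path.join(traj_dir, f"depth_{frame_id:05d}{ext}")
        if os.path.exists(path):
            depth = Image.open(path)
            break
    return rgb, depth

# Hypothetical usage, assuming the folder layout shown above:
rgb, depth = load_frame("LAVN/Virtual/Gibson/traj_Ackermanville", 1)
```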
```worker_graph.json``` stores the metadata as a Python dictionary saved to a ```json``` file in the following format:
```
{"node<NODE_ID>":
    {"img_path": "./human_click_dataset/traj_<SCENE_ID>/rgb_<FRAME_ID>.jpg",
     "depth_path": "./human_click_dataset/traj_<SCENE_ID>/depth_<FRAME_ID>.png",
     "location": [<LOC_X>, <LOC_Y>, <LOC_Z>],
     "orientation": <ORIENT>,
     "click_point": [<COOR_X>, <COOR_Y>],
     "reason": ""},
 ...
 "node0":
    {"img_path": "./human_click_dataset/traj_00101-n8AnEznQQpv/rgb_00002.jpg",
     "depth_path": "./human_click_dataset/traj_00101-n8AnEznQQpv/depth_00002.jpg",
     "location": [0.7419548034667969, -2.079209327697754, -0.5635206699371338],
     "orientation": 0.2617993967423121,
     "click_point": [270, 214],
     "reason": ""}
 ...
 "edges": ...,
 "goal_location": null,
 "start_location": [<LOC_X>, <LOC_Y>, <LOC_Z>],
 "landmarks": [[[<COOR_X>, <COOR_Y>], <FRAME_ID>], ...],
 "actions": ["ACTION_NAME", "turn_right", "move_forward", "turn_right", ...],
 "env_name": <SCENE_ID>
}
```
where ```[<LOC_X>, <LOC_Y>, <LOC_Z>]``` is the 3-axis location vector and ```<ORIENT>``` is the orientation, recorded only in simulation. ```[<COOR_X>, <COOR_Y>]``` are the image coordinates of a clicked landmark. ```ACTION_NAME``` stores the action the robot takes to move from the current frame to the next.
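To make the schema concrete, here is a short sketch that reads one trajectory's metadata with the standard library. The key names come from the example above, while the file path is a hypothetical placeholder:

```
import json

# Sketch: parse a trajectory's worker_graph.json (path is a placeholder).
with open("traj_00101-n8AnEznQQpv/worker_graph.json") as f:
    graph = json.load(f)

# Per-node records hold the image paths, 3-axis location, and landmark click.
for key, node in graph.items():
    if key.startswith("node"):
        x, y, z = node["location"]
        u, v = node["click_point"]
        print(key, node["img_path"], (x, y, z), (u, v))

print(graph["start_location"])  # trajectory start location
print(graph["landmarks"])       # [[[<COOR_X>, <COOR_Y>], <FRAME_ID>], ...]
print(graph["actions"])         # e.g. "turn_right", "move_forward"
```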
### Long-Term Maintenance Plan

We will follow a long-term maintenance plan to ensure accessibility and quality for future research:
**Data Standards**: Data formats will be checked regularly with scripts to validate data consistency; a sketch of such a check follows this list.

**Data Cleaning**: Entries with incorrect formats, missing data, or invalid values will be removed.

**Scheduled Updates**: We have set up a monthly schedule for data updates.

**Storage Solutions**: Zenodo, with a DOI, will serve as the public repository for online storage. A second copy will be stored on a private cloud server, and a third copy on a local drive.

**Data Backup**: If any of the aforementioned copies becomes inaccessible, it will be restored immediately from one of the other two copies.

**Documentation**: The documentation will be updated regularly to reflect user feedback.
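As an illustration of the kind of consistency check mentioned under **Data Standards**, here is a minimal sketch (our own, not one of the dataset's official tools) that verifies every image referenced in a trajectory's ```worker_graph.json``` actually exists:

```
import json
import os

def check_trajectory(traj_dir):
    """Sketch: report images referenced in worker_graph.json but missing on disk."""
    with open(os.path.join(traj_dir, "worker_graph.json")) as f:
        graph = json.load(f)
    missing = []
    for key, node in graph.items():
        if not key.startswith("node"):
            continue
        for path_key in ("img_path", "depth_path"):
            rel = node.get(path_key)
            # Recorded paths keep the original "./human_click_dataset/..." prefix,
            # so we match on the file name within the trajectory folder instead.
            if rel and not os.path.exists(os.path.join(traj_dir, os.path.basename(rel))):
                missing.append((key, rel))
    return missing

# Hypothetical usage, assuming the folder layout shown earlier:
print(check_trajectory("LAVN/Virtual/Gibson/traj_Ackermanville"))
```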