---
license: mit
---

# Visual Dexterity

---

This is the codebase for [Visual Dexterity: In-Hand Reorientation of Novel and Complex Object Shapes](https://arxiv.org/abs/2211.11744), published in Science Robotics. While the code we provide uses the D'Claw robot hand, it can easily be adapted to other robot hands.

### [[Project Page]](https://taochenshh.github.io/projects/visual-dexterity), [[Science Robotics]](https://www.science.org/doi/10.1126/scirobotics.adc9244), [[arXiv]](https://arxiv.org/abs/2211.11744), [[Github]](https://github.com/Improbable-AI/dexenv)

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10039109.svg)](https://doi.org/10.5281/zenodo.10039109)

## :books: Citation

```bibtex
@article{chen2023visual,
  author = {Tao Chen and Megha Tippur and Siyang Wu and Vikash Kumar and Edward Adelson and Pulkit Agrawal},
  title = {Visual dexterity: In-hand reorientation of novel and complex object shapes},
  journal = {Science Robotics},
  volume = {8},
  number = {84},
  pages = {eadc9244},
  year = {2023},
  doi = {10.1126/scirobotics.adc9244},
  URL = {https://www.science.org/doi/abs/10.1126/scirobotics.adc9244},
  eprint = {https://www.science.org/doi/pdf/10.1126/scirobotics.adc9244},
}
```

```bibtex
@article{chen2021system,
  title={A System for General In-Hand Object Re-Orientation},
  author={Chen, Tao and Xu, Jie and Agrawal, Pulkit},
  journal={Conference on Robot Learning},
  year={2021}
}
```

## :gear: Installation

#### Dependencies
* [PyTorch](https://pytorch.org/)
* [PyTorch3D](https://pytorch3d.org/)
* [Isaac Gym](https://developer.nvidia.com/isaac-gym) (the results in the paper were trained with Preview 3)
* [IsaacGymEnvs](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs)
* [Minkowski Engine](https://github.com/NVIDIA/MinkowskiEngine)
* [Wandb](https://wandb.ai/site)

#### Download packages
You can use either a virtual Python environment or a Docker container for training. Below we show how to set up the Docker image; if you prefer a virtual Python environment, simply install the dependencies listed above in that environment (see the sketch below).
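
If you go the virtual-environment route, installing the dependencies might look roughly like the following. This is only a minimal sketch: the environment name, Python version, and the exact PyTorch/PyTorch3D/MinkowskiEngine commands are assumptions that depend on your CUDA setup, so follow each project's official install instructions for your system.

```bash
# minimal sketch, not the project's official setup (names/versions are assumptions)
conda create -n dexenv python=3.8 -y
conda activate dexenv

# install PyTorch first (pick the build matching your CUDA version, see pytorch.org)
pip install torch torchvision

# PyTorch3D, MinkowskiEngine, and wandb (build steps may differ on your machine)
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install MinkowskiEngine
pip install wandb
```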

Here is what the directory structure should look like:
```
-- Root
---- dexenv
---- IsaacGymEnvs
---- isaacgym
```

```bash
# download packages
git clone git@github.com:Improbable-AI/dexenv.git
git clone https://github.com/NVIDIA-Omniverse/IsaacGymEnvs.git

# download Isaac Gym from:
# (https://developer.nvidia.com/isaac-gym)
# and unzip it in the current directory

# remove the package dependencies in the setup.py in isaacgym/python and IsaacGymEnvs/
```
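
After stripping the conflicting dependencies from those two `setup.py` files, both packages still need to be installed into whichever Python environment you train from. A minimal sketch (editable installs are an assumption; use whatever install mode you prefer):

```bash
# minimal sketch: install Isaac Gym's Python bindings and IsaacGymEnvs in editable mode
pip install -e isaacgym/python
pip install -e IsaacGymEnvs
```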

#### Download the assets

Download the robot and object assets from [here](https://huggingface.co/datasets/taochenshh/dexenv/blob/main/assets.zip), and unzip the file into `dexenv/dexenv/`.

#### Download the pretrained models

Download the pretrained checkpoints from [here](https://huggingface.co/datasets/taochenshh/dexenv/blob/main/pretrained.zip), and unzip the file into `dexenv/dexenv/`.
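
For example, assuming both zip files were saved in the repository root (the download locations here are assumptions; adjust the paths to wherever you put them):

```bash
# minimal sketch; adjust paths to wherever you saved the downloads
unzip assets.zip -d dexenv/dexenv/
unzip pretrained.zip -d dexenv/dexenv/
```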

#### Prepare the Docker image
1. You can download a pre-built Docker image:
```bash
docker pull improbableailab/dexenv:latest
```
2. Or you can build the Docker image locally:
```bash
cd dexenv/docker
python docker_build.py -f Dockerfile
```

#### Launch the Docker image

To run the Docker image, you need to have nvidia-docker (the NVIDIA Container Toolkit) installed. Follow the instructions [here](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
```bash
# launch docker
./run_image.sh # requires wandb to be installed in the Python environment
```

In another terminal:
```bash
./visualize_access.sh
# after this finishes, you can close it; it only needs to be run once after each machine reboot
```
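
Since the training scripts log to [Wandb](https://wandb.ai/site), make sure `wandb` is installed and authenticated in the Python environment that launches training. A minimal sketch:

```bash
# install and authenticate Weights & Biases (one-time setup)
pip install wandb
wandb login
```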

## :scroll: Usage

#### :bulb: Training Teacher

```bash
# if you are running inside the Docker container, you might need to run the following line first
git config --global --add safe.directory /workspace/dexenv

# debug teacher (run the debug config first to make sure everything runs)
cd /workspace/dexenv/dexenv/train/teacher
python mlp.py -cn=debug_dclaw # show the GUI
python mlp.py task.headless=True -cn=debug_dclaw # in headless mode

# if you just want to train the hand to reorient a cube, add `task.env.name=DClawBase`
python mlp.py task.env.name=DClawBase -cn=debug_dclaw

# training teacher
cd /workspace/dexenv/dexenv/train/teacher
python mlp.py -cn=dclaw
python mlp.py task.task.randomize=False -cn=dclaw # turn off domain randomization
python mlp.py task.env.name=DClawBase task.task.randomize=False -cn=dclaw # reorient a cube without domain randomization

# if you want to change the number of objects or the number of environments
python mlp.py alg.num_envs=4000 task.obj.num_objs=10 -cn=dclaw

# testing teacher
cd /workspace/dexenv/dexenv/train/teacher
python mlp.py alg.num_envs=20 resume_id=<wandb exp ID> -cn=test_dclaw
# e.g. python mlp.py alg.num_envs=20 resume_id=dexenv/1d1tvd0b -cn=test_dclaw
```
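
The overrides above can be combined in a single command (the scripts appear to use Hydra-style `key=value` overrides, with `-cn` selecting the config). For example, a hypothetical run that trains headless on 4000 environments without domain randomization:

```bash
# hypothetical combination of the overrides shown above
python mlp.py task.headless=True alg.num_envs=4000 task.task.randomize=False -cn=dclaw
```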

#### :high_brightness: Training Student with Synthetic Point Cloud (student stage 1)

```bash
# debug student
cd /workspace/dexenv/dexenv/train/student
python rnn.py -cn=debug_dclaw_fptd
# by default, the command above uses the pretrained teacher model you downloaded above;
# if you want to use another teacher model, add `alg.expert_path=<path>`
python rnn.py alg.expert_path=<path to teacher model> -cn=debug_dclaw_fptd

# training student
cd /workspace/dexenv/dexenv/train/student
python rnn.py -cn=dclaw_fptd

# testing student
cd /workspace/dexenv/dexenv/train/student
python rnn.py resume_id=<wandb exp ID> -cn=test_dclaw_fptd
```

#### :tada: Training Student with Rendered Point Cloud (student stage 2)

```bash
# debug student
cd /workspace/dexenv/dexenv/train/student
python rnn.py -cn=debug_dclaw_rptd

# training student
cd /workspace/dexenv/dexenv/train/student
python rnn.py -cn=dclaw_rptd

# testing student
cd /workspace/dexenv/dexenv/train/student
python rnn.py resume_id=<wandb exp ID> -cn=test_dclaw_rptd
```

## :rocket: Pre-trained models

We provide the pre-trained models for both the teacher and the student (stage 2) in `dexenv/expert/artifacts`. The models were trained using Isaac Gym Preview 3.

```bash
# to see the teacher pretrained model
cd /workspace/dexenv/dexenv/train/teacher
python demo.py

# to see the student pretrained model
cd /workspace/dexenv/dexenv/train/student
python rnn.py alg.num_envs=20 task.obj.num_objs=10 alg.pretrain_model=/workspace/dexenv/dexenv/pretrained/artifacts/student/train-model.pt test_pretrain=True test_num=3 -cn=debug_dclaw_rptd
```