kiwi-sherbet committed
Commit 9d64283 · verified · 1 Parent(s): 25024da

updated readme

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -6,7 +6,7 @@ language:
 # Deep Imitation Learning for Humanoid Loco-manipulation through Human Teleoperation
 [Mingyo Seo](https://mingyoseo.com), [Steve Han](https://www.linkedin.com/in/stevehan2001), [Kyutae Sim](https://www.linkedin.com/in/kyutae-sim-888593166), [Seung Hyeon Bang](https://sites.utexas.edu/hcrl/people/), [Carlos Gonzalez](https://sites.utexas.edu/hcrl/people/), [Luis Sentis](https://sites.google.com/view/lsentis), [Yuke Zhu](https://www.cs.utexas.edu/~yukez)
 
-[Project](https://ut-austin-rpl.github.io/TRILL) | [arXiv](https://arxiv.org/abs/2309.01952)
+[Project](https://ut-austin-rpl.github.io/TRILL) | [arXiv](https://arxiv.org/abs/2309.01952) | [code](https://github.com/UT-Austin-RPL/TRILL)
 
 ## Abstract
 We tackle the problem of developing humanoid loco-manipulation skills with deep imitation learning. The challenge of collecting human demonstrations for humanoids, in conjunction with the difficulty of policy training under a high degree of freedom, presents substantial challenges. We introduce TRILL, a data-efficient framework for learning humanoid loco-manipulation policies from human demonstrations. In this framework, we collect human demonstration data through an intuitive Virtual Reality (VR) interface. We employ the whole-body control formulation to transform task-space commands from human operators into the robot's joint-torque actuation while stabilizing its dynamics. By employing high-level action abstractions tailored for humanoid robots, our method can efficiently learn complex loco-manipulation skills. We demonstrate the effectiveness of TRILL in simulation and on a real-world robot for performing various types of tasks.