---
license: mit
language:
- en
---

# Deep Imitation Learning for Humanoid Loco-manipulation through Human Teleoperation

[Mingyo Seo](https://mingyoseo.com), [Steve Han](https://www.linkedin.com/in/stevehan2001), [Kyutae Sim](https://www.linkedin.com/in/kyutae-sim-888593166), [Seung Hyeon Bang](https://sites.utexas.edu/hcrl/people/), [Carlos Gonzalez](https://sites.utexas.edu/hcrl/people/), [Luis Sentis](https://sites.google.com/view/lsentis), [Yuke Zhu](https://www.cs.utexas.edu/~yukez)

[Project](https://ut-austin-rpl.github.io/TRILL) | [arXiv](https://arxiv.org/abs/2309.01952) | [code](https://github.com/UT-Austin-RPL/TRILL)

## Abstract

We tackle the problem of developing humanoid loco-manipulation skills with deep imitation learning. Collecting human demonstrations on humanoid robots is difficult, and training policies over their many degrees of freedom compounds the challenge. We introduce TRILL, a data-efficient framework for learning humanoid loco-manipulation policies from human demonstrations. In this framework, we collect human demonstration data through an intuitive Virtual Reality (VR) interface. We employ a whole-body control formulation to transform task-space commands from the human operator into the robot's joint-torque actuation while stabilizing its dynamics. By employing high-level action abstractions tailored to humanoid robots, our method can efficiently learn complex loco-manipulation skills. We demonstrate the effectiveness of TRILL in simulation and on a real-world robot across various types of tasks.
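
To give a rough sense of how a task-space command can be turned into joint torques, below is a minimal, hypothetical sketch of a Jacobian-transpose PD controller in Python. This is not the whole-body controller used in TRILL, which additionally handles balancing, contact constraints, and task priorities; all function names, gains, and shapes here are illustrative assumptions.

```
import numpy as np

def task_space_to_torques(jacobian, pose_err, vel_err,
                          kp=100.0, kd=10.0, gravity_torques=None):
    """Hypothetical sketch: map a task-space error to joint torques.

    Implements a textbook Jacobian-transpose PD law, illustrating the
    general idea of converting an operator's task-space command into
    joint-torque actuation. TRILL's actual whole-body controller is
    considerably more involved.
    """
    # PD law in task space: desired 6D wrench at the end-effector.
    wrench = kp * pose_err + kd * vel_err
    # Project the wrench into joint space via the Jacobian transpose.
    tau = jacobian.T @ wrench
    # Add gravity compensation from the robot model, if available.
    if gravity_torques is not None:
        tau = tau + gravity_torques
    return tau

# Toy usage with random placeholders for a 6-DoF chain.
J = np.random.randn(6, 6)       # end-effector Jacobian
pose_err = np.random.randn(6)   # position/orientation error
vel_err = -0.1 * np.random.randn(6)
print(task_space_to_torques(J, pose_err, vel_err).shape)  # (6,)
```
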
## Citing

```
@inproceedings{seo2023trill,
  title={Deep Imitation Learning for Humanoid Loco-manipulation through Human Teleoperation},
  author={Seo, Mingyo and Han, Steve and Sim, Kyutae and
          Bang, Seung Hyeon and Gonzalez, Carlos and
          Sentis, Luis and Zhu, Yuke},
  booktitle={IEEE-RAS International Conference on Humanoid Robots (Humanoids)},
  year={2023}
}
```