
MotionTrans Dataset

[MotionTrans Main Repository]

motiontrans dataset

Download

Please download this repository and unzip motiontrans_dataset.zip.

Introduction

MotionTrans is the first framework to achieve explicit end-to-end human-to-robot motion transfer, enabling motion-level policy learning directly from human data. This repository contains the MotionTrans Dataset used in our paper, including zero-shot human-robot cotraining data and finetuning data. The dataset provides 15 human tasks and 15 robot tasks (a total of 3,213 demonstrations) for cotraining. For the 13 evaluated human tasks, we also provide 20 finetuning robot demonstrations for each task. To ensure visual background diversity, the dataset was collected across more than 10 different scenes.

Folder Structure

motiontrans_dataset/
├── README.md                 # this file
├── motiontrans_dataset/
│   ├── raw_data_human/       # cotraining human data (human tasks)
│   ├── raw_data_robot/       # cotraining robot data (robot tasks)
│   └── raw_data_finetune/    # finetuning robot data (human tasks)

Each raw_data folder contains multiple subfolders, with each subfolder corresponding to a specific task. The naming format is: {embodiment}_{env_type}_{task_description}.

  • embodiment: human or robot
  • env_type: se, me, or mix, where se means a single scene; me, multiple scenes, each stored in a separate folder; and mix, multiple scenes, all stored in the same folder.
  • task_description: short description of the task, with spaces replaced by underscores.
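As a minimal sketch of the naming scheme above (the example folder names are hypothetical), a subfolder name can be split into its three components; note that the task description itself may contain underscores, so only the first two are separators:

```python
def parse_task_folder(name: str) -> dict:
    """Parse a {embodiment}_{env_type}_{task_description} folder name."""
    embodiment, env_type, task = name.split("_", 2)  # split on first two underscores only
    assert embodiment in ("human", "robot")
    assert env_type in ("se", "me", "mix")
    return {"embodiment": embodiment,
            "env_type": env_type,
            "task": task.replace("_", " ")}

print(parse_task_folder("human_se_pick_up_the_cup"))
# {'embodiment': 'human', 'env_type': 'se', 'task': 'pick up the cup'}
```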

Human Data Format

The human data format is as follows:

├── device_id.txt    # the device ID of the ZED camera
├── episode.pkl      # the human state data
├── recording.svo2   # the visual data
├── rgb.mp4          # a video for data checking
├── frame_grasp.txt  # the frame index (svo2) at which the hand grasp occurs
└── frame_cut.txt    # the frame index (svo2) at which the task finishes

The files frame_grasp.txt and frame_cut.txt are optional and only provided for a few tasks where: (1) the definition of the task's endpoint is ambiguous, or (2) the hand information recorded by VR is too noisy due to occlusion.
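A minimal loading sketch for one human episode folder, handling the optional frame files. The internal structure of episode.pkl is not documented here, and the assumption that each frame file holds a single integer frame index is ours:

```python
import os
import pickle


def load_human_episode(episode_dir: str) -> dict:
    """Load one human demonstration; frame annotations are optional."""
    with open(os.path.join(episode_dir, "episode.pkl"), "rb") as f:
        states = pickle.load(f)  # human state data (structure not documented here)

    def read_optional_frame(fname: str):
        # frame_grasp.txt / frame_cut.txt exist only for a few tasks
        path = os.path.join(episode_dir, fname)
        if not os.path.exists(path):
            return None
        with open(path) as fh:
            return int(fh.read().strip())  # assumed: a single svo2 frame index

    return {
        "states": states,
        "frame_grasp": read_optional_frame("frame_grasp.txt"),
        "frame_cut": read_optional_frame("frame_cut.txt"),
    }
```

The visual data (recording.svo2) needs the ZED SDK to decode and is left out of this sketch.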

Robot Data Format

The robot data format is as follows:

├── videos
│   ├── rgb.mp4             # a video for data checking
│   └── recording.svo2      # the visual data
├── episode_config.yaml     # the configuration of this episode
└── episode.pkl             # the robot state data
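Since the robot layout is fixed, a quick sanity check over downloaded episode folders can catch incomplete extractions. This is our own helper, not part of the release:

```python
import os

# Files every robot episode folder is expected to contain (see layout above)
REQUIRED = [
    "videos/rgb.mp4",
    "videos/recording.svo2",
    "episode_config.yaml",
    "episode.pkl",
]


def check_robot_episode(episode_dir: str) -> list:
    """Return the list of expected files missing from a robot episode folder."""
    return [p for p in REQUIRED
            if not os.path.exists(os.path.join(episode_dir, p))]
```

An empty return value means the episode folder is complete; anything else lists what failed to extract.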

Data Processing

For data processing code, please refer to the MotionTrans Code Repository.

Task Lists

Human Tasks:

motiontrans human dataset

Robot Tasks:

motiontrans robot dataset

Citation

If you find this repository useful, please cite our work:

@article{yuan2025motiontrans,
  title={MotionTrans: Human VR Data Enable Motion-Level Learning for Robotic Manipulation Policies},
  author={Yuan, Chengbo and Zhou, Rui and Liu, Mengzhen and Hu, Yingdong and Wang, Shengjie and Yi, Li and Wen, Chuan and Zhang, Shanghang and Gao, Yang},
  journal={arXiv preprint arXiv:2509.17759},
  year={2025}
}