M2T2

M2T2 (Multi-Task Masked Transformer) is a unified transformer model for learning multiple primitive actions, such as grasping and placing.

Primary Use Cases

Given a raw point cloud observation of the scene, M2T2 reasons about contact points and predicts collision-free gripper poses for 6-DoF object-centric grasping and orientation-aware placement.
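
The snippet below is a minimal sketch of how such a checkpoint might be used for inference. The import path `m2t2`, the `M2T2.from_checkpoint` constructor, and the `predict` method are illustrative assumptions rather than the official API; refer to the accompanying code release for the actual interface.

# Minimal inference sketch (hypothetical API; names below are assumptions).
import numpy as np
import torch
from m2t2 import M2T2  # assumed import path

# Load the pretrained multi-task checkpoint onto the GPU.
model = M2T2.from_checkpoint("m2t2.pth").eval().cuda()

# Scene observation: N points with XYZ coordinates from a depth camera.
scene_xyz = np.load("scene_point_cloud.npy")            # shape (N, 3)
points = torch.from_numpy(scene_xyz).float().cuda()

with torch.no_grad():
    outputs = model.predict(points)

# Hypothetical outputs: 6-DoF grasp poses per object and
# orientation-aware placement poses as 4x4 homogeneous transforms.
grasp_poses = outputs["grasps"]        # list of (M_i, 4, 4) tensors
place_poses = outputs["placements"]    # (K, 4, 4) tensor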

Date

This model was trained in June 2023.

Resources for More Information

Citation

If you find our work helpful, please consider citing our paper.

@inproceedings{yuan2023m2t2,
  title     = {M2T2: Multi-Task Masked Transformer for Object-centric Pick and Place},
  author    = {Yuan, Wentao and Murali, Adithyavairavan and Mousavian, Arsalan and Fox, Dieter},
  booktitle = {7th Annual Conference on Robot Learning},
  year      = {2023}
}