---
license: mit
pretty_name: garment-tracking
---
# Dataset Card for VR-Folding Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Example](#dataset-example)
## Dataset Description
- **Homepage:** https://garment-tracking.robotflow.ai
- **Repository:** [GitHub](https://github.com/xiaoxiaoxh/GarmentTracking)
- **Paper:** [GarmentTracking: Category-Level Garment Pose Tracking](https://arxiv.org/pdf/2303.13913.pdf)
- **Point of Contact:**
## Dataset Summary
![VR-Garment](assets/vr_garment.png)
This is the **VR-Folding** dataset created by the CVPR 2023 paper [GarmentTracking: Category-Level Garment Pose Tracking](https://garment-tracking.robotflow.ai).
This dataset is recorded with a system called [VR-Garment](https://github.com/xiaoxiaoxh/VR-Garment), which is a garment-hand interaction environment based on Unity.
To download the dataset, use the following shell snippet:
```
git lfs install
git clone https://huggingface.co/datasets/robotflow/garment-tracking
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var: GIT_LFS_SKIP_SMUDGE=1
# merge multiple .zip files (e.g. folding) into one .zip file
cd data/folding
cat folding_dataset.z* > folding_dataset.zip
# unzip
unzip folding_dataset.zip
```
All the data are stored in [zarr](https://zarr.readthedocs.io/en/stable/) format.
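After unzipping, a quick way to sanity-check the download is to open the store with the `zarr` Python package. The snippet below is a minimal sketch; the store path is an assumption and may differ depending on where the archive was unzipped:
```python
import zarr

# Path is an assumption -- point it at the directory produced by unzipping.
root = zarr.open("data/folding/folding_dataset.zarr", mode="r")

# Each top-level group is one frame, e.g. "00068_Tshirt_000000_000000".
keys = list(root.group_keys())
print(f"{len(keys)} frames, e.g. {keys[:3]}")

# Print the name, shape and dtype of every array in the first frame.
def show(name, obj):
    if hasattr(obj, "shape"):
        print(name, obj.shape, obj.dtype)

root[keys[0]].visititems(show)
```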
## Dataset Structure
Here is the detailed structure of one data example (in [zarr](https://zarr.readthedocs.io/en/stable/) format) for a single frame:
```
00068_Tshirt_000000_000000
β”œβ”€β”€ grip_vertex_id
β”‚ β”œβ”€β”€ left_grip_vertex_id (1,) int32
β”‚ └── right_grip_vertex_id (1,) int32
β”œβ”€β”€ hand_pose
β”‚ β”œβ”€β”€ left_hand_euler (25, 3) float32
β”‚ β”œβ”€β”€ left_hand_pos (25, 3) float32
β”‚ β”œβ”€β”€ right_hand_euler (25, 3) float32
β”‚ └── right_hand_pos (25, 3) float32
β”œβ”€β”€ marching_cube_mesh
β”‚ β”œβ”€β”€ is_vertex_on_surface (6410,) bool
β”‚ β”œβ”€β”€ marching_cube_faces (12816, 3) int32
β”‚ └── marching_cube_verts (6410, 3) float32
β”œβ”€β”€ mesh
β”‚ β”œβ”€β”€ cloth_faces_tri (8312, 3) int32
β”‚ β”œβ”€β”€ cloth_nocs_verts (4434, 3) float32
β”‚ └── cloth_verts (4434, 3) float32
└── point_cloud
β”œβ”€β”€ cls (30000,) uint8
β”œβ”€β”€ nocs (30000, 3) float16
β”œβ”€β”€ point (30000, 3) float16
β”œβ”€β”€ rgb (30000, 3) uint8
└── sizes (4,) int64
```
Specifically, we render 4-view RGB-D images with Unity and fuse them into a concatenated point cloud for each frame. Here `grip_vertex_id` stores the mesh vertex indices of the points currently grasped by the left and right hands.
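The sketch below shows one way to read the arrays of a single frame with the `zarr` Python package, following the layout above. The store path and the grasp-index convention noted in the comments are assumptions; see the example visualization scripts for the authoritative usage.
```python
import zarr
import numpy as np

# Path is an assumption -- adjust to wherever the unzipped zarr store lives.
root = zarr.open("data/folding/folding_dataset.zarr", mode="r")

# Pick one frame group, e.g. "00068_Tshirt_000000_000000".
frame_key = next(iter(root.group_keys()))
frame = root[frame_key]

# Simulated garment mesh: vertices (N, 3), NOCS vertices (N, 3), faces (M, 3).
verts = frame["mesh/cloth_verts"][:]
nocs_verts = frame["mesh/cloth_nocs_verts"][:]
faces = frame["mesh/cloth_faces_tri"][:]

# Fused 4-view point cloud with per-point RGB and NOCS coordinates.
points = frame["point_cloud/point"][:].astype(np.float32)
rgb = frame["point_cloud/rgb"][:]
nocs = frame["point_cloud/nocs"][:]

# Vertex index grasped by each hand (the "no grasp" sentinel value is an
# assumption -- check the visualization scripts for the exact convention).
left_id = int(frame["grip_vertex_id/left_grip_vertex_id"][0])
right_id = int(frame["grip_vertex_id/right_grip_vertex_id"][0])

print(frame_key, verts.shape, faces.shape, points.shape, left_id, right_id)
```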
## Dataset Example
Please see [example](data/data_examples/README.md) for example data and visualization scripts.
Here are two video examples of the flattening and folding tasks.
![flattening](assets/flattening_example.png)
![folding](assets/folding_example.png)