---
license: mit
pretty_name: garment-tracking
---
# Dataset Card for VR-Folding Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Example](#dataset-example)
## Dataset Description
- **Homepage:** https://garment-tracking.robotflow.ai
- **Repository:** [GitHub](https://github.com/xiaoxiaoxh/GarmentTracking)
- **Paper:** [GarmentTracking: Category-Level Garment Pose Tracking](https://arxiv.org/pdf/2303.13913.pdf)
- **Point of Contact:**
## Dataset Summary

This is the **VR-Folding** dataset created by the CVPR 2023 paper [GarmentTracking: Category-Level Garment Pose Tracking](https://garment-tracking.robotflow.ai).
This dataset is recorded with a system called [VR-Garment](https://github.com/xiaoxiaoxh/VR-Garment), which is a garment-hand interaction environment based on Unity.
To download the dataset, use the following shell snippet:
```
git lfs install
git clone https://huggingface.co/datasets/robotflow/garment-tracking
# if you want to clone without large files - just their pointers -
# prepend your git clone with the following env var: GIT_LFS_SKIP_SMUDGE=1
# merge multiple .zip files (e.g. folding) into one .zip file
cd data/folding
cat folding_dataset.z* > folding_dataset.zip
# unzip
unzip folding_dataset.zip
```
All the data are stored in [zarr](https://zarr.readthedocs.io/en/stable/) format.
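As a quick sanity check after extraction, you can open the store with the `zarr` Python package and print its hierarchy. This is a minimal sketch; the store path below is a hypothetical placeholder for wherever the unzipped data lives:
```
import zarr

# Hypothetical path: point this at the extracted zarr store.
root = zarr.open("data/folding/folding_dataset.zarr", mode="r")

# Print the group/array hierarchy, which should resemble the tree shown below.
print(root.tree())
```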
## Dataset Structure
Here is the detailed structure of a data example (in [zarr](https://zarr.readthedocs.io/en/stable/) format) for one frame:
```
00068_Tshirt_000000_000000
├── grip_vertex_id
│   ├── left_grip_vertex_id (1,) int32
│   └── right_grip_vertex_id (1,) int32
├── hand_pose
│   ├── left_hand_euler (25, 3) float32
│   ├── left_hand_pos (25, 3) float32
│   ├── right_hand_euler (25, 3) float32
│   └── right_hand_pos (25, 3) float32
├── marching_cube_mesh
│   ├── is_vertex_on_surface (6410,) bool
│   ├── marching_cube_faces (12816, 3) int32
│   └── marching_cube_verts (6410, 3) float32
├── mesh
│   ├── cloth_faces_tri (8312, 3) int32
│   ├── cloth_nocs_verts (4434, 3) float32
│   └── cloth_verts (4434, 3) float32
└── point_cloud
    ├── cls (30000,) uint8
    ├── nocs (30000, 3) float16
    ├── point (30000, 3) float16
    ├── rgb (30000, 3) uint8
    └── sizes (4,) int64
```
Specifically, we render 4-view RGB-D images with Unity and generate a concatenated point cloud for each frame. Here `grip_vertex_id` stores the indices of the mesh vertices currently grasped by each hand.
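For illustration, here is a minimal sketch of reading one frame with `zarr` and NumPy. The store path and frame key are hypothetical (the key is taken from the example above), and the interpretation of `sizes` as the per-view point counts of the 4 concatenated views is an assumption:
```
import numpy as np
import zarr

root = zarr.open("data/folding/folding_dataset.zarr", mode="r")
frame = root["00068_Tshirt_000000_000000"]  # hypothetical frame key from above

# Concatenated 4-view point cloud: xyz, colors, NOCS coordinates, class labels.
points = np.asarray(frame["point_cloud/point"], dtype=np.float32)  # (30000, 3)
rgb    = np.asarray(frame["point_cloud/rgb"])                      # (30000, 3) uint8
nocs   = np.asarray(frame["point_cloud/nocs"], dtype=np.float32)   # (30000, 3)

# Assumption: `sizes` holds the per-view point counts, so splitting on its
# cumulative sum recovers the 4 single-view clouds.
sizes = np.asarray(frame["point_cloud/sizes"])                     # (4,)
views = np.split(points, np.cumsum(sizes)[:-1])

# Garment mesh in world space plus its canonical (NOCS) coordinates.
verts = np.asarray(frame["mesh/cloth_verts"])                      # (4434, 3)
faces = np.asarray(frame["mesh/cloth_faces_tri"])                  # (8312, 3)

# Look up the 3D position of the vertex grasped by the left hand
# (assumption: a negative index may mean the hand is not grasping).
left_id = int(frame["grip_vertex_id/left_grip_vertex_id"][0])
left_grip_pos = verts[left_id] if left_id >= 0 else None
```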
## Dataset Example
Please see [example](data/data_examples/README.md) for example data and visualization scripts.
Here are two video examples for the flattening and folding tasks.

