---
annotations_creators: []
language: en
size_categories:
- n<1K
task_ids: []
pretty_name: hand_keypoints
tags:
- fiftyone
- image
- keypoints
- pose-estimation
dataset_summary: >




  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 846
  samples.


  ## Installation


  If you haven't already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  from fiftyone.utils.huggingface import load_from_hub


  # Load the dataset

  # Note: other available arguments include 'max_samples', etc

  dataset = load_from_hub("voxel51/hand-keypoints")


  # Launch the App

  session = fo.launch_app(dataset)

  ```
---

# Dataset Card for Image Hand Keypoint Detection

![hand keypoints dataset](hands-dataset.gif)


This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 846 samples.

**Note:** The images here are from the test set of the [original dataset](http://domedb.perception.cs.cmu.edu/panopticDB/hands/hand_labels.zip) and parsed into FiftyOne format.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("voxel51/hand-keypoints")

# Launch the App
session = fo.launch_app(dataset)
```


## Dataset Details

As part of their research, the authors created a dataset by manually annotating two publicly available image sets: the MPII Human Pose dataset and images from the New Zealand Sign Language (NZSL) Exercises.

In total, they collected annotations for 1,300 hands from the MPII set and 1,500 from NZSL.

This combined dataset was split into a training set (2,000 hands) and a testing set (800 hands).

### Dataset Description

The dataset created in this research is a collection of manually annotated RGB images of hands sourced from the MPII Human Pose dataset and the New Zealand Sign Language (NZSL) Exercises. 

It contains 2D locations for 21 keypoints on 2800 hands, split into a training set of 2000 hands and a testing set of 800 hands. 

This dataset was used to train and evaluate their hand keypoint detection methods for single images.

- **Paper:** https://arxiv.org/abs/1704.07809
- **Demo:** http://domedb.perception.cs.cmu.edu/handdb.html
- **Curated by:** Tomas Simon, Hanbyul Joo, Iain Matthews, Yaser Sheikh
- **Funded by:** Carnegie Mellon University
- **Shared by:** [Harpreet Sahota](https://huggingface.co/harpreetsahota), Hacker-in-Residence at Voxel51
- **License:** This dataset contains images from:
    1. MPII Human Pose dataset, which is under the [BSD License](https://github.com/YuliangXiu/MobilePose/blob/master/pose_dataset/mpii/mpii_human_pose_v1_u12_2/bsd.txt)
    2. New Zealand Sign Language Dictionary dataset, which is under the [CC BY-NC-SA 3.0 License](https://creativecommons.org/licenses/by-nc-sa/3.0/)

## Uses

### Direct Use

This manually annotated dataset was directly used to train and evaluate their hand keypoint detection methods. 

The dataset serves as a benchmark to assess the accuracy of their single image 2D hand keypoint detector. 

It enabled them to train an initial detector and evaluate the improvements gained through their proposed multiview bootstrapping technique. 

The dataset contains images extracted from YouTube videos depicting everyday human activities (MPII) and images showing a variety of hand poses from people using New Zealand Sign Language (NZSL). 

These diverse sets of images allowed the researchers to evaluate the generalization capabilities of their detector.

## Dataset Structure

```text
Name:        hand_keypoints
Media type:  image
Num samples: 846
Persistent:  True
Tags:        []
Sample fields:
    id:               fiftyone.core.fields.ObjectIdField
    filepath:         fiftyone.core.fields.StringField
    tags:             fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)
    metadata:         fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)
    created_at:       fiftyone.core.fields.DateTimeField
    last_modified_at: fiftyone.core.fields.DateTimeField
    right_hand:       fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Keypoints)
    body:             fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Keypoints)
    left_hand:        fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Keypoints)
```
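FiftyOne stores `Keypoint` coordinates as relative `(x, y)` values in `[0, 1]`, so mapping a labeled point onto the image means scaling by the image dimensions. A minimal sketch (the coordinates and image size below are illustrative, not taken from the dataset):

```python
def keypoints_to_pixels(points, width, height):
    """Convert FiftyOne-style normalized (x, y) keypoints to pixel coordinates."""
    return [(x * width, y * height) for (x, y) in points]


# Hypothetical normalized keypoints for one hand on a 640x480 image
normalized = [(0.5, 0.5), (0.25, 0.75)]
pixels = keypoints_to_pixels(normalized, 640, 480)
# → [(320.0, 240.0), (160.0, 360.0)]
```

In practice you would read the points from a sample, e.g. `sample["right_hand"].keypoints[0].points`, and take `width`/`height` from `sample.metadata`.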
### Dataset Sources

#### Source Data

- [**MPII Human Pose dataset:**](https://github.com/YuliangXiu/MobilePose/blob/master/pose_dataset/mpii/mpii_human_pose_v1_u12_2/README.md) Contains images from YouTube videos depicting a wide range of everyday human activities. These images vary in quality, resolution, and hand appearance, and include various types of occlusions and hand-object/hand-hand interactions.

- [**New Zealand Sign Language (NZSL) Exercises:**](https://github.com/Bluebie/NZSL-Dictionary) Features images of people making visible hand gestures for communication. This subset provides a variety of hand poses commonly found in conversational contexts.

#### Data Collection and Processing

The dataset is composed of manually annotated RGB images of hands sourced from two existing datasets: MPII and NZSL.

- **Annotations:** Each annotated image includes 2D locations for 21 keypoints on the hand (see Fig. 4a of the paper for an example). These keypoints represent landmarks on the hand, such as fingertips and joints.

- **Splits:** The combined dataset of 2,800 annotated hands was divided into a training set of 2,000 hands and a testing set of 800 hands. The criteria for this split are not explicitly detailed in the paper.
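The 21 keypoints follow the common hand-model layout: the wrist plus four joints per finger, ordered base to tip. A sketch of that ordering (the specific names here are illustrative, not the dataset's label schema):

```python
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

# Wrist first, then four joints per finger, ordered base to tip
HAND_KEYPOINTS = ["wrist"] + [
    f"{finger}_{joint}" for finger in FINGERS for joint in range(1, 5)
]

assert len(HAND_KEYPOINTS) == 21
```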

### Annotations

#### Annotation process

The process of manually annotating hand keypoints in single images was challenging due to frequent occlusions caused by hand articulation, viewpoint, and grasped objects.

In many cases, annotators had to estimate the locations of occluded keypoints, potentially reducing the accuracy of these annotations.

## Citation 

```bibtex
@inproceedings{simon2017hand,
  author = {Tomas Simon and Hanbyul Joo and Iain Matthews and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Hand Keypoint Detection in Single Images using Multiview Bootstrapping},
  year = {2017}
}
```