harpreetsahota committed (verified) · Commit 9ed0f1f · Parent: a99f269

Update README.md

Files changed (1)
  1. README.md +30 -5
README.md CHANGED
@@ -63,6 +63,8 @@ dataset_summary: >
 
 This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 846 samples.
 
+**Note:** The images here are from the test set of the [original dataset](http://domedb.perception.cs.cmu.edu/panopticDB/hands/hand_labels.zip) and parsed into FiftyOne format.
+
 ## Installation
 
 If you haven't already, install FiftyOne:
@@ -88,12 +90,19 @@ session = fo.launch_app(dataset)
 
 ## Dataset Details
 
-As part of their research, the authors created a dataset by manually annotating two publicly available image sets: the MPII Human Pose dataset and images from the New Zealand Sign Language (NZSL) Exercises. To date, they collected annotations for 1300 hands on the MPII set and 1500 on NZSL. This combined dataset was split into a training set (2000 hands) and a testing set (800 hands)
+As part of their research, the authors created a dataset by manually annotating two publicly available image sets: the MPII Human Pose dataset and images from the New Zealand Sign Language (NZSL) Exercises.
+
+To date, they collected annotations for 1300 hands on the MPII set and 1500 on NZSL.
+
+This combined dataset was split into a training set (2000 hands) and a testing set (800 hands).
 
 ### Dataset Description
 
-The dataset created in this research is a collection of manually annotated RGB images of hands sourced from the MPII Human Pose dataset and the New Zealand Sign Language (NZSL) Exercises. It contains 2D locations for 21 keypoints on 2800 hands, split into a training set of 2000 hands and a testing set of 800 hands. This dataset was used to train and evaluate their hand keypoint detection methods for single images.
+The dataset created in this research is a collection of manually annotated RGB images of hands sourced from the MPII Human Pose dataset and the New Zealand Sign Language (NZSL) Exercises.
+
+It contains 2D locations for 21 keypoints on 2800 hands, split into a training set of 2000 hands and a testing set of 800 hands.
+
+This dataset was used to train and evaluate the authors' hand keypoint detection methods for single images.
 
 - **Paper:** https://arxiv.org/abs/1704.07809
 - **Demo:** http://domedb.perception.cs.cmu.edu/handdb.html
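
To make the quickstart above concrete, here is a minimal sketch of loading this dataset and opening it in the App (the hunk context above shows the README already uses `fo.launch_app(dataset)`). It assumes FiftyOne's Hugging Face integration, `fiftyone.utils.huggingface`; the repo id is a placeholder, not this dataset's actual id.

```python
# A minimal sketch, assuming FiftyOne's Hugging Face Hub integration;
# the repo id below is a placeholder, not this dataset's actual id
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("<user>/<dataset>")  # hypothetical repo id

print(dataset)  # should report 846 samples, per the summary above

# Browse the images and their hand keypoint annotations interactively
session = fo.launch_app(dataset)
```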
@@ -108,7 +117,15 @@ The dataset created in this research is a collection of manually annotated RGB i
 
 ### Direct Use
 
-This manually annotated dataset was directly used to train and evaluate their hand keypoint detection methods. The dataset serves as a benchmark to assess the accuracy of their single image 2D hand keypoint detector. It enabled them to train an initial detector and evaluate the improvements gained through their proposed multiview bootstrapping technique. The dataset contains images extracted from YouTube videos depicting everyday human activities (MPII) and images showing a variety of hand poses from people using New Zealand Sign Language (NZSL). These diverse sets of images allowed the researchers to evaluate the generalization capabilities of their detector.
+This manually annotated dataset was directly used to train and evaluate the authors' hand keypoint detection methods.
+
+The dataset serves as a benchmark to assess the accuracy of their single-image 2D hand keypoint detector.
+
+It enabled them to train an initial detector and evaluate the improvements gained through their proposed multiview bootstrapping technique.
+
+The dataset contains images extracted from YouTube videos depicting everyday human activities (MPII) and images showing a variety of hand poses from people using New Zealand Sign Language (NZSL).
+
+These diverse sets of images allowed the researchers to evaluate the generalization capabilities of their detector.
 
 ## Dataset Structure
 
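Since the Dataset Description above specifies 2D locations for 21 keypoints per hand, here is a sketch of how one hand's annotation could be represented with FiftyOne's `Keypoint` label type. The field name `hand_keypoints` and the coordinates are illustrative assumptions, not the dataset's actual schema.

```python
# Sketch: one hand's 21 annotated 2D keypoints as a FiftyOne Keypoint.
# The field name "hand_keypoints" and the coordinates are illustrative
# assumptions, not the dataset's actual schema.
import fiftyone as fo

NUM_KEYPOINTS = 21  # 2D locations per hand, per the Dataset Description

# FiftyOne stores keypoint coordinates as (x, y) pairs normalized to
# [0, 1] relative to the image width and height
points = [(0.5, 0.5)] * NUM_KEYPOINTS  # dummy values

sample = fo.Sample(filepath="/path/to/image.jpg")
sample["hand_keypoints"] = fo.Keypoints(
    keypoints=[fo.Keypoint(label="hand", points=points)]
)
print(sample)
```

Per-sample labels of this form are what the App renders when the dataset is launched as shown earlier.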
@@ -149,9 +166,17 @@ The dataset is composed of manually annotated RGB images of hands sourced from t
 
 #### Annotation process
 
-This manually annotated dataset was directly used to train and evaluate their hand keypoint detection methods. The dataset serves as a benchmark to assess the accuracy of their single image 2D hand keypoint detector. It enabled them to train an initial detector and evaluate the improvements gained through their proposed multiview bootstrapping technique. The dataset contains images extracted from YouTube videos depicting everyday human activities (MPII) and images showing a variety of hand poses from people using New Zealand Sign Language (NZSL). These diverse sets of images allowed the researchers to evaluate the generalization capabilities of their detector
-The process of manually annotating hand keypoints in single images was challenging due to frequent occlusions caused by hand articulation, viewpoint, and grasped objects (as illustrated in Fig. 2). In many cases, annotators had to estimate the locations of occluded keypoints, potentially reducing the accuracy of these annotations
+The process of manually annotating hand keypoints in single images was challenging due to frequent occlusions caused by hand articulation, viewpoint, and grasped objects.
+
+In many cases, annotators had to estimate the locations of occluded keypoints, potentially reducing the accuracy of these annotations.
 
 ## Citation
 
 ```bibtex