harpreetsahota committed on
Commit 0b4633e · verified · 1 Parent(s): 827baef

Update README.md

Files changed (1)
  1. README.md +15 -69
README.md CHANGED
@@ -56,12 +56,9 @@ dataset_summary: '
  '
  ---

- # Dataset Card for hand_keypoints
-
- <!-- Provide a quick summary of the dataset. -->
-
-

  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 846 samples.
@@ -91,40 +88,26 @@ session = fo.launch_app(dataset)

  ## Dataset Details

- ### Dataset Description
-
-
- ![image/png](hands-dataset.gif)


  - **Curated by:** Tomas Simon, Hanbyul Joo, Iain Matthews, Yaser Sheikh
  - **Funded by:** Carnegie Mellon University
  - **Shared by:** [Harpreet Sahota](https://huggingface.co/harpreetsahota), Hacker-in-Residence at Voxel51
  - **License:** [More Information Needed]

  ### Dataset Sources
-
- As part of their research, the authors created a dataset by manually annotating two publicly available image sets: the MPII Human Pose dataset and images from the New Zealand Sign Language (NZSL) Exercises. To date, they collected annotations for 1300 hands on the MPII set and 1500 on NZSL. This combined dataset was split into a training set (2000 hands) and a testing set (800 hands)
-
  - **Paper:** https://arxiv.org/abs/1704.07809
  - **Demo:** http://domedb.perception.cs.cmu.edu/handdb.html

  ## Uses

- <!-- Address questions around how the dataset is intended to be used. -->
-
  ### Direct Use

- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]

  ## Dataset Structure
130
 
@@ -146,64 +129,27 @@ Sample fields:
146
  left_hand: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Keypoints)
147
  ```
148
 
149
-
150
- ## Dataset Creation
151
-
152
- ### Curation Rationale
153
-
154
- <!-- Motivation for the creation of this dataset. -->
155
-
156
- [More Information Needed]
157
-
158
  ### Source Data
159
 
160
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
161
-
162
  #### Data Collection and Processing
163
 
164
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
165
 
166
- [More Information Needed]
167
 
168
- #### Who are the source data producers?
169
 
170
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
171
 
172
- [More Information Needed]
 
173
 
174
- ### Annotations [optional]
175
-
176
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
177
 
178
  #### Annotation process
179
 
180
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
181
-
182
- [More Information Needed]
183
-
184
- #### Who are the annotators?
185
-
186
- <!-- This section describes the people or systems who created the annotations. -->
187
-
188
- [More Information Needed]
189
-
190
- #### Personal and Sensitive Information
191
-
192
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
193
-
194
- [More Information Needed]
195
-
196
- ## Bias, Risks, and Limitations
197
-
198
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
199
-
200
- [More Information Needed]
201
-
202
- ### Recommendations
203
-
204
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
205
-
206
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
207
 
208
  ## Citation
209
 
 
  '
  ---

+ # Dataset Card for Image Hand Keypoint Detection

+ ![image/png](hands-dataset.gif)

  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 846 samples.
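
The card's quickstart code (including the `session = fo.launch_app(dataset)` line visible in the hunk header above) sits in context elided from this diff. For readers of the diff, a minimal loading sketch using FiftyOne's Hugging Face integration; the repo id below is an assumption, not something stated in the visible lines:

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Repo id is an assumption for illustration; use this dataset's actual Hub id
dataset = load_from_hub("harpreetsahota/hand_keypoints")

# Browse the 846 samples in the FiftyOne App
session = fo.launch_app(dataset)
```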
 

  ## Dataset Details

+ As part of their research, the authors created a dataset by manually annotating two publicly available image sets: the MPII Human Pose dataset and images from the New Zealand Sign Language (NZSL) Exercises. To date, they collected annotations for 1300 hands on the MPII set and 1500 on NZSL. This combined dataset was split into a training set (2000 hands) and a testing set (800 hands).

+ ### Dataset Description

+ The dataset created in this research is a collection of manually annotated RGB images of hands sourced from the MPII Human Pose dataset and the New Zealand Sign Language (NZSL) Exercises. It contains 2D locations for 21 keypoints on 2800 hands, split into a training set of 2000 hands and a testing set of 800 hands. This dataset was used to train and evaluate the authors' hand keypoint detection methods for single images.

  - **Curated by:** Tomas Simon, Hanbyul Joo, Iain Matthews, Yaser Sheikh
  - **Funded by:** Carnegie Mellon University
  - **Shared by:** [Harpreet Sahota](https://huggingface.co/harpreetsahota), Hacker-in-Residence at Voxel51
  - **License:** [More Information Needed]

  ### Dataset Sources

  - **Paper:** https://arxiv.org/abs/1704.07809
  - **Demo:** http://domedb.perception.cs.cmu.edu/handdb.html

  ## Uses

  ### Direct Use
+ This manually annotated dataset was directly used to train and evaluate their hand keypoint detection methods. The dataset serves as a benchmark to assess the accuracy of their single image 2D hand keypoint detector. It enabled them to train an initial detector and evaluate the improvements gained through their proposed multiview bootstrapping technique. The dataset contains images extracted from YouTube videos depicting everyday human activities (MPII) and images showing a variety of hand poses from people using New Zealand Sign Language (NZSL). These diverse sets of images allowed the researchers to evaluate the generalization capabilities of their detector.
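
The paper reports detector accuracy as PCK (percentage of correct keypoints). As a pointer for anyone benchmarking against this split, a minimal sketch of that metric; the array shapes and threshold units are illustrative assumptions:

```python
import numpy as np

def pck(pred: np.ndarray, gt: np.ndarray, threshold: float) -> float:
    """Fraction of predicted keypoints within `threshold` of ground truth.

    pred, gt: (N, 21, 2) arrays of 2D keypoint locations; `threshold` is in
    the same units as the coordinates (e.g., pixels), an illustrative choice.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)  # (N, 21) Euclidean distances
    return float((dists <= threshold).mean())
```

Sweeping the threshold and plotting PCK against it gives the accuracy curves commonly reported for this benchmark.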
 
 
 
 
 
 
 
 
  ## Dataset Structure

  left_hand: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Keypoints)
  ```
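
Given that schema, a short sketch of reading the annotations back out of a loaded sample; it assumes the `dataset` object from the loading snippet above and uses only the `left_hand` field shown in this excerpt:

```python
# Inspect the hand keypoints on the first sample
sample = dataset.first()
if sample.left_hand is not None:
    for kp in sample.left_hand.keypoints:
        # FiftyOne stores keypoints as (x, y) pairs in relative [0, 1] coordinates
        print(kp.label, kp.points)
```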
  ### Source Data

  #### Data Collection and Processing

+ The dataset is composed of manually annotated RGB images of hands sourced from two existing datasets: MPII and NZSL.

+ **Annotations:** Each annotated image includes 2D locations for 21 keypoints on the hand (see Fig. 4a of the paper for an example; a sketch of the common 21-point layout follows the source list below). These keypoints represent different landmarks on the hand, such as finger tips and joints.

+ **Splits:** The combined dataset of 2800 annotated hands was divided into a training set of 2000 hands and a testing set of 800 hands. The criteria for this split are not explicitly detailed in the paper.

+ **Source datasets:**

+ - **MPII Human Pose dataset:** Contains images from YouTube videos depicting a wide range of everyday human activities. These images vary in quality, resolution, and hand appearance, and include various types of occlusions and hand-object/hand-hand interactions.
+ - **New Zealand Sign Language (NZSL) Exercises:** Features images of people making visible hand gestures for communication. This subset provides a variety of hand poses commonly found in conversational contexts.
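
As referenced above, a sketch of the common 21-point hand layout: one wrist point plus four points per finger. This ordering matches the widely cited CMU/OpenPose hand model and is an assumption to verify against the paper, not something stated in this card:

```python
# Sketch of the commonly cited 21-point hand layout: wrist + 4 points per finger.
FINGERS = ("thumb", "index", "middle", "ring", "pinky")
HAND_KEYPOINTS = ["wrist"] + [f"{f}_{i}" for f in FINGERS for i in range(1, 5)]
assert len(HAND_KEYPOINTS) == 21  # 1 wrist + 5 fingers x 4 joints
```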
+ ### Annotations
 
 
  #### Annotation process
+ The process of manually annotating hand keypoints in single images was challenging due to frequent occlusions caused by hand articulation, viewpoint, and grasped objects (as illustrated in Fig. 2 of the paper). In many cases, annotators had to estimate the locations of occluded keypoints, potentially reducing the accuracy of these annotations.

  ## Citation