---
license: cc-by-nc-4.0
task_categories:
- object-detection
language:
- en
pretty_name: CrowdHuman
size_categories:
- 10K<n<100K
---

# CrowdHuman: A Benchmark for Detecting Human in a Crowd

- Homepage: https://www.crowdhuman.org/
- Paper: https://arxiv.org/pdf/1805.00123

CrowdHuman is a benchmark dataset for better evaluating detectors in crowd scenarios. It is large, richly annotated, and highly diverse, containing 15,000, 4,370, and 5,000 images for training, validation, and testing, respectively. The train and validation subsets together hold a total of 470K human instances, with an average of 23 persons per image and various kinds of occlusions. Each human instance is annotated with a head bounding box, a human visible-region bounding box, and a human full-body bounding box. We hope our dataset will serve as a solid baseline and help promote future research on human detection tasks.



*Volume, density and diversity of different human detection datasets. For a fair comparison, we only show the statistics of the training subsets.*

## Files

- `CrowdHuman_train01.zip`
- `CrowdHuman_train02.zip`
- `CrowdHuman_train03.zip`
- `CrowdHuman_val.zip`
- `CrowdHuman_test.zip`
- `annotation_train.odgt`
- `annotation_val.odgt`
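
The training images ship as three separate archives. Below is a minimal sketch using Python's standard `zipfile` module to unpack everything, assuming the archives sit in the working directory; the `Images/` target folder is illustrative, not prescribed by the dataset.

```python
import zipfile

# Assumption: all archives have been downloaded into the working directory.
archives = [
    "CrowdHuman_train01.zip",
    "CrowdHuman_train02.zip",
    "CrowdHuman_train03.zip",
    "CrowdHuman_val.zip",
    "CrowdHuman_test.zip",
]

for name in archives:
    with zipfile.ZipFile(name) as zf:
        zf.extractall("Images")  # illustrative target folder
```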

## Data Format

We provide `annotation_train.odgt` and `annotation_val.odgt`, which contain the annotations of our dataset.

### What is odgt?

`odgt` is a file format in which each line is a JSON object that holds the complete annotations for the corresponding image. We prefer this format because it is reader-friendly.

### Annotation format

```
JSON{
    "ID"        : image_filename,
    "gtboxes"   : [gtbox],
}

gtbox{
    "tag"       : "person" or "mask",
    "vbox"      : [x, y, w, h],
    "fbox"      : [x, y, w, h],
    "hbox"      : [x, y, w, h],
    "extra"     : extra,
    "head_attr" : head_attr,
}

extra{
    "ignore"    : 0 or 1,
    "box_id"    : int,
    "occ"       : int,
}

head_attr{
    "ignore"    : 0 or 1,
    "unsure"    : int,
    "occ"       : int,
}
```

- Keys in `extra` and `head_attr` are **optional**, i.e. some of them may not exist.
- `extra` and `head_attr` contain attributes for the person and the head, respectively.
- A `tag` of `mask` means the box covers a crowd, a reflection, something person-like, etc., and should be ignored (the `ignore` field in `extra` is `1`).
- `vbox`, `fbox`, and `hbox` denote the visible box, full box, and head box, respectively.
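
Since each line of an odgt file is an independent JSON record, the annotations can be loaded with the Python standard library alone. The sketch below is illustrative rather than an official loader; it reads `annotation_val.odgt` and keeps the non-ignored full-body boxes.

```python
import json

def load_odgt(path):
    """Read an odgt file: one JSON record per line."""
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

records = load_odgt("annotation_val.odgt")
for record in records:
    image_id = record["ID"]
    fboxes = []
    for gt in record["gtboxes"]:
        # Skip crowd/reflection/person-like regions and ignored boxes.
        if gt["tag"] == "mask" or gt.get("extra", {}).get("ignore", 0) == 1:
            continue
        fboxes.append(gt["fbox"])  # [x, y, w, h]; "vbox"/"hbox" work the same way
```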

## Terms of use

By downloading the image data you agree to the following terms:

1. You will use the data only for non-commercial research and educational purposes.
2. You will NOT distribute the above images.
3. Megvii Technology makes no representations or warranties regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose.
4. You accept full responsibility for your use of the data and shall defend and indemnify Megvii Technology, including its employees, officers and agents, against any and all claims arising from your use of the data, including but not limited to your use of any copies of copyrighted images that you may create from the data.

## Related Challenge

- [Detection In the Wild Challenge Workshop 2019](https://www.objects365.org/workshop2019.html)

## Citation

Please cite the following paper if you use our dataset.

```bibtex
@article{shao2018crowdhuman,
  title={CrowdHuman: A Benchmark for Detecting Human in a Crowd},
  author={Shao, Shuai and Zhao, Zijian and Li, Boxun and Xiao, Tete and Yu, Gang and Zhang, Xiangyu and Sun, Jian},
  journal={arXiv preprint arXiv:1805.00123},
  year={2018}
}
```

## People

- [Shuai Shao*](https://www.sshao.com/)
- [Zijian Zhao*](https://scholar.google.com/citations?user=9Iv3NoIAAAAJ)
- Boxun Li
- [Tete Xiao](https://tetexiao.com/)
- [Gang Yu](https://www.skicyyu.org/)
- [Xiangyu Zhang](https://scholar.google.com/citations?user=yuB-cfoAAAAJ)
- [Jian Sun](https://scholar.google.com/citations?user=ALVSZAYAAAAJ)