---
license: cc-by-nc-4.0
task_categories:
- object-detection
language:
- en
pretty_name: CrowdHuman
size_categories:
- 10K<n<100K
---
# CrowdHuman: A Benchmark for Detecting Human in a Crowd

- 🏠 homepage: https://www.crowdhuman.org/
- 📄 paper: https://arxiv.org/pdf/1805.00123

CrowdHuman is a benchmark dataset for better evaluating detectors in crowd scenarios. The dataset is large, richly annotated, and highly diverse: it contains 15,000, 4,370, and 5,000 images for training, validation, and testing, respectively. The training and validation subsets together contain 470K human instances, with an average of about 23 persons per image and various kinds of occlusions. Each human instance is annotated with a head bounding box, a human visible-region bounding box, and a human full-body bounding box. We hope our dataset will serve as a solid baseline and help promote future research in human detection tasks.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/Gpvj5Yu8QUzxeUIJhGrmJ.png)
*Volume, density, and diversity of different human detection datasets. For a fair comparison, we only show the statistics of the training subsets.*

## 🔍 Samples

|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/dPyyTwCTTZIE2cHRAZmNn.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/rsZAVFTtcocma-Fl7C_QI.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/MQxjxtQap5hGs6FtxXs1_.png)|
|:--:|:--:|:--:|
|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/hcpWVRx6l5HAcLyg8XmxB.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/csivXdrgg_znDNh3quDTR.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/RRPrpesNDYG7hNf2RuWMT.png)|
|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/-4ejs7lZGP9jhG8qBIQV2.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/gAgUfdpj86vw4f_ovb6hT.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548f8779be7bd365d04ab91/T-0bunDoidqaShROa3eKI.png)|

## 📁 Files
- `CrowdHuman_train01.zip`
- `CrowdHuman_train02.zip`
- `CrowdHuman_train03.zip`
- `CrowdHuman_val.zip`
- `CrowdHuman_test.zip`
- `annotation_train.odgt`
- `annotation_val.odgt`
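
Once downloaded, the archives are standard zip files. A minimal sketch for unpacking them into one image directory (the `extract_archives` helper and the `Images/` layout are assumptions for illustration, not an official script):

```python
import zipfile
from pathlib import Path

def extract_archives(archives, out_dir="Images"):
    """Unpack each CrowdHuman zip into a common image directory.

    Note: helper name and target layout are assumptions, not an
    official CrowdHuman tool.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for archive in archives:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(out)
```

For example, `extract_archives(["CrowdHuman_train01.zip", "CrowdHuman_train02.zip", "CrowdHuman_train03.zip"])` collects all training images under `Images/`.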

## 🖨 Data Format
We provide `annotation_train.odgt` and `annotation_val.odgt`, which contain the annotations for the training and validation subsets.

### What is odgt?
`odgt` is a file format in which each line is a JSON object holding the complete annotations for the corresponding image. We prefer this format because it is human-readable.

### Annotation format
```json
JSON{
    "ID" : image_filename,
    "gtboxes" : [gtbox],
}

gtbox{
    "tag" : "person" or "mask",
    "vbox": [x, y, w, h],
    "fbox": [x, y, w, h],
    "hbox": [x, y, w, h],
    "extra" : extra,
    "head_attr" : head_attr,
}

extra{
    "ignore": 0 or 1,
    "box_id": int,
    "occ": int,
}

head_attr{
    "ignore": 0 or 1,
    "unsure": int,
    "occ": int,
}
```
- Keys in `extra` and `head_attr` are **optional**; some of them may not be present in a given record.
- `extra` and `head_attr` carry attributes for the person and head boxes, respectively.
- A `tag` of `mask` means the box covers a crowd, a reflection, something person-like, etc., and should be ignored (the `ignore` flag in its `extra` is `1`).
- `vbox`, `fbox`, and `hbox` denote the visible-region, full-body, and head bounding boxes, respectively.

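The per-line JSON records can be parsed with a few lines of Python. This is a minimal sketch; `load_odgt` and `full_boxes` are illustrative helper names, not part of an official toolkit:

```python
import json

def load_odgt(path):
    """Read an .odgt file: one JSON record per line."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def full_boxes(record):
    """Return full-body boxes [x, y, w, h] of non-ignored person instances."""
    boxes = []
    for gt in record.get("gtboxes", []):
        if gt.get("tag") != "person":
            continue  # "mask" boxes mark regions to ignore
        if gt.get("extra", {}).get("ignore", 0) == 1:
            continue
        boxes.append(gt["fbox"])
    return boxes
```

The same pattern extends to `vbox` and `hbox`, or to filtering on `head_attr` instead of `extra`.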

## ⚠️ Terms of use
By downloading the image data you agree to the following terms:
1. You will use the data only for non-commercial research and educational purposes.
2. You will NOT distribute the above images.
3. Megvii Technology makes no representations or warranties regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose.
4. You accept full responsibility for your use of the data and shall defend and indemnify Megvii Technology, including its employees, officers and agents, against any and all claims arising from your use of the data, including but not limited to your use of any copies of copyrighted images that you may create from the data.

## 🏆 Related Challenge
- [Detection In the Wild Challenge Workshop 2019](https://www.objects365.org/workshop2019.html)

## 📚 Citation
Please cite the following paper if you use our dataset.
```bibtex
@article{shao2018crowdhuman,
  title={CrowdHuman: A Benchmark for Detecting Human in a Crowd},
  author={Shao, Shuai and Zhao, Zijian and Li, Boxun and Xiao, Tete and Yu, Gang and Zhang, Xiangyu and Sun, Jian},
  journal={arXiv preprint arXiv:1805.00123},
  year={2018}
}
```

## 👥 People
- [Shuai Shao*](https://www.sshao.com/)
- [Zijian Zhao*](https://scholar.google.com/citations?user=9Iv3NoIAAAAJ)
- Boxun Li
- [Tete Xiao](https://tetexiao.com/)
- [Gang Yu](https://www.skicyyu.org/)
- [Xiangyu Zhang](https://scholar.google.com/citations?user=yuB-cfoAAAAJ)
- [Jian Sun](https://scholar.google.com/citations?user=ALVSZAYAAAAJ)