nlivathinos committed · Commit 266dc3e · verified · 1 Parent(s): 85cadd4

Update README.md

Files changed (1): README.md (+114, -0)
configs:
  - split: test
    path: data/test-*
---
# Dataset Card for the Docling-DocLayNet Dataset

## Dataset Description

- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043

### Dataset Summary

This dataset is an extension of the [original DocLayNet dataset](https://github.com/DS4SD/DocLayNet) which embeds the PDF files of the document images in a binary column (a short loading sketch follows the feature list below).

DocLayNet provides page-by-page layout segmentation ground truth, using bounding boxes for 11 distinct class labels, on 80,863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:

1. *Human annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold standard in layout segmentation through human recognition and interpretation of each page layout.
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts, and Manuals.
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and of an upper bound on the prediction accuracy achievable with ML models.
5. *Pre-defined train, test, and validation sets*: DocLayNet provides fixed splits to ensure proportional representation of the class labels and to avoid leakage of unique layout styles across the sets.
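
A minimal sketch of how this dataset might be loaded with the `datasets` library is shown below. The repository id is a placeholder assumption (this card does not state it); the split names come from the Data Splits section.

```python
from datasets import load_dataset

# Placeholder repo id, not confirmed by this card; replace with the actual dataset location.
REPO_ID = "ds4sd/docling-doclaynet"

# Load only the validation split to keep the first download small.
ds = load_dataset(REPO_ID, split="val")

print(ds)            # number of rows and column names
print(ds[0].keys())  # fields available on a single page record
```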

## Dataset Structure

### Data Fields

* `image`: PIL image of the page, resized to a square of 1025 x 1025 px.
* `bboxes`: bounding-box annotations in COCO format for each page image.
* `category_id`: integer representation of the segmentation labels (see the mapping below).
* `segmentation`:
* `area`:
* `pdf_cells`:
* `metadata`:
* `pdf`: binary blob with the original PDF of the page (see the usage sketch after the label mapping).

This is the mapping between the labels and the `category_id`:

```
1: "caption"
2: "footnote"
3: "formula"
4: "list_item"
5: "page_footer"
6: "page_header"
7: "picture"
8: "section_header"
9: "table"
10: "text"
11: "title"
```
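
As a rough illustration of how these fields fit together, the sketch below takes one record, writes the embedded `pdf` blob to disk, and overlays the annotated bounding boxes on the page image using the label mapping above. It assumes `bboxes` holds COCO-style `[x, y, width, height]` lists aligned index-by-index with `category_id`, and that `pdf` is raw bytes; adjust if the actual feature types differ.

```python
from PIL import ImageDraw

# Label mapping copied from the list above.
ID2LABEL = {
    1: "caption", 2: "footnote", 3: "formula", 4: "list_item",
    5: "page_footer", 6: "page_header", 7: "picture", 8: "section_header",
    9: "table", 10: "text", 11: "title",
}

record = ds[0]  # `ds` as loaded in the earlier sketch

# 1) Write the embedded PDF blob back to a standalone file
#    (assumption: `pdf` is stored as raw bytes).
with open("page.pdf", "wb") as f:
    f.write(record["pdf"])

# 2) Draw the layout annotations on the page image
#    (assumption: COCO [x, y, width, height] boxes, parallel to `category_id`).
page = record["image"].convert("RGB")
draw = ImageDraw.Draw(page)
for (x, y, w, h), cat_id in zip(record["bboxes"], record["category_id"]):
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, y), ID2LABEL[cat_id], fill="red")
page.save("page_annotated.png")
```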

The COCO image records are defined as in this example:

```js
...
{
  "id": 1,
  "width": 1025,
  "height": 1025,
  "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",

  // Custom fields:
  "doc_category": "financial_reports", // high-level document category
  "collection": "ann_reports_00_04_fancy", // sub-collection name
  "doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
  "page_no": 9, // page number in original document
  "precedence": 0, // annotation order; non-zero in case of redundant double- or triple-annotation
},
...
```

The `doc_category` field uses one of the following constants:

```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
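
For a quick breakdown of the corpus by document category, something along the following lines works. Note that this card does not pin down whether `doc_category` is a top-level column or nested inside `metadata`, so the helper below checks both.

```python
from collections import Counter

def get_doc_category(record):
    # Assumption: `doc_category` is either a top-level field or nested in `metadata`.
    if "doc_category" in record:
        return record["doc_category"]
    return record["metadata"]["doc_category"]

# Count pages per category, e.g. financial_reports, patents, ...
counts = Counter(get_doc_category(r) for r in ds)
print(counts)
```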

### Data Splits

The dataset provides three splits (a small access example follows the list):
- `train`
- `val`
- `test`
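
To work with all three splits at once, the whole `DatasetDict` can be loaded; the split names are taken from this card, while the repository id remains the placeholder assumed in the first sketch.

```python
from datasets import load_dataset

dsets = load_dataset("ds4sd/docling-doclaynet")  # placeholder repo id, see above
for split_name in ("train", "val", "test"):
    print(split_name, len(dsets[split_name]))
```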

## Additional Information

### Citation Information

"DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis" (KDD 2022).

Birgit Pfitzmann ([email protected])
Christoph Auer ([email protected])
Michele Dolfi ([email protected])
Ahmed Nassar ([email protected])
Peter Staar ([email protected])

arXiv link: https://arxiv.org/abs/2206.01062

```bib
@article{doclaynet2022,
  title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis},
  doi = {10.1145/3534678.3539043},
  url = {https://arxiv.org/abs/2206.01062},
  author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
  year = {2022}
}
```