---
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality: []
pretty_name: YALTAi Segmonto Manuscript and Early Printed Book Dataset
size_categories:
- n<1K
source_datasets: []
tags:
- manuscripts
- LAM
task_categories:
- object-detection
task_ids: []
---

# YALTAi Segmonto Manuscript and Early Printed Book Dataset

## Table of Contents
- [YALTAi Segmonto Manuscript and Early Printed Book Dataset](#yaltai-segmonto-manuscript-and-early-printed-book-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
    - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
    - [Who are the source language producers?](#who-are-the-source-language-producers)
  - [Annotations](#annotations)
    - [Annotation process](#annotation-process)
    - [Who are the annotators?](#who-are-the-annotators)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://doi.org/10.5281/zenodo.6814770](https://doi.org/10.5281/zenodo.6814770)
- **Paper:** [https://arxiv.org/abs/2207.11230](https://arxiv.org/abs/2207.11230)

### Dataset Summary

This dataset contains a subset of the data used in the paper [You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). The paper proposes treating page layout recognition on historical documents as an object detection task, in contrast to the usual pixel-segmentation approach. This dataset contains images from digitised manuscripts and early printed books with the following labels:

- DamageZone
- DigitizationArtefactZone
- DropCapitalZone
- GraphicZone
- MainZone
- MarginTextZone
- MusicZone
- NumberingZone
- QuireMarksZone
- RunningTitleZone
- SealZone
- StampZone
- TableZone
- TitlePageZone

### Supported Tasks and Leaderboards

- `object-detection`: This dataset can be used to train a model for object detection on historic document images.

## Dataset Structure

This dataset has two configurations. Both configurations cover the same data and annotations, but they provide the annotations in different forms to make it easier to integrate the data with existing processing pipelines (a loading sketch follows the list below).

- The first configuration, `YOLO`, uses the original format of the data.
- The second configuration, `COCO`, converts the YOLO format into a format closer to the COCO annotation format. This is done in particular to make it easier to work with the `feature_extractor`s from the `Transformers` models for object detection, which expect data in a COCO-style format.

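Either configuration name can be passed as the second argument to `datasets.load_dataset`. A minimal sketch; the repo ID `user/yaltai-segmonto-dataset` is a placeholder, not the actual Hub ID of this dataset:

```python
from datasets import load_dataset

# "user/yaltai-segmonto-dataset" is a placeholder repo ID; substitute the
# actual Hub ID of this dataset.
yolo = load_dataset("user/yaltai-segmonto-dataset", "YOLO")
coco = load_dataset("user/yaltai-segmonto-dataset", "COCO")

# Each configuration is a DatasetDict with train/validation/test splits.
print({split: yolo[split].num_rows for split in yolo})
```
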
### Data Instances

An example instance from the COCO config:

```python
{'height': 2944,
 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FA413CDA210>,
 'image_id': 0,
 'objects': [{'area': 435956,
   'bbox': [0.0, 244.0, 1493.0, 292.0],
   'category_id': 0,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 88234,
   'bbox': [305.0, 127.0, 562.0, 157.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 5244,
   'bbox': [1416.0, 196.0, 92.0, 57.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 5720,
   'bbox': [1681.0, 182.0, 88.0, 65.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 374085,
   'bbox': [0.0, 540.0, 163.0, 2295.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 577599,
   'bbox': [104.0, 537.0, 253.0, 2283.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 598670,
   'bbox': [304.0, 533.0, 262.0, 2285.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 56,
   'bbox': [284.0, 539.0, 8.0, 7.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 1868412,
   'bbox': [498.0, 513.0, 812.0, 2301.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 307800,
   'bbox': [1250.0, 512.0, 135.0, 2280.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 494109,
   'bbox': [1330.0, 503.0, 217.0, 2277.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 52,
   'bbox': [1734.0, 1013.0, 4.0, 13.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 90666,
   'bbox': [0.0, 1151.0, 54.0, 1679.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []}],
 'width': 2064}
```

An example instance from the YOLO config:

```python
{'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FAA140F2450>,
 'objects': {'bbox': [[747, 390, 1493, 292],
   [586, 206, 562, 157],
   [1463, 225, 92, 57],
   [1725, 215, 88, 65],
   [80, 1688, 163, 2295],
   [231, 1678, 253, 2283],
   [435, 1675, 262, 2285],
   [288, 543, 8, 7],
   [905, 1663, 812, 2301],
   [1318, 1653, 135, 2280],
   [1439, 1642, 217, 2277],
   [1737, 1019, 4, 13],
   [26, 1991, 54, 1679]],
  'label': [0, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]}}
```

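The two configs describe the same boxes in different conventions: the YOLO config stores what appear to be absolute-pixel `[x_center, y_center, width, height]` boxes, while the COCO config stores `[x_min, y_min, width, height]` (compare `[586, 206, 562, 157]` above with `[305.0, 127.0, 562.0, 157.0]` in the COCO example). A minimal conversion sketch, assuming that reading of the two formats:

```python
def yolo_to_coco_bbox(bbox):
    """Convert an absolute-pixel [x_center, y_center, width, height] box
    to a COCO-style [x_min, y_min, width, height] box."""
    x_c, y_c, w, h = bbox
    return [x_c - w / 2, y_c - h / 2, w, h]

# e.g. yolo_to_coco_bbox([586, 206, 562, 157]) -> [305.0, 127.5, 562, 157],
# which matches the COCO example above up to rounding.
```
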
### Data Fields

The fields for the YOLO config:

- `image`: the image
- `objects`: the annotations, which consist of:
  - `bbox`: a list of bounding boxes for the image
  - `label`: a list of labels for this image

The fields for the COCO config:

- `height`: height of the image
- `width`: width of the image
- `image`: the image
- `image_id`: id for the image
- `objects`: annotations in COCO format, consisting of a list of dictionaries with the following keys:
  - `bbox`: bounding boxes for the images
  - `category_id`: a label for the image
  - `image_id`: id for the image
  - `iscrowd`: the COCO `iscrowd` flag
  - `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)

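One way to sanity-check the annotations is to draw the COCO-config boxes onto the page image with Pillow. A minimal sketch (the function name is illustrative):

```python
from PIL import ImageDraw

def draw_boxes(example):
    """Overlay the COCO-config [x_min, y_min, width, height] boxes
    on the page image."""
    image = example["image"].convert("RGB")
    draw = ImageDraw.Draw(image)
    for obj in example["objects"]:
        x, y, w, h = obj["bbox"]
        draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
    return image
```
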
### Data Splits

The dataset contains train, validation and test splits with the following number of examples per split:

|          | train | validation | test |
|----------|-------|------------|------|
| examples | 196   | 22         | 135  |

## Dataset Creation

> [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The test set is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. (p. 8)

### Curation Rationale

[More information needed]

### Source Data

#### Initial Data Collection and Normalization

[More information needed]

#### Who are the source language producers?

[More information needed]

### Annotations

[More information needed]

#### Annotation process

[More information needed]

#### Who are the annotators?

[More information needed]

### Personal and Sensitive Information

This data does not contain information relating to living individuals.

## Considerations for Using the Data

### Social Impact of Dataset

There are a growing number of datasets related to page layout for historical documents. This dataset offers a different approach to annotating such documents, focusing on object detection rather than pixel-level annotations. Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition.

### Discussion of Biases

Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed.

### Other Known Limitations

[More information needed]

## Additional Information

### Dataset Curators

[More information needed]

### Licensing Information

[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)

### Citation Information

```bibtex
@dataset{clerice_thibault_2022_6814770,
  author       = {Clérice, Thibault},
  title        = {{YALTAi: Segmonto Manuscript and Early Printed Book
                   Dataset}},
  month        = jul,
  year         = 2022,
  publisher    = {Zenodo},
  version      = {1.0.0},
  doi          = {10.5281/zenodo.6814770},
  url          = {https://doi.org/10.5281/zenodo.6814770}
}
```

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6814770.svg)](https://doi.org/10.5281/zenodo.6814770)

### Contributions

Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.