---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: category
    dtype: string
  - name: label
    dtype: int64
  - name: bboxes_table
    sequence:
      sequence: int64
  - name: bboxes_cell
    sequence:
      sequence:
        sequence: int64
  splits:
  - name: train
    num_bytes: 134578038
    num_examples: 1200
  - name: test
    num_bytes: 44974087
    num_examples: 390
  download_size: 162624154
  dataset_size: 179552125
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: other
task_categories:
- image-classification
- object-detection
size_categories:
- 1K<n<10K
---
# Dataset Card for ICDAR2019-cTDaR-TRACKB

**This dataset is a resized version of the original [cndplab-founder/ICDAR2019_cTDaR](https://github.com/cndplab-founder/ICDAR2019_cTDaR), merged with its supplement [cndplab-founder/ICDAR2019_cTDaR_dataset_supplement](https://github.com/cndplab-founder/ICDAR2019_cTDaR_dataset_supplement).**

You can easily and quickly load it:

```python
from datasets import load_dataset

dataset = load_dataset("dvgodoy/ICDAR2019_cTDaR_TRACKB_resized")
```

```
DatasetDict({
    train: Dataset({
        features: ['image', 'width', 'height', 'category', 'label', 'bboxes_table', 'bboxes_cell'],
        num_rows: 1200
    })
    test: Dataset({
        features: ['image', 'width', 'height', 'category', 'label', 'bboxes_table', 'bboxes_cell'],
        num_rows: 390
    })
})
```

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)

## Dataset Description

- **Homepage:** [ICDAR 2019 cTDaR Dataset](https://cndplab-founder.github.io/cTDaR2019/dataset-description.html)
- **Repository:** [GitHub](https://github.com/cndplab-founder/ICDAR2019_cTDaR)
- **Paper:**
- **Leaderboard:** [Competition Results](https://cndplab-founder.github.io/cTDaR2019/results.html)
- **Point of Contact:** [[email protected]](mailto:[email protected])

### Dataset Summary

From the original ICDAR2019 cTDaR [dataset](https://cndplab-founder.github.io/cTDaR2019/dataset-description.html) page:

> _The dataset consists of modern documents and archival ones with various formats, including document images and born-digital formats such as PDF. The annotated contents contain the table entities and cell entities in a document, while we do not deal with nested tables._

**This "resized" version contains all the images from "Track B" (table recognition) resized so that the largest dimension (either width or height) is 1000px. The annotations were converted from XML to JSON and boxes are represented in Pascal VOC format `(xmin, ymin, xmax, ymax)`.**

> For the modern dataset no training data is available for Track B.

**The original dataset did not contain "modern" tables or annotations for "Track B", so the [supplement dataset](https://github.com/cndplab-founder/ICDAR2019_cTDaR_dataset_supplement) was merged into it and its annotations were converted accordingly.**
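
Since the boxes are already given in Pascal VOC pixel coordinates for the resized images, they can be drawn directly on an image as a quick sanity check. A minimal sketch, assuming Pillow is installed (the dataset name and field names come from this card; the output filename is arbitrary):

```python
from datasets import load_dataset
from PIL import ImageDraw

dataset = load_dataset("dvgodoy/ICDAR2019_cTDaR_TRACKB_resized")

# Grab one training example; the `image` field decodes to a PIL image.
sample = dataset["train"][0]
image = sample["image"].convert("RGB")  # convert grayscale scans so colored boxes are visible
draw = ImageDraw.Draw(image)

# Table boxes are (xmin, ymin, xmax, ymax) in pixels of the resized image.
for xmin, ymin, xmax, ymax in sample["bboxes_table"]:
    draw.rectangle([xmin, ymin, xmax, ymax], outline="red", width=3)

image.save("sample_with_tables.png")
```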

## Dataset Structure

### Data Instances

A sample from the training set is provided below:
```
{
    'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=1000x729>,
    'width': 1000,
    'height': 729,
    'category': 'historical',
    'label': 0,
    'bboxes_table': [[...]],
    'bboxes_cell': [[...]]
}
```

### Data Fields

- `image`: a `PIL.Image.Image` object containing a document image.
- `width`: the image's width, in pixels.
- `height`: the image's height, in pixels.
- `category`: the class label as a string (`historical` or `modern`).
- `label`: the classification label as an `int` (see the class label mappings below).
- `bboxes_table`: a list of table bounding boxes in Pascal VOC `(xmin, ymin, xmax, ymax)` format.
- `bboxes_cell`: a list of lists of cell bounding boxes in Pascal VOC `(xmin, ymin, xmax, ymax)` format - the outer list has the same length as `bboxes_table`, and each of its elements holds the cells of the corresponding table (see the sketch after the class label mappings below).

<details>
  <summary>Class Label Mappings</summary>

```json
{
  "0": "historical",
  "1": "modern"
}
```

</details>
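
Because each element of `bboxes_cell` corresponds to the table at the same position in `bboxes_table`, the two lists can be iterated in lockstep. A minimal sketch, converting the Pascal VOC boxes to COCO-style `(x, y, width, height)` tuples (`voc_to_coco` is just an illustrative helper name, not part of the dataset):

```python
from datasets import load_dataset

dataset = load_dataset("dvgodoy/ICDAR2019_cTDaR_TRACKB_resized")

def voc_to_coco(box):
    """Convert a Pascal VOC (xmin, ymin, xmax, ymax) box to COCO (x, y, width, height)."""
    xmin, ymin, xmax, ymax = box
    return [xmin, ymin, xmax - xmin, ymax - ymin]

sample = dataset["train"][0]
for table_box, cell_boxes in zip(sample["bboxes_table"], sample["bboxes_cell"]):
    print("table:", voc_to_coco(table_box), "- cells in table:", len(cell_boxes))
```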

### Data Splits

|   |train|test|
|----------|----:|----:|
|# of examples|1200|390|

## Additional Information

### Licensing Information

This dataset is a resized and reorganized version of ICDAR2019 cTDaR from the [ICDAR 2019 Competition on Table Detection and Recognition](https://cndplab-founder.github.io/cTDaR2019/index.html), merged with its [supplement](https://github.com/cndplab-founder/ICDAR2019_cTDaR_dataset_supplement), which is licensed under [BSD 2-Clause License](https://github.com/cndplab-founder/ICDAR2019_cTDaR_dataset_supplement?tab=BSD-2-Clause-1-ov-file#readme).