---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: category
    dtype: string
  - name: label
    dtype: int64
  - name: bboxes_table
    sequence:
      sequence: int64
  - name: bboxes_cell
    sequence:
      sequence:
        sequence: int64
  splits:
  - name: train
    num_bytes: 134578038
    num_examples: 1200
  - name: test
    num_bytes: 44974087
    num_examples: 390
  download_size: 162624154
  dataset_size: 179552125
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: other
task_categories:
- image-classification
- object-detection
size_categories:
- 1K<n<10K
---
# Dataset Card for ICDAR2019-cTDaR-TRACKB
This dataset is a resized version of the original cndplab-founder/ICDAR2019_cTDaR, merged with its supplement cndplab-founder/ICDAR2019_cTDaR_dataset_supplement.
You can easily and quickly load it:
```python
from datasets import load_dataset

dataset = load_dataset("dvgodoy/ICDAR2019_cTDaR_TRACKB_resized")
```

```
DatasetDict({
    train: Dataset({
        features: ['image', 'width', 'height', 'category', 'label', 'bboxes_table', 'bboxes_cell'],
        num_rows: 1200
    })
    test: Dataset({
        features: ['image', 'width', 'height', 'category', 'label', 'bboxes_table', 'bboxes_cell'],
        num_rows: 390
    })
})
```
## Dataset Description

- Homepage: ICDAR 2019 cTDaR Dataset
- Repository: GitHub
- Paper:
- Leaderboard: Competition Results
- Point of Contact: [email protected]
### Dataset Summary
From the original ICDAR2019 cTDaR dataset page:
The dataset consists of modern documents and archival ones with various formats, including document images and born-digital formats such as PDF. The annotated contents contain the table entities and cell entities in a document, while we do not deal with nested tables.
This "resized" version contains all the images from "Track B" (table recognition) resized so that the largest dimension (either width or height) is 1000px. The annotations were converted from XML to JSON and boxes are represented in Pascal VOC format (xmin, ymin, xmax, ymax)
.
For the modern dataset no training data is available for Track B.
The original dataset did not contain "modern" tables or annotations for "Track B", so the supplement dataset was merged into it, and its annotations converted accordingly.
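As a quick illustration of the box format, here is a minimal sketch (not part of the original card; output filename and colors are arbitrary choices) that draws the table boxes of one training example with Pillow, using the same `(xmin, ymin, xmax, ymax)` ordering:

```python
from datasets import load_dataset
from PIL import ImageDraw

dataset = load_dataset("dvgodoy/ICDAR2019_cTDaR_TRACKB_resized", split="train")

example = dataset[0]
image = example["image"].convert("RGB")   # boxes refer to this resized image
draw = ImageDraw.Draw(image)

# Each entry of bboxes_table is a single (xmin, ymin, xmax, ymax) box in pixels.
for xmin, ymin, xmax, ymax in example["bboxes_table"]:
    draw.rectangle([xmin, ymin, xmax, ymax], outline="red", width=3)

image.save("example_tables.png")
```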
## Dataset Structure

### Data Instances

A sample from the training set is provided below:
```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=1000x729>,
  'width': 1000,
  'height': 729,
  'category': 'historical',
  'label': 0,
  'bboxes_table': [[...]],
  'bboxes_cell': [[...]]
}
```
### Data Fields

- `image`: A `PIL.Image.Image` object containing a document.
- `width`: the image's width.
- `height`: the image's height.
- `category`: the class label as a string.
- `label`: an `int` classification label.
- `bboxes_table`: list of box coordinates in `(xmin, ymin, xmax, ymax)` format (Pascal VOC).
- `bboxes_cell`: list of lists of box coordinates in `(xmin, ymin, xmax, ymax)` format (Pascal VOC) - the outer list matches the length of the `bboxes_table` list, and each of its elements is a list of cell boxes for the corresponding table (see the sketch after this list).
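To make the nesting concrete, here is a minimal sketch (assuming the dataset has been loaded as shown earlier) that pairs each table box with its list of cell boxes:

```python
from datasets import load_dataset

dataset = load_dataset("dvgodoy/ICDAR2019_cTDaR_TRACKB_resized", split="train")
example = dataset[0]

# bboxes_cell is parallel to bboxes_table: the i-th entry of bboxes_cell
# holds the cell boxes belonging to the i-th table box.
for i, (table_box, cell_boxes) in enumerate(
    zip(example["bboxes_table"], example["bboxes_cell"])
):
    xmin, ymin, xmax, ymax = table_box  # Pascal VOC ordering
    print(f"table {i}: ({xmin}, {ymin}, {xmax}, {ymax}) with {len(cell_boxes)} cells")
```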
#### Class Label Mappings

```json
{
  "0": "historical",
  "1": "modern"
}
```
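Since `label` is stored as a plain `int64` rather than a `ClassLabel`, this mapping has to be applied manually; a small sketch (the `id2label` name is just illustrative, and `category` already stores the same information as a string):

```python
from datasets import load_dataset

dataset = load_dataset("dvgodoy/ICDAR2019_cTDaR_TRACKB_resized", split="train")

# Illustrative helper mirroring the mapping above.
id2label = {0: "historical", 1: "modern"}

modern_only = dataset.filter(lambda ex: id2label[ex["label"]] == "modern")
print(len(modern_only), "modern training documents")
```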
### Data Splits

|               | train | test |
|---------------|-------|------|
| # of examples | 1200  | 390  |
## Additional Information

### Licensing Information
This dataset is a resized and reorganized version of ICDAR2019 cTDaR from the ICDAR 2019 Competition on Table Detection and Recognition, merged with its supplement, which is licensed under the BSD 2-Clause License.