---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: question_id
      dtype: int64
    - name: question
      dtype: string
    - name: answers
      sequence: string
    - name: data_split
      dtype: string
    - name: ocr_results
      struct:
        - name: page
          dtype: int64
        - name: clockwise_orientation
          dtype: float64
        - name: width
          dtype: int64
        - name: height
          dtype: int64
        - name: unit
          dtype: string
        - name: lines
          list:
            - name: bounding_box
              sequence: int64
            - name: text
              dtype: string
            - name: words
              list:
                - name: bounding_box
                  sequence: int64
                - name: text
                  dtype: string
                - name: confidence
                  dtype: string
    - name: other_metadata
      struct:
        - name: ucsf_document_id
          dtype: string
        - name: ucsf_document_page_no
          dtype: string
        - name: doc_id
          dtype: int64
        - name: image
          dtype: string
  splits:
    - name: train
      num_examples: 39463
    - name: validation
      num_examples: 5349
    - name: test
      num_examples: 5188
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: mit
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 10K<n<100K
---

# Dataset Card for DocVQA

## Dataset Description

### Dataset Summary

The DocVQA dataset is a document visual question answering dataset introduced in Mathew et al. (2021). It consists of 50,000 questions defined on 12,000+ document images.

## Usage

This dataset can be used with current releases of the Hugging Face `datasets` library. Here is an example that loads the train split and inspects the fields of a single example; a sketch of a custom collator to bundle batches in a trainable way follows it.


```python
from datasets import load_dataset

# Loading with split="train" returns a Dataset, not a DatasetDict.
docvqa_dataset = load_dataset("pixparse/docvqa-single-page-questions", split="train")

next(iter(docvqa_dataset)).keys()
>>> dict_keys(['image', 'question_id', 'question', 'answers', 'data_split', 'ocr_results', 'other_metadata'])
```
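
Below is a minimal sketch of such a collator, assuming PyTorch is installed. It keeps the decoded images in a plain Python list, since the default collation cannot stack PIL images; a real training pipeline would run them through a model-specific processor at this point.

```python
from torch.utils.data import DataLoader

def collate_fn(batch):
    # Keep PIL images as a list; a model-specific processor (tokenizer +
    # image processor) would turn these into tensors in a real training loop.
    return {
        "images": [example["image"] for example in batch],
        "questions": [example["question"] for example in batch],
        "answers": [example["answers"] for example in batch],
    }

loader = DataLoader(docvqa_dataset, batch_size=4, collate_fn=collate_fn)
batch = next(iter(loader))
```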

`image` is the document image, which `datasets` decodes into a PIL image by default. `answers` is a list of acceptable answers, aligned with the expected inputs to the ANLS metric.
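
For reference, here is a self-contained sketch of the ANLS scoring rule (not the official evaluation code): each prediction is scored against every acceptable answer with a normalized Levenshtein similarity, answers whose normalized edit distance reaches the threshold (0.5, as in the DocVQA paper) contribute zero, and the best match is kept.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic two-row dynamic-programming edit distance.
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,               # deletion
                current[j - 1] + 1,            # insertion
                previous[j - 1] + (ca != cb),  # substitution
            ))
        previous = current
    return previous[-1]


def anls(prediction: str, answers: list[str], tau: float = 0.5) -> float:
    # Keep the best normalized similarity over the answer list; any answer
    # whose normalized edit distance is >= tau contributes 0.
    best = 0.0
    for answer in answers:
        p, a = prediction.lower().strip(), answer.lower().strip()
        if not p and not a:
            return 1.0
        nl = levenshtein(p, a) / max(len(p), len(a))
        if nl < tau:
            best = max(best, 1.0 - nl)
    return best


anls("1988", ["1988", "the year 1988"])  # -> 1.0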

The dataset can then be iterated over normally and yields one question per example. Many questions rely on the same image, so there is some amount of image duplication across examples.
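
As a sketch of walking the nested `ocr_results` structure, with field names taken from the schema above (the schema declares `lines` and `words` as lists of structs, so each entry comes back as a dict):

```python
# Reuses docvqa_dataset from the loading example above.
example = next(iter(docvqa_dataset))
ocr = example["ocr_results"]
print(ocr["width"], ocr["height"], ocr["unit"])

# Each OCR line carries its own bounding box plus word-level boxes.
for line in ocr["lines"][:3]:
    print(line["text"], line["bounding_box"])
    for word in line["words"]:
        print("  ", word["text"], word["confidence"])
```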

## Data Splits

### Train

- 10,194 images, 39,463 questions and answers.

### Validation

- 1,286 images, 5,349 questions and answers.

### Test

- 1,287 images, 5,188 questions.
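
The image counts above can be spot-checked by counting distinct values of `other_metadata["image"]` per split. This assumes that field identifies the source page image, which is an inference from the schema rather than documented behavior.

```python
from datasets import load_dataset

for split in ("train", "validation", "test"):
    ds = load_dataset("pixparse/docvqa-single-page-questions", split=split)
    # Reading a single column avoids decoding any images.
    metadata = ds["other_metadata"]
    unique_images = {m["image"] for m in metadata}
    print(f"{split}: {len(unique_images)} images, {len(ds)} questions")
```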

## Additional Information

### Dataset Curators

Pablo Montalvo, Ross Wightman

### Licensing Information

MIT

### Citation Information

Mathew, Minesh, Dimosthenis Karatzas, and C. V. Jawahar. "DocVQA: A Dataset for VQA on Document Images." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021.