---
dataset_info:
  - config_name: challenge_data
    features:
      - name: pos_item_ids
        sequence: string
      - name: pos_item_contents
        sequence: string
      - name: question
        dtype: string
      - name: question_id
        dtype: string
      - name: instruction
        dtype: string
      - name: img_path
        dtype: string
    splits:
      - name: train
        num_bytes: 890417
        num_examples: 6415
    download_size: 169300
    dataset_size: 890417
  - config_name: challenge_passage
    features:
      - name: passage_id
        dtype: string
      - name: passage_content
        dtype: string
      - name: page_screenshot
        dtype: string
    splits:
      - name: train
        num_bytes: 44091445
        num_examples: 47318
    download_size: 24786149
    dataset_size: 44091445
configs:
  - config_name: challenge_data
    data_files:
      - split: train
        path: challenge_data/train-*
  - config_name: challenge_passage
    data_files:
      - split: train
        path: challenge_passage/train-*
---

M2KR-Challenge Dataset

A multimodal retrieval dataset for image-to-document and image+text-to-document matching tasks.

Dataset Overview

This dataset contains two main subsets designed for multimodal retrieval challenges:

  • challenge_data: Query data with images and optional text questions (6,415 samples)
  • challenge_passage: Document collection with textual passages and associated web screenshot paths (47,318 passages)
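
Both subsets can be loaded with the Hugging Face datasets library. Below is a minimal loading sketch; the repository ID is a placeholder assumption, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with this dataset's actual Hub path.
REPO_ID = "<org>/M2KR-Challenge"

# Query side: images plus optional text questions.
queries = load_dataset(REPO_ID, "challenge_data", split="train")

# Document side: the passage pool to retrieve from.
passages = load_dataset(REPO_ID, "challenge_passage", split="train")

print(queries[0]["question_id"], queries[0]["img_path"])
print(passages[0]["passage_id"])
```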

Dataset Structure

challenge_data (6,415 rows)

Columns:

  • img_path: Image filename (string)
  • instruction: Task instruction for description generation
  • question: Optional text query, populated for about 53% of samples (if present, the sample is an image+text-to-document retrieval task; otherwise it is an image-to-document task)
  • question_id: Unique identifier (string)
  • pos_item_ids: Sequence of positive item IDs, i.e. the ground-truth passage_ids (empty; removed for the private test set)
  • pos_item_contents: Sequence of the corresponding positive passage contents (empty; removed for the private test set)

Task Types:

  1. Image-to-Document Retrieval: When question is empty (image query)
  2. Multimodal Retrieval: When question contains text (image + text query)
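
Continuing from the loading sketch above, one way to split the queries by task type, assuming an empty or missing question field marks the image-only case:

```python
def task_type(sample: dict) -> str:
    # A non-empty question makes the sample an image+text (multimodal) query;
    # otherwise it is an image-only query.
    q = sample.get("question")
    return "multimodal" if q and q.strip() else "image_only"

image_only = [s for s in queries if task_type(s) == "image_only"]
multimodal = [s for s in queries if task_type(s) == "multimodal"]
print(len(image_only), len(multimodal))
```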

challenge_passage (47,318 rows)

Columns:

  • passage_id: Unique passage identifier (string)
  • passage_content: Textual description containing:
    • Image description
    • Structured details about persons (birth/death dates, occupations, locations, etc.)
  • page_screenshot: Associated image filename (string)

For the retrieval task, you must retrieve the corresponding passage from this 47,318-passage pool for each sample in challenge_data.
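
As one illustrative, entirely unofficial baseline, the text side of the task can be sketched with an off-the-shelf encoder such as sentence-transformers. This is an assumption for illustration only: it ignores the image modality and is not the PreFLMR retriever cited below.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Embed the passage pool once (slow for all 47K passages; shown for shape only).
passage_ids = passages["passage_id"]
passage_emb = model.encode(passages["passage_content"], convert_to_tensor=True)

# Rank passages for a single text query; image-only queries would need a
# vision encoder instead.
query_emb = model.encode(queries[0]["question"] or "", convert_to_tensor=True)
hits = util.semantic_search(query_emb, passage_emb, top_k=5)[0]
print(queries[0]["question_id"], [passage_ids[h["corpus_id"]] for h in hits])
```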

Images

The image data is provided in separate archives:

  • Web_Image.zip.001
  • Web_Image.zip.002
  • Web_Image.zip.003

These archives contain the web screenshots corresponding to the document passages.

  • query_images.zip: Contains the query images used in the challenge.
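
The numbered parts are typically plain byte-split volumes of a single zip file; under that assumption, they can be concatenated and extracted as follows:

```python
import shutil
import zipfile

# Reassemble the split archive (assumes .001-.003 are sequential byte slices).
with open("Web_Image.zip", "wb") as out:
    for part in ("Web_Image.zip.001", "Web_Image.zip.002", "Web_Image.zip.003"):
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)

with zipfile.ZipFile("Web_Image.zip") as zf:
    zf.extractall("web_images")          # passage screenshots

with zipfile.ZipFile("query_images.zip") as zf:
    zf.extractall("query_images")        # challenge query images
```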

References

For more information, see the PreFLMR paper: https://aclanthology.org/2024.acl-long.289

Citation

If our work helped your research, please kindly cite our paper for PreFLMR:

```bibtex
@inproceedings{lin-etal-2024-preflmr,
    title = "{P}re{FLMR}: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers",
    author = "Lin, Weizhe  and
      Mei, Jingbiao  and
      Chen, Jinghong  and
      Byrne, Bill",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.289",
    pages = "5294--5316",
    abstract = "Large Multimodal Models (LMMs) excel in natural language and visual understanding but are challenged by exacting tasks such as Knowledge-based Visual Question Answering (KB-VQA) which involve the retrieval of relevant information from document collections to use in shaping answers to questions. We present an extensive training and evaluation framework, M2KR, for KB-VQA. M2KR contains a collection of vision and language tasks which we have incorporated into a single suite of benchmark tasks for training and evaluating general-purpose multi-modal retrievers. We use M2KR to develop PreFLMR, a pre-trained version of the recently developed Fine-grained Late-interaction Multi-modal Retriever (FLMR) approach to KB-VQA, and we report new state-of-the-art results across a range of tasks. We also present investigations into the scaling behaviors of PreFLMR intended to be useful in future developments in general-purpose multi-modal retrievers.",
}
```