---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: query
      dtype: string
    - name: relevant
      dtype: int64
    - name: clip_score
      dtype: float64
    - name: inat24_image_id
      dtype: int64
    - name: inat24_file_name
      dtype: string
    - name: supercategory
      dtype: string
    - name: category
      dtype: string
    - name: iconic_group
      dtype: string
    - name: inat24_species_id
      dtype: int64
    - name: inat24_species_name
      dtype: string
    - name: latitude
      dtype: float64
    - name: longitude
      dtype: float64
    - name: location_uncertainty
      dtype: float64
    - name: date
      dtype: string
    - name: license
      dtype: string
    - name: rights_holder
      dtype: string
    - name: query_id
      dtype: int64
  splits:
    - name: validation
      num_bytes: 369572974
      num_examples: 4000
    - name: test
      num_bytes: 1513809798
      num_examples: 16000
  download_size: 1879445739
  dataset_size: 1883382772
configs:
  - config_name: default
    data_files:
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# INQUIRE-Benchmark-small

INQUIRE is a text-to-image retrieval benchmark designed to challenge multimodal models with expert-level queries about the natural world. The dataset aims to emulate the real-world image retrieval and analysis problems faced by scientists working with large-scale image collections. We use this benchmark to improve Sagecontinuum's text-to-image retrieval systems.

## Dataset Details

This dataset was built from INQUIRE-Rerank, with additional modifications to support full-dataset retrieval. Please refer to modify_inquire_rerank.ipynb to see the modifications we made.

### INQUIRE-Rerank Details

INQUIRE-Rerank is created from 250 expert-level queries. The task fixes an initial ranking of 100 images per query, obtained with CLIP ViT-H-14 zero-shot retrieval over the full 5-million-image iNat24 dataset. The challenge is to rerank the 100 images for each query so that relevant images receive high scores (each query can have many relevant images). This fixed starting point makes reranking evaluation consistent and saves you from running the initial retrieval yourself. If you're interested in full-dataset retrieval, check out INQUIRE-Fullrank, available from the GitHub repo.
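A reranking is typically scored per query with average precision (AP): sort the images by the model's score and average the precision at each relevant position. A minimal sketch, using the `clip_score` and `relevant` field names from the schema above (the toy scores and labels are made up for illustration):

```python
def average_precision(scores, labels):
    """AP of one query's ranking: sort by score descending, then
    average the precision measured at each relevant position."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    hits, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            ap += hits / rank
    return ap / max(hits, 1)  # 0.0 when the query has no relevant images

# Toy ranking: 5 images, 2 relevant (at ranks 1 and 3 after sorting).
scores = [0.31, 0.27, 0.25, 0.22, 0.18]   # stand-in for clip_score
labels = [1, 0, 1, 0, 0]                  # stand-in for relevant
print(average_precision(scores, labels))  # → (1/1 + 2/3) / 2 ≈ 0.833
```

Averaging AP over all 250 queries gives mean average precision (mAP), a common summary metric for reranking benchmarks.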

## Loading the Dataset

To load the dataset with Hugging Face `datasets`, first `pip install datasets`, then run:

```python
from datasets import load_dataset

inquire = load_dataset("sagecontinuum/INQUIRE-Benchmark-small", split="validation")  # or "test"
```
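Once loaded, a common first step is to group the rows by query and order each group's images by the precomputed CLIP score, as a baseline ranking before applying your own reranker. A sketch using the `query_id`, `inat24_image_id`, `clip_score`, and `relevant` column names from the schema above; the rows here are a tiny in-memory stand-in for the loaded split:

```python
from collections import defaultdict

# Stand-in rows; with the real dataset, iterate over the loaded split instead.
rows = [
    {"query_id": 7, "inat24_image_id": 101, "clip_score": 0.21, "relevant": 0},
    {"query_id": 7, "inat24_image_id": 102, "clip_score": 0.34, "relevant": 1},
    {"query_id": 9, "inat24_image_id": 205, "clip_score": 0.28, "relevant": 1},
]

# Group rows by query, then sort each group by clip_score, best first.
by_query = defaultdict(list)
for row in rows:
    by_query[row["query_id"]].append(row)
for group in by_query.values():
    group.sort(key=lambda r: r["clip_score"], reverse=True)

print([r["inat24_image_id"] for r in by_query[7]])  # → [102, 101]
```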

## Additional Details

For additional details, check out INQUIRE's paper and more.

🌐 Website · 📖 Paper · GitHub

## Citations

```bibtex
@article{vendrow2024inquire,
  title={INQUIRE: A Natural World Text-to-Image Retrieval Benchmark},
  author={Vendrow, Edward and Pantazis, Omiros and Shepard, Alexander and Brostow, Gabriel and Jones, Kate E and Mac Aodha, Oisin and Beery, Sara and Van Horn, Grant},
  journal={NeurIPS},
  year={2024},
}
```