---
license: apache-2.0
task_categories:
  - image-segmentation
  - image-to-text
  - text-generation
language:
  - en
pretty_name: Pix2Cap-COCO
size_categories:
  - 10K<n<100K
---

# Pix2Cap-COCO


## Dataset Description

Pix2Cap-COCO is the first pixel-level captioning dataset, derived from the COCO 2017 panoptic dataset and designed to provide more precise visual descriptions than traditional region-level captioning datasets. It consists of 20,550 images, partitioned into a training set (18,212 images) and a validation set (2,338 images) that mirror the original COCO split. The dataset includes 167,254 detailed pixel-level captions, averaging 22.94 words each. Unlike datasets such as Visual Genome, which contain significant redundancy, Pix2Cap-COCO provides exactly one caption per mask, eliminating repetition and improving the clarity of object representation.

Pix2Cap-COCO offers a tighter match between captions and visual content, supporting tasks such as visual understanding, spatial reasoning, and object-interaction analysis. With its larger number of images and more detailed captions, it marks a significant improvement over existing region-level captioning datasets.
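
For quick experimentation, the dataset can be pulled from the Hugging Face Hub. A minimal sketch, assuming the repo id `geshang/Pix2Cap-COCO` and that the repository is loadable via the `datasets` library (check the dataset viewer for the actual field names):

```python
# Minimal loading sketch. Assumptions: the Hub repo id is
# geshang/Pix2Cap-COCO and it is compatible with `datasets.load_dataset`;
# the field names are not documented here, so inspect them before use.
from datasets import load_dataset

ds = load_dataset("geshang/Pix2Cap-COCO")

train = ds["train"]        # 18,212 images per this card
val = ds["validation"]     # 2,338 images per this card

example = train[0]
print(example.keys())      # inspect the real schema before relying on it
```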

## Dataset Version

1.0

## Languages

English

## Task(s)

- Pixel-level Captioning: Generating detailed pixel-level captions for segmented objects in images.
- Visual Reasoning: Analyzing object relationships and spatial interactions in scenes.

## Use Case(s)

Pix2Cap-COCO is designed for tasks that require detailed visual understanding and caption generation, including:

- Object detection and segmentation with contextual captions
- Spatial reasoning and understanding of spatial relations
- Object-interaction analysis and reasoning
- Improving vision-language models with more detailed descriptions of visual content

## Example(s)

Each example pairs a `file_name` with its per-mask `descriptions` (the example images are omitted here):

**000000231527.png**

1. Another glass cup filled with orange jam or marmalade, but slightly smaller in size.
2. A glass cup filled with orange jam or marmalade; it has an open top and is placed toward the left side of the table.
3. A wooden-handled knife rests on the table close to a sliced piece of orange.
4. Positioned next to this one, a whole uncut orange has a bright color indicating ripeness.
5. This is a half-sliced orange with juicy pulp visible, placed on the white cloth of the dining table.
6. A juicy slice of an orange that lies flat on the table near the knife.
7. A whole uncut orange sitting next to another one; both are positioned at the top-right corner of the image.
8. The dining table is covered with a white cloth, and various items are placed on it, including cups of orange jam, slices of oranges, and a knife.

**000000357081.png**

1. The grass is lush and green, covering the ground uniformly. It appears well-maintained and provides a natural base for the other objects in the image.
2. The trees are in the background, their outlines slightly blurred but still visible. They stand tall and provide a contrasting dark green backdrop to the bright foreground.
3. This cow is larger, with a white body adorned with large black spots. It's standing upright and appears healthy and well-fed.
4. This smaller cow has similar coloring but is distinguished by its size and posture: its head is down, suggesting it might be grazing.

**000000407298.png**

1. A child is visible from the chest up, wearing a light blue shirt. The child has curly hair and a cheerful expression, with eyes looking towards something interesting.
2. The glove is tan and well-worn, with dark brown lacing. It's open and appears to be in the act of catching a ball.
3. The background consists of vibrant green grass illuminated by natural light, providing a fresh and open atmosphere.
4. A white baseball with brown stitching is partially inside the baseball glove, appearing as if it has just been caught.
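
Since each caption corresponds to a panoptic mask, per-segment pixel masks can be recovered from COCO-panoptic-style PNGs, where each pixel's segment id is encoded as `R + 256*G + 256**2*B`. A minimal sketch, assuming the segmentation PNGs follow that encoding and that a (hypothetical) JSON file maps segment ids to captions:

```python
# Minimal sketch of pairing pixel masks with captions.
# Assumptions: segmentation PNGs use the COCO panoptic id encoding, and
# "captions.json" (hypothetical name) maps segment ids to caption strings.
import json

import numpy as np
from PIL import Image

def decode_segment_ids(png_path: str) -> np.ndarray:
    """Decode per-pixel segment ids from an RGB panoptic PNG."""
    rgb = np.asarray(Image.open(png_path).convert("RGB"), dtype=np.uint32)
    return rgb[..., 0] + 256 * rgb[..., 1] + 256**2 * rgb[..., 2]

ids = decode_segment_ids("000000231527.png")
with open("captions.json") as f:            # hypothetical id -> caption mapping
    captions = json.load(f)

for seg_id in np.unique(ids):
    mask = ids == seg_id                    # boolean pixel mask for this segment
    print(seg_id, int(mask.sum()), "px:", captions.get(str(seg_id), "<background>"))
```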

## Dataset Analysis


### Data Scale

- Total Images: 20,550
- Training Images: 18,212
- Validation Images: 2,338
- Total Captions: 167,254

### Caption Quality

- Average Words per Caption: 22.94
- Average Sentences per Caption: 2.73
- Average Nouns per Caption: 7.08
- Average Adjectives per Caption: 3.46
- Average Verbs per Caption: 3.42

Pix2Cap-COCO captions are significantly more detailed than those in datasets such as Visual Genome, which averages only 5.09 words per caption. These detailed captions allow the dataset to capture intricate relationships within scenes while demonstrating a balanced use of linguistic elements. Pix2Cap-COCO excels at capturing complex spatial relationships, with hierarchical annotations that describe both coarse (e.g., 'next to', 'above') and fine-grained spatial relations (e.g., 'partially occluded by', 'vertically aligned with').
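
As a sanity check, statistics like these can be reproduced with an off-the-shelf POS tagger. A rough sketch using spaCy (assuming captions are available as plain strings; exact numbers depend on the tokenizer and tagger used):

```python
# Rough sketch of reproducing the caption statistics above with spaCy.
# Assumptions: captions are plain strings, and en_core_web_sm is installed
# (python -m spacy download en_core_web_sm). Counts vary with the tagger.
import spacy

nlp = spacy.load("en_core_web_sm")

captions = [
    "A wooden-handled knife rests on the table close to a sliced piece of orange.",
    "A juicy slice of an orange that lies flat on the table near the knife.",
]

words = sents = nouns = adjs = verbs = 0
for doc in nlp.pipe(captions):
    words += sum(not tok.is_punct for tok in doc)   # word count, punctuation excluded
    sents += sum(1 for _ in doc.sents)
    nouns += sum(tok.pos_ == "NOUN" for tok in doc)
    adjs += sum(tok.pos_ == "ADJ" for tok in doc)
    verbs += sum(tok.pos_ == "VERB" for tok in doc)

n = len(captions)
print(f"avg words: {words / n:.2f}, avg sentences: {sents / n:.2f}")
print(f"avg nouns: {nouns / n:.2f}, adjectives: {adjs / n:.2f}, verbs: {verbs / n:.2f}")
```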

## License

This dataset is released under the Apache 2.0 License. Please ensure that you comply with the terms before using the dataset.

## Citation

If you use this dataset in your work, please cite the original paper:

```bibtex
@article{you2025pix2cap,
  title={Pix2Cap-COCO: Advancing Visual Comprehension via Pixel-Level Captioning},
  author={Zuyao You and Junke Wang and Lingyu Kong and Bo He and Zuxuan Wu},
  journal={arXiv preprint arXiv:2501.13893},
  year={2025}
}
```

## Acknowledgments

Pix2Cap-COCO is built upon the COCO 2017 panoptic dataset, with the annotation pipeline powered by Set-of-Mark prompting and GPT-4V.