Model Card for PixtralGroundCap
This model is a fine-tuned version of Pixtral-12B on the GroundCap dataset for grounded image captioning. It generates detailed image descriptions with explicit grounding tags that link textual descriptions to specific visual elements in the image, using a novel tag system to ground objects (`<gdo>`), actions (`<gda>`), and locations (`<gdl>`) to specific regions in the image.
Model Details
Model Description
- Developed by: Daniel A. P. Oliveira, Lourenço Teodoro, and David Martins de Matos (INESC-ID Lisboa and Instituto Superior Técnico, Universidade de Lisboa)
- Model type: Fine-tuned Pixtral-12B model for grounded image captioning
- Language(s): English
- License: Creative Commons Attribution 4.0
- Finetuned from model: mistral-community/pixtral-12b
Model Sources
- Paper: https://arxiv.org/abs/2502.13898
- Dataset: https://huggingface.co/datasets/daniel3303/GroundCap
Uses
Direct Use
The model is designed for generating grounded image captions that explicitly link textual descriptions to visual elements using three types of grounding tags:
- `<gdo>` for objects
- `<gda>` for actions
- `<gdl>` for locations
Each tag maintains object identity through unique IDs, enabling consistent reference tracking throughout the caption.
Downstream Use
The model can be integrated into:
- Accessibility applications requiring detailed image descriptions
- Content management systems needing verifiable image captions
- Visual question answering systems
- Image retrieval systems
Out-of-Scope Use
The model is not designed for:
- General image classification
- Object detection (requires separate object detection pipeline)
- Video captioning
- Non-English language captioning
How to Get Started with the Model
Input Format
The model expects input in the following format:
```
You are an AI assistant that can see and understand images. I will provide you with an image and the detected objects in it along with their positions and dimensions in the format [id, x,y,width,height].
[DETECTIONS]
[sky-0: 0.41,0.00,0.20,0.15]
[sky-1: 0.62,0.00,0.26,0.10]
[wall-0: 0.01,0.02,0.35,0.86]
[person-0: 0.38,0.35,0.12,0.40]
[person-1: 0.45,0.35,0.08,0.39]
[wall-1: 0.39,0.10,0.35,0.48]
[person-2: 0.71,0.29,0.20,0.51]
[wall-2: 0.75,0.03,0.24,0.88]
[person-3: 0.00,0.57,0.22,0.42]
[handbag-0: 0.21,0.75,0.11,0.23]
[person-4: 0.26,0.48,0.20,0.52]
[floor-wood-0: 0.40,0.59,0.60,0.41]
[/DETECTIONS]
[IMG]
```
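The following is a minimal usage sketch (untested), assuming this repository hosts a PEFT LoRA adapter for the mistral-community/pixtral-12b base model (the Framework versions section lists PEFT 0.13.2). Exact processor and chat-template behavior may vary across transformers versions; the image placeholder in the chat template plays the role of the trailing `[IMG]` token in the format above.

```python
# Minimal inference sketch; assumes a PEFT LoRA adapter on Pixtral-12B.
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, LlavaForConditionalGeneration

base_id = "mistral-community/pixtral-12b"
adapter_id = "daniel3303/PixtralGroundCap"

processor = AutoProcessor.from_pretrained(base_id)
model = LlavaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Build the prompt in the format shown above (shortened detection list).
prompt = (
    "You are an AI assistant that can see and understand images. "
    "I will provide you with an image and the detected objects in it along "
    "with their positions and dimensions in the format [id, x,y,width,height].\n"
    "[DETECTIONS]\n"
    "[person-0: 0.38,0.35,0.12,0.40]\n"
    "[handbag-0: 0.21,0.75,0.11,0.23]\n"
    "[/DETECTIONS]"
)
messages = [
    {"role": "user", "content": [{"type": "text", "text": prompt}, {"type": "image"}]}
]

text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    text=text, images=[Image.open("scene.jpg")], return_tensors="pt"
).to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(out[0], skip_special_tokens=True))
```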
Example Output
The model will generate a grounded caption using three types of tags:
- `<gdo>` for objects
- `<gda>` for actions
- `<gdl>` for locations
Example output:
```
In this scene, a group of individuals is gathered in what appears to be a <gdl class="wall" wall-0 wall-1 wall-2>rugged, makeshift shelter</gdl>. The <gdl class="wall" wall-0 wall-1 wall-2>walls</gdl> are constructed from <gdl class="wall" wall-0 wall-1 wall-2>rough materials</gdl>, giving the space a temporary and utilitarian feel. The <gdl class="sky" sky-0 sky-1>sky</gdl> is visible in the background, suggesting that the shelter is partially open or lacks a complete roof.

Several <gdo class="person" person-0 person-1 person-2 person-3 person-4>people</gdo> are present in the scene, each engaged in their own activities. <gdo class="person" person-0>One individual</gdo> <gda class="sit" person-0>sits</gda> on the <gdl class="floor-wood" floor-wood-0>ground</gdl>, while <gdo class="person" person-1>another person</gdo> <gda class="sit" person-1>is seated</gda> nearby. <gdo class="person" person-2>Another person</gdo> <gda class="sit" person-2>is also sitting</gda> on the <gdl class="floor-wood" floor-wood-0>ground</gdl>, and <gdo class="person" person-3>a fourth individual</gdo> <gda class="sit" person-3>is seated</gda> as well. <gdo class="person" person-4>An additional person</gdo> <gda class="sit" person-4>is sitting</gda> close by.

The <gdo class="handbag" handbag-0>handbag</gdo> is placed on the <gdl class="floor-wood" floor-wood-0>ground</gdl> near one of the individuals, suggesting they might have brought some personal belongings with them. The overall atmosphere of the scene is one of simplicity and resilience, with the individuals making the best of their surroundings in this temporary shelter.
```
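For downstream use, the grounding tags can be extracted with a simple regular expression. The helper below is illustrative only (it is not part of any released code) and assumes the tag format shown above: a tag name (`gdo`, `gda`, or `gdl`), a `class` attribute, one or more object IDs, and the grounded text span.

```python
# Hypothetical parser for the grounding tag format shown above.
import re

TAG_RE = re.compile(
    r'<(gd[oal]) class="(?P<cls>[^"]+)"(?P<ids>[^>]*)>(?P<text>.*?)</\1>'
)

def parse_grounded_caption(caption):
    """Yield (tag, class, object_ids, text) for each grounding tag."""
    for m in TAG_RE.finditer(caption):
        yield m.group(1), m.group("cls"), m.group("ids").split(), m.group("text")

caption = '<gdo class="person" person-0 person-1>people</gdo> sitting'
for tag, cls, ids, text in parse_grounded_caption(caption):
    print(tag, cls, ids, text)  # gdo person ['person-0', 'person-1'] people
```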
Bias, Risks, and Limitations
- The model was trained on movie scenes from MovieNet, which may introduce biases in terms of scene composition, lighting, and camera angles
- Performance may vary for real-world images that differ significantly from movie scenes
- The model relies on pre-detected objects and their bounding boxes; the original paper used Mask2Former for object detection
Recommendations
- Use in conjunction with a robust object detection system (a sketch for formatting detector output follows this list)
- Verify grounding accuracy for critical applications
- Consider the movie-centric nature of the training data when applying to other domains
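Because the model consumes pre-detected boxes, detector output must be rendered into the `[DETECTIONS]` block shown earlier. The helper below is a hypothetical sketch (`format_detections` is not part of any released code); it assumes pixel-space (x, y, width, height) boxes and normalizes them to the [0, 1] coordinates used in the input format.

```python
# Hypothetical helper: turn detector output into a [DETECTIONS] block.
from collections import defaultdict

def format_detections(boxes, image_width, image_height):
    """boxes: list of (class_name, x, y, w, h) tuples in pixels."""
    counts = defaultdict(int)
    lines = ["[DETECTIONS]"]
    for cls, x, y, w, h in boxes:
        obj_id = f"{cls}-{counts[cls]}"  # e.g. person-0, person-1, ...
        counts[cls] += 1
        lines.append(
            f"[{obj_id}: {x / image_width:.2f},{y / image_height:.2f},"
            f"{w / image_width:.2f},{h / image_height:.2f}]"
        )
    lines.append("[/DETECTIONS]")
    return "\n".join(lines)

print(format_detections([("person", 486, 336, 154, 384)], 1280, 960))
# [DETECTIONS]
# [person-0: 0.38,0.35,0.12,0.40]
# [/DETECTIONS]
```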
Training Details
Training Data
The model was trained on the GroundCap dataset, which contains:
- 52,016 images from 77 movies
- 344 human-annotated captions
- 52,016 automatically generated captions
Training Procedure
The training followed a two-stage approach:
Stage 1:
- Training on 52,016 automatically generated captions
- Learning rate: 2×10^-4
- Epochs: 2
- Batch size: 64 (with gradient accumulation)
Stage 2:
- Fine-tuning on 344 human-refined captions
- Learning rate: 2×10^-6
- Epochs: 2
- Batch size: 32 (with gradient accumulation)
Training Hyperparameters
- LoRA Configuration (a PEFT sketch follows this list):
- Rank: 16
- Alpha: 32
- Targeted layers: Self-attention (query, key, value, output) and MLP (gate, up, down)
- Optimizer: AdamW
- Weight decay: 0.01
- Precision: bfloat16
- Hardware: 2x NVIDIA A100 (80GB)
- Training time: 1 day
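The LoRA setup above can be expressed in PEFT roughly as follows. The target module names are assumptions based on Mistral-style layer naming in Pixtral's language model; the actual training code may differ.

```python
# Sketch of the LoRA configuration described above, using PEFT.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,            # rank
    lora_alpha=32,   # alpha
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # self-attention
        "gate_proj", "up_proj", "down_proj",     # MLP
    ],
)
```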
Evaluation
Testing Data, Factors & Metrics
The model was evaluated on:
- 10,000 test images from GroundCap, of which 70 are human-annotated test cases
Metrics
Grounding metrics:
- Precision (P): Correctly grounded objects / Total objects mentioned in caption
- Recall (R): Correctly grounded objects / Total detected objects
- F1 score: Harmonic mean of precision and recall
Caption quality metrics:
- BLEU-4: N-gram overlap with reference captions
- METEOR: Semantic similarity with reference captions
- CIDEr: Consensus-based image description evaluation
- SPICE: Semantic propositional image caption evaluation
- ROUGE-L: Longest common subsequence based evaluation
Combined metric:
- gMETEOR: Harmonic mean of METEOR and grounding F1 score, combining language quality with grounding accuracy
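As a worked check, the harmonic-mean definition of gMETEOR can be verified against the test-set figures reported under Results below (METEOR = 0.23, grounding F1 = 0.69):

```python
# gMETEOR = harmonic mean of METEOR and grounding F1.
meteor, grounding_f1 = 0.23, 0.69
g_meteor = 2 * meteor * grounding_f1 / (meteor + grounding_f1)
print(round(g_meteor, 2))  # 0.35, matching the reported gMETEOR
```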
Human evaluation (5-point Likert scale):
- Object precision: Accuracy of object grounding and tag classification
- Grounding recall: Coverage of detected objects in captions
- Description accuracy: Correctness of described actions and relationships
- Language quality: Grammar, readability, and coherence
- Overall quality: Assessment of caption effectiveness
ChatGPT-4o evaluation (5-point Likert scale):
- Uses same criteria as human evaluation
- Correlations with human judgments:
- Object Precision: 0.81 (Pearson), 0.73 (Spearman)
- Grounding Recall: 0.76 (Pearson), 0.67 (Spearman)
- Description Accuracy: 0.79 (Pearson), 0.77 (Spearman)
- Language Quality: 0.59 (Pearson), 0.44 (Spearman)
- Overall Quality: 0.78 (Pearson), 0.68 (Spearman)
Results
Automatic metrics on test set for PixtralGroundCap:
- Precision: 0.58
- Recall: 0.96
- F1 Score: 0.69
- BLEU-4: 0.19
- METEOR: 0.23
- CIDEr: 0.51
- SPICE: 0.30
- ROUGE-L: 0.37
- gMETEOR: 0.35
Human evaluation results (scale 1-5):
- Object Precision: 4.22
- Grounding Recall: 4.19
- Description Accuracy: 4.08
- Language Quality: 4.91
- Overall Quality: 4.22
ChatGPT-4o evaluation results (scale 1-5):
- Object Precision: 4.21
- Grounding Recall: 4.13
- Description Accuracy: 4.01
- Language Quality: 4.90
- Overall Quality: 4.19
Environmental Impact
- Hardware Type: 2x NVIDIA A100 GPUs
- Hours used: 24 hours
- Cloud Provider: INESC-ID
- Compute Region: Lisbon, Portugal
Citation
BibTeX:
```bibtex
@article{Oliveira2025GroundCapAV,
  title={GroundCap: A Visually Grounded Image Captioning Dataset},
  author={Daniel A. P. Oliveira and Louren{\c{c}}o Teodoro and David Martins de Matos},
  journal={arXiv preprint arXiv:2502.13898},
  year={2025},
  url={https://arxiv.org/abs/2502.13898}
}
```
Model Card Authors
Daniel A. P. Oliveira, Lourenço Teodoro, and David Martins de Matos
Framework versions
- PEFT 0.13.2