finetuned_paligemma_vqav2_small

This model is a fine-tuned version of google/paligemma-3b-pt-224, trained with the QLoRA technique on a small subset of the VQAv2 dataset prepared by Merve.
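For context, a QLoRA setup along these lines can be assembled with PEFT and bitsandbytes. The sketch below is not the exact training script for this model; the 4-bit quantization settings, LoRA rank, and target modules are assumptions for illustration.

import torch
from transformers import BitsAndBytesConfig, PaliGemmaForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load the base model quantized to 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = PaliGemmaForConditionalGeneration.from_pretrained(
    "google/paligemma-3b-pt-224",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters to the attention projections. The rank and
# target modules here are hypothetical; the card does not state them.
lora_config = LoraConfig(
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable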

How to Use?

import torch
import requests

from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

pretrained_model_id = "google/paligemma-3b-pt-224"
finetuned_model_id = "pyimagesearch/finetuned_paligemma_vqav2_small"

# The processor comes from the base model; the fine-tuned weights are loaded on top.
processor = AutoProcessor.from_pretrained(pretrained_model_id)
finetuned_model = PaliGemmaForConditionalGeneration.from_pretrained(finetuned_model_id)

prompt = "What is behind the cat?"
image_file = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cat.png?download=true"
raw_image = Image.open(requests.get(image_file, stream=True).raw)

# Preprocess the image-question pair and generate an answer.
inputs = processor(images=raw_image.convert("RGB"), text=prompt, return_tensors="pt")
output = finetuned_model.generate(**inputs, max_new_tokens=20)

# The decoded sequence repeats the prompt, so slice it off to keep only the answer.
print(processor.decode(output[0], skip_special_tokens=True)[len(prompt):])
# gramophone
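Note that the repository stores only the LoRA adapter weights (about 11.3M parameters in F32 safetensors), not the full model. With peft installed, from_pretrained resolves the google/paligemma-3b-pt-224 base weights automatically and applies the adapter on top.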

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2
  • num_epochs: 2
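For reference, these values map onto the transformers Trainer API roughly as follows. This is a sketch assuming the standard TrainingArguments were used; output_dir is a placeholder, and the total train batch size of 16 follows from train_batch_size x gradient_accumulation_steps (4 x 4).

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned_paligemma_vqav2_small",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 4 * 4 = 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=2,
)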

Training results

(training results plot; image not reproduced here)

Framework versions

  • PEFT 0.13.0
  • Transformers 4.46.0.dev0
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.20.0