---
license: mit
datasets:
  - AnyModal/flickr30k
base_model:
  - meta-llama/Llama-3.2-1B
  - google/vit-base-patch16-224
language:
  - en
pipeline_tag: image-to-text
library_name: AnyModal
tags:
  - vlm
  - vision
  - multimodal
---

AnyModal/Image-Captioning-Llama-3.2-1B

AnyModal/Image-Captioning-Llama-3.2-1B generates descriptive captions for natural images by pairing a pre-trained visual feature extractor with a compact causal language model. Built within the AnyModal framework, the model integrates a Vision Transformer (ViT) encoder with the Llama 3.2-1B language model and is fine-tuned on the Flickr30k dataset. It demonstrates a promising approach to bridging visual and textual modalities.


Trained On

This model was trained on the Flickr30k Dataset:

From Image Descriptions to Visual Denotations: New Similarity Metrics for Semantic Inference Over Event Descriptions
Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, Svetlana Lazebnik

The dataset comprises 31,000 images collected from Flickr, each annotated with five descriptive sentences written by human annotators. These annotations offer diverse perspectives on real-world scenes and actions, forming a robust basis for image captioning experiments.
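
The caption data is mirrored on the Hub as AnyModal/flickr30k (listed in the metadata above). Below is a minimal sketch for inspecting it with the datasets library (install it separately with pip install datasets); the splits and column names are not documented here, so the snippet simply prints whatever the Hub provides:

from datasets import load_dataset  # pip install datasets

# The dataset id comes from this model card's metadata; splits and column
# names are not documented here, so just inspect what is available.
flickr = load_dataset("AnyModal/flickr30k")
print(flickr)                          # available splits and columns
first_split = next(iter(flickr.values()))
print(first_split[0])                  # one example record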


How to Use

Installation

Install the necessary dependencies:

pip install torch transformers torchvision huggingface_hub tqdm matplotlib Pillow
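
Note that meta-llama/Llama-3.2-1B is a gated repository on the Hugging Face Hub, so downloading it requires an access token for an account that has accepted the Llama 3.2 license. One way to authenticate is shown below; alternatively, pass the token directly via the access_token argument used in the inference example:

from huggingface_hub import login

# Create a token at https://huggingface.co/settings/tokens; the account must
# have been granted access to the gated meta-llama/Llama-3.2-1B repository.
login(token="YOUR_HF_TOKEN")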

Inference

Below is an example of generating captions for an image using this model:

# llm, anymodal, and vision are modules provided by the AnyModal project repository
import llm
import anymodal
import vision
import torch
import os
from PIL import Image
from huggingface_hub import hf_hub_download

# Load language model and tokenizer
llm_tokenizer, llm_model = llm.get_llm(
    "meta-llama/Llama-3.2-1B",
    access_token="GET_YOUR_OWN_TOKEN_FROM_HUGGINGFACE",
    use_peft=False,
)
llm_hidden_size = llm.get_hidden_size(llm_tokenizer, llm_model)

# Load vision model components
image_processor, vision_model, vision_hidden_size = vision.get_image_encoder("google/vit-base-patch16-224", use_peft=False)

# Initialize the vision encoder and the projector that maps ViT features
# into the language model's embedding space
vision_encoder = vision.VisionEncoder(vision_model)
vision_tokenizer = vision.Projector(vision_hidden_size, llm_hidden_size, num_hidden=1)

# Initialize MultiModalModel
multimodal_model = anymodal.MultiModalModel(
    input_processor=None,
    input_encoder=vision_encoder,
    input_tokenizer=vision_tokenizer,
    language_tokenizer=llm_tokenizer,
    language_model=llm_model,
    prompt_text="The description of the given image is: ",
)

# Download pre-trained model weights
if not os.path.exists("image_captioning_model"):
    os.makedirs("image_captioning_model")

hf_hub_download("AnyModal/Image-Captioning-Llama-3.2-1B", filename="input_tokenizer.pt", local_dir="image_captioning_model")
multimodal_model._load_model("image_captioning_model")

# Generate caption for an image
image_path = "example_image.jpg"  # Path to your image
image = Image.open(image_path).convert("RGB")
processed_image = image_processor(image, return_tensors="pt")
processed_image = {key: val.squeeze(0) for key, val in processed_image.items()}  # Remove batch dimension

# Generate caption
generated_caption = multimodal_model.generate(processed_image, max_new_tokens=120)
print("Generated Caption:", generated_caption)

Project and Training Scripts

This model is part of the AnyModal Image Captioning Project.

Explore the full project repository for additional details and potential customization.


Project Details

  • Vision Encoder: Uses a pre-trained Vision Transformer (ViT) model for feature extraction, offering a strong baseline for processing visual information.
  • Projector Network: Maps visual features into a token space aligned with the Llama 3.2-1B language model (a simplified sketch follows after this list).
  • Language Model: Utilizes Llama 3.2-1B, a pre-trained causal language model, to construct coherent and context-sensitive captions.
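
To make the projector's role concrete, here is a simplified, illustrative sketch of such a mapping network. It is not the actual AnyModal Projector implementation; the class name, layer structure, and hidden dimension are assumptions for illustration (ViT-base outputs 768-dimensional patch embeddings, and Llama 3.2-1B uses a 2048-dimensional hidden state):

import torch
import torch.nn as nn

class SimpleProjector(nn.Module):
    """Illustrative projector: maps ViT patch features into the LLM embedding space."""

    def __init__(self, vision_hidden_size: int, llm_hidden_size: int, hidden_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_hidden_size, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_hidden_size),
        )

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_hidden_size)
        # returns:         (batch, num_patches, llm_hidden_size)
        return self.net(vision_features)

# ViT-base produces 768-dimensional embeddings for 196 patches plus a [CLS] token;
# Llama 3.2-1B uses a 2048-dimensional hidden state.
projector = SimpleProjector(vision_hidden_size=768, llm_hidden_size=2048)
patch_features = torch.randn(1, 197, 768)
soft_tokens = projector(patch_features)  # shape: (1, 197, 2048)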

Although trained only on the Flickr30k dataset, the model illustrates how a pre-trained vision encoder and a pre-trained language model can be combined for image captioning, and demonstrates the feasibility of this approach within the AnyModal framework.