---
title: LoRACaptioner
emoji: 🤠
colorFrom: red
colorTo: green
sdk: gradio
sdk_version: 5.25.2
app_file: demo.py
pinned: false
---

# LoRACaptioner

- **Image Captioning**: Automatically generate detailed, structured captions for your LoRA dataset.
- **Prompt Optimization**: Enhance prompts during inference for higher-quality outputs.

## Installation

### Prerequisites

- Python 3 with `venv` support
- A Together API key with available funds (see Troubleshooting)

### Setup

1. Create and activate a virtual environment, then install dependencies:

   ```bash
   python -m venv venv
   source venv/bin/activate
   python -m pip install -r requirements.txt
   ```
2. Run inference on a set of images:

   ```bash
   python main.py --input examples/ --output output/
   ```

   **Arguments**
   - `--input` (str): Directory containing the images to caption.
   - `--output` (str): Directory where images and captions are saved (defaults to the input directory).
   - `--batch_images` (flag): Caption images in batches, grouped by category.
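The documented flags can be sketched with `argparse`; the parser below is a minimal illustration of the interface described above, not the project's actual `main.py`:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Minimal sketch of the documented CLI; the real main.py may differ."""
    parser = argparse.ArgumentParser(description="Caption a directory of images.")
    parser.add_argument("--input", required=True,
                        help="Directory containing images to caption.")
    parser.add_argument("--output", default=None,
                        help="Output directory (defaults to --input).")
    parser.add_argument("--batch_images", action="store_true",
                        help="Caption images in batches by category.")
    return parser

# --output is omitted, so it falls back to the input directory
args = build_parser().parse_args(["--input", "examples/", "--batch_images"])
output_dir = args.output or args.input
```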

## Gradio Web Interface

Launch a user-friendly web interface for captioning and prompt optimization:

```bash
python demo.py
```

## Notes

- In standard mode, images are processed individually.
- For large collections, batch processing by category is recommended.
- Each caption is saved as a `.txt` file with the same name as the image.
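The caption-naming convention above can be illustrated with a small helper; `caption_path` is a hypothetical name for this sketch, not part of the project's API:

```python
from pathlib import Path

def caption_path(image_path: str) -> Path:
    """Return the sibling .txt file that would hold an image's caption."""
    return Path(image_path).with_suffix(".txt")

# e.g. examples/portrait.jpg -> examples/portrait.txt
caption_file = caption_path("examples/portrait.jpg")
```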

## Troubleshooting

- **API errors**: Ensure your Together API key is set and the account has funds.
- **Image formats**: Only `.png`, `.jpg`, `.jpeg`, and `.webp` files are supported.
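A quick way to pre-filter a directory against the supported formats; `is_supported` is an illustrative helper written for this README, not the project's code:

```python
from pathlib import Path

# The four formats the captioner accepts, per the Troubleshooting note above
SUPPORTED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".webp"}

def is_supported(filename: str) -> bool:
    """True if the file's extension (case-insensitive) is supported."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS

files = ["dog.png", "cat.JPG", "notes.txt", "photo.webp"]
supported = [f for f in files if is_supported(f)]  # skips notes.txt
```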

## Examples

| Original Image | Generated Caption |
| --- | --- |
| Landscape | "tr1gger photorealistic, mountain landscape, hiking gear and outdoor clothing, standing at viewpoint, awestruck, snow-capped peaks and valley, golden sunset light, wide-angle view" |
| City | "tr1gger photorealistic, urban architecture, business suit with briefcase, walking on sidewalk, focused expression, downtown skyscrapers, evening city lights, street-level view" |
| Food | "tr1gger photorealistic, culinary presentation, chef's uniform and hat, plating dish, concentrated, modern restaurant kitchen, soft studio lighting, overhead camera angle" |
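The example captions share a consistent shape: a trigger token followed by comma-separated attribute phrases (style, subject, clothing, action, expression, setting, lighting, camera angle). The helper below only illustrates that pattern; it is not the generator the project uses:

```python
def build_caption(trigger: str, *attributes: str) -> str:
    """Join a trigger token and attribute phrases in the example captions' format."""
    return f"{trigger} " + ", ".join(attributes)

caption = build_caption(
    "tr1gger",
    "photorealistic",
    "mountain landscape",
    "golden sunset light",
    "wide-angle view",
)
```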

## License

MIT License