## Uploaded model
- Developed by: MMoshtaghi
- License: apache-2.0
- Finetuned from model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
- Finetuned on dataset: unsloth/Radiology_mini
- PEFT method: QLoRA (Quantized LoRA)
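
For reference, the sketch below shows how a QLoRA adapter is typically attached to this base model with Unsloth before training. The hyperparameters (`r`, `lora_alpha`) and the layer selection are illustrative assumptions, not the confirmed recipe for this checkpoint.

```python
from unsloth import FastVisionModel

# Load the 4-bit quantized base model; its weights stay frozen under QLoRA.
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit",
    load_in_4bit = True,
)

# Attach trainable LoRA adapters on top of the frozen 4-bit weights.
# r and lora_alpha below are common defaults, assumed for illustration.
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers   = True,
    finetune_language_layers = True,
    r = 16,
    lora_alpha = 16,
)
```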
## Quick start
```python
from datasets import load_dataset
from transformers import TextStreamer
from unsloth import FastVisionModel

# Load the fine-tuned model in 4-bit for memory-efficient inference.
model, tokenizer = FastVisionModel.from_pretrained(
    model_name = "MMoshtaghi/Llama-3.2-11B-Vision-LoRAAdpt-Radiology",
    load_in_4bit = True,
)
FastVisionModel.for_inference(model)  # Enable for inference!

# Take a sample radiograph from the dataset the adapter was tuned on.
dataset = load_dataset("unsloth/Radiology_mini", split = "train")
image = dataset[0]["image"]

instruction = "You are an expert radiographer. Describe accurately what you see in this image."
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction},
    ]},
]

# Render the chat template, then tokenize the image and text together.
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens = False,
    return_tensors = "pt",
).to("cuda")

# Stream the generated description token by token.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
```
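
If you prefer the full response as a single string rather than streamed tokens, you can slice off the prompt and decode the rest (same sampling settings as above; `skip_special_tokens` is assumed here for cleaner output):

```python
# Generate without streaming; model.generate returns prompt + completion,
# so strip the prompt tokens before decoding.
output_ids = model.generate(**inputs, max_new_tokens = 128,
                            use_cache = True, temperature = 1.5, min_p = 0.1)
generated = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens = True))
```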
## Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
- Unsloth: 2025.1.5
## Citations

This vision-language model was trained 2x faster with Unsloth and Hugging Face's TRL library.