---
license: llama3.2
base_model:
- meta-llama/Llama-3.2-11B-Vision-Instruct
language:
- en
- ko
tags:
- vlm-ko
- meta
- llama-3.2
- llama-3.2-ko
datasets:
- maum-ai/General-Evol-VQA
---
<p align="left">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/646484cfb90150b2706df03b/BEOyMpnnY9VY2KXlc3V2F.png" width="20%"/>
</p>

# Llama-3.2-MAAL-11B-Vision-v0.1

We are releasing a [model](https://huggingface.co/maum-ai/Llama-3.2-MAAL-11B-Vision-v0.1), a subset of the [training dataset](https://huggingface.co/datasets/maum-ai/General-Evol-VQA), and a [leaderboard](https://huggingface.co/spaces/maum-ai/KOFFVQA-Leaderboard) to promote and accelerate the development of Korean Vision-Language Models (VLMs).

- **Developed by:** [maum.ai Brain NLP](https://maum-ai.github.io). Jaeyoon Jung, Yoonshik Kim, Yekyung Nah
- **Language(s) (NLP):** Korean, English (currently bilingual)

## Model Description

Version 0.1 is fine-tuned on English and Korean VQA datasets, along with other datasets (OCR, math, etc.).

- We trained this model on 8 H100-80GB GPUs for 2 days with an image-text-pair multimodal fine-tuning dataset.
- [maum-ai/General-Evol-VQA](https://huggingface.co/datasets/maum-ai/General-Evol-VQA) is one of the datasets we used for fine-tuning (a quick loading sketch follows below).
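
If you want to inspect the released training data, the snippet below is a minimal sketch using the standard `datasets` library; the `train` split name and record layout are assumptions, so check the dataset card if they differ.

```python
# Minimal sketch: load the public fine-tuning data for inspection.
# Assumption: a "train" split exists -- verify on the dataset card.
from datasets import load_dataset

ds = load_dataset("maum-ai/General-Evol-VQA", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one image/question/answer record
```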

## Sample Inference Code (GPU)

Starting with transformers >= 4.45.0, you can run inference to generate text based on an image and a starting prompt you supply.

```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "maum-ai/Llama-3.2-MAAL-11B-Vision-v0.1"

# Load the model in bfloat16 and shard it across available GPUs
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Fetch a sample image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# A Korean prompt: "Write a poem about this image"
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "이 이미지에 대해서 시를 써줘"}
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0]))
```
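
If the bfloat16 weights do not fit in your GPU memory, 4-bit quantized loading is an option. The sketch below is not part of the official example: it assumes `bitsandbytes` is installed and reuses the same checkpoint, and quantization may cost some output quality.

```python
# Optional sketch: load the checkpoint in 4-bit to cut GPU memory use.
# Assumption: bitsandbytes is installed (pip install bitsandbytes).
import torch
from transformers import BitsAndBytesConfig, MllamaForConditionalGeneration

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = MllamaForConditionalGeneration.from_pretrained(
    "maum-ai/Llama-3.2-MAAL-11B-Vision-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
# The processor and generation calls from the example above work unchanged.
```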

## Evaluation Results

As the main goal of version 0.1 is **leveraging Korean VQA and OCR capabilities tailored to real-world business use cases**, we select [**KOFFVQA**](https://huggingface.co/spaces/maum-ai/KOFFVQA-Leaderboard) as our evaluation method to assess Korean instruction-following skills.

|Model|Params (B)|Average (↑)|
|-|-|-|
|NCSOFT/VARCO-VISION-14B|15.2|66.69|
|Qwen/Qwen2-VL-7B-Instruct|8.3|63.53|
|**maum-ai/Llama-3.2-MAAL-11B-Vision-v0.1**|10.7|61.13|
|meta-llama/Llama-3.2-11B-Vision-Instruct|10.7|50.36|
|mistralai/Pixtral-12B-2409|12.7|44.62|
|llava-onevision-qwen2-7b-ov|8.0|43.78|
|InternVL2-8b|8.1|32.76|
|MiniCPM-V-2_6|8.1|32.69|

Our model achieves roughly a 21% relative improvement over its base model (61.13 vs. 50.36 for meta-llama/Llama-3.2-11B-Vision-Instruct).
You can check more results in [this leaderboard](https://huggingface.co/spaces/maum-ai/KOFFVQA-Leaderboard).

### We will release an enhanced model, v0.2, soon