We introduce InternVL3, an advanced multimodal large language model (MLLM) series.
Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefitting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.

## InternVL3 Family

| Model Name | Vision Part | Language Part | HF Link |
| :--------: | :---------: | :-----------: | :-----: |
| InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
| InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |
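
Each checkpoint above can be pulled directly from the Hugging Face Hub. The snippet below is a minimal loading sketch, assuming the `transformers` remote-code path used by earlier InternVL releases; the authoritative usage lives in the Quick Start section further down.

```python
# Minimal loading sketch. Assumption: the InternVL3 repos ship custom modeling
# code that is loaded via trust_remote_code, as in previous InternVL releases.
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL3-38B"  # any HF repo from the table above

model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,   # bf16 keeps the 38B/78B checkpoints within GPU memory budgets
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```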

## Model Architecture
### Multimodal Reasoning and Mathematics

### OCR, Chart, and Document Understanding

### Multi-Image & Real-World Comprehension

### Comprehensive Multimodal & Hallucination Evaluation

### Visual Grounding

### Multimodal Multilingual Understanding

### Video Understanding

### GUI Grounding

### Spatial Reasoning

## Evaluation on Language Capability
Benefitting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
Please note that the evaluation scores of the Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.

## Ablation Study

### Native Multimodal Pre-Training

The evaluation results in the figure below show that the model with native multimodal pre-training exhibits performance on most benchmarks comparable to the fully multi-stage-trained InternVL2-8B baseline. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across the evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.

### Mixed Preference Optimization
As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.
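
For reference, MPO optimizes a mixture of objectives rather than a single preference loss. Schematically (a simplified rendering of the published MPO formulation; the mixing weights \\( w_p \\), \\( w_q \\), and \\( w_g \\) are hyperparameters, not values reported on this card):

$$
\mathcal{L}_{\text{MPO}} = w_{p}\,\mathcal{L}_{\text{preference}} + w_{q}\,\mathcal{L}_{\text{quality}} + w_{g}\,\mathcal{L}_{\text{generation}},
$$

where \\( \mathcal{L}_{\text{preference}} \\) is a DPO-style pairwise term over chosen and rejected responses, \\( \mathcal{L}_{\text{quality}} \\) scores the absolute quality of individual responses, and \\( \mathcal{L}_{\text{generation}} \\) is the standard language-modeling loss on the preferred responses.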

### Variable Visual Position Encoding
As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies—by varying the positional increment \\( \delta \\)—reveal that even for tasks primarily involving conventional contexts, relatively small \\( \delta \\) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.
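
To make the role of \\( \delta \\) concrete, the toy sketch below illustrates the core idea only (it is not the released implementation): text tokens advance the position index by 1, while each visual token advances it by a smaller increment \\( \delta \\), so long visual contexts consume far less of the positional range.

```python
# Toy illustration of variable visual position encoding (V2PE): visual tokens
# advance the position counter by a fractional increment delta instead of 1.
from typing import List, Sequence


def v2pe_position_ids(is_visual: Sequence[bool], delta: float = 0.25) -> List[float]:
    """Return (possibly fractional) position ids for a mixed text/visual sequence."""
    positions: List[float] = []
    pos = 0.0
    for visual in is_visual:
        positions.append(pos)
        pos += delta if visual else 1.0
    return positions


# Example: 2 text tokens, 4 visual tokens, 2 text tokens.
print(v2pe_position_ids([False, False, True, True, True, True, False, False]))
# -> [0.0, 1.0, 2.0, 2.25, 2.5, 2.75, 3.0, 4.0]
```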

## Quick Start