Upload folder using huggingface_hub
README.md CHANGED
@@ -27,23 +27,23 @@ InternVL 2.0 is a multimodal large language model series, featuring models of va
 | :--------------------------: | :-------------: | :------------: | :----------------: | :-----------: |
 | Model Size | - | - | 25.5B | 25.5B |
 | | | | | |
-| DocVQA<sub>test</sub> | 87.2 | 86.5 | 90.9 |
-| ChartQA<sub>test</sub> | 78.1 | 81.3 | 83.8 |
-| InfoVQA<sub>test</sub> | - | 72.7 | 72.5 |
-| TextVQA<sub>val</sub> | - | 73.5 | 80.6 |
-| OCRBench | 678 | 754 | 724 |
-| MME<sub>sum</sub> | 2070.2 |
-| RealWorldQA | 68.0 | 67.5 | 66.0 |
-| AI2D<sub>test</sub> | 89.4 | 80.3 | 80.7 |
-| MMMU<sub>val</sub> | 63.1 | 58.5 | 45.2 |
-| MMBench-EN<sub>test</sub> | 81.0 | 73.9 | 82.2 |
-| MMBench-CN<sub>test</sub> | 80.2 | 73.8 | 82.0 |
-| CCBench<sub>dev</sub> | 57.3 | 28.4 | 69.8 |
-| MMVet<sub>GPT-4-0613</sub> | - | - | 62.8 |
-| MMVet<sub>GPT-4-Turbo</sub> | 67.5 | 64.0 | 55.4 |
-| SEED-Image | - | - | 76.0 |
-| HallBench<sub>avg</sub> | 43.9 | 45.6 | 49.3 |
-| MathVista<sub>testmini</sub> | 58.1 | 57.7 | 53.5 |
+| DocVQA<sub>test</sub> | 87.2 | 86.5 | 90.9 | 92.9 |
+| ChartQA<sub>test</sub> | 78.1 | 81.3 | 83.8 | 84.9 |
+| InfoVQA<sub>test</sub> | - | 72.7 | 72.5 | 75.9 |
+| TextVQA<sub>val</sub> | - | 73.5 | 80.6 | 82.3 |
+| OCRBench | 678 | 754 | 724 | 825 |
+| MME<sub>sum</sub> | 2070.2 | 2110.6 | 2187.8 | 2260.7 |
+| RealWorldQA | 68.0 | 67.5 | 66.0 | 68.3 |
+| AI2D<sub>test</sub> | 89.4 | 80.3 | 80.7 | 84.5 |
+| MMMU<sub>val</sub> | 63.1 | 58.5 | 45.2 | 48.3 |
+| MMBench-EN<sub>test</sub> | 81.0 | 73.9 | 82.2 | 83.4 |
+| MMBench-CN<sub>test</sub> | 80.2 | 73.8 | 82.0 | 82.0 |
+| CCBench<sub>dev</sub> | 57.3 | 28.4 | 69.8 | 73.5 |
+| MMVet<sub>GPT-4-0613</sub> | - | - | 62.8 | 64.2 |
+| MMVet<sub>GPT-4-Turbo</sub> | 67.5 | 64.0 | 55.4 | 62.1 |
+| SEED-Image | - | - | 76.0 | 76.8 |
+| HallBench<sub>avg</sub> | 43.9 | 45.6 | 49.3 | 50.7 |
+| MathVista<sub>testmini</sub> | 58.1 | 57.7 | 53.5 | 59.4 |
 
 - We simultaneously use InternVL and VLMEvalKit repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository. MMMU, OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using the VLMEvalKit.
 
@@ -263,26 +263,27 @@ InternVL 2.0 是一个多模态大语言模型系列,包含各种规模的模
 
 ## 性能测试
 
-| 评测数据集 |
-| :--------------------------: |
-| 模型大小 |
-| |
-| DocVQA<sub>test</sub> |
-| ChartQA<sub>test</sub> |
-| InfoVQA<sub>test</sub> |
-| TextVQA<sub>val</sub> |
-| OCRBench |
-| MME<sub>sum</sub> |
-| RealWorldQA |
-| AI2D<sub>test</sub> |
-| MMMU<sub>val</sub> |
-| MMBench-EN<sub>test</sub> |
-| MMBench-CN<sub>test</sub> |
-| CCBench<sub>dev</sub> |
-| MMVet<sub>GPT-4-0613</sub> |
-|
-|
-|
+| 评测数据集 | GPT-4T-20240409 | Gemini-1.5-Pro | InternVL-Chat-V1-5 | InternVL2-26B |
+| :--------------------------: | :-------------: | :------------: | :----------------: | :-----------: |
+| 模型大小 | - | - | 25.5B | 25.5B |
+| | | | | |
+| DocVQA<sub>test</sub> | 87.2 | 86.5 | 90.9 | 92.9 |
+| ChartQA<sub>test</sub> | 78.1 | 81.3 | 83.8 | 84.9 |
+| InfoVQA<sub>test</sub> | - | 72.7 | 72.5 | 75.9 |
+| TextVQA<sub>val</sub> | - | 73.5 | 80.6 | 82.3 |
+| OCRBench | 678 | 754 | 724 | 825 |
+| MME<sub>sum</sub> | 2070.2 | 2110.6 | 2187.8 | 2260.7 |
+| RealWorldQA | 68.0 | 67.5 | 66.0 | 68.3 |
+| AI2D<sub>test</sub> | 89.4 | 80.3 | 80.7 | 84.5 |
+| MMMU<sub>val</sub> | 63.1 | 58.5 | 45.2 | 48.3 |
+| MMBench-EN<sub>test</sub> | 81.0 | 73.9 | 82.2 | 83.4 |
+| MMBench-CN<sub>test</sub> | 80.2 | 73.8 | 82.0 | 82.0 |
+| CCBench<sub>dev</sub> | 57.3 | 28.4 | 69.8 | 73.5 |
+| MMVet<sub>GPT-4-0613</sub> | - | - | 62.8 | 64.2 |
+| MMVet<sub>GPT-4-Turbo</sub> | 67.5 | 64.0 | 55.4 | 62.1 |
+| SEED-Image | - | - | 76.0 | 76.8 |
+| HallBench<sub>avg</sub> | 43.9 | 45.6 | 49.3 | 50.7 |
+| MathVista<sub>testmini</sub> | 58.1 | 57.7 | 53.5 | 59.4 |
 
 - 我们同时使用 InternVL 和 VLMEvalKit 仓库进行模型评估。具体来说,DocVQA、ChartQA、InfoVQA、TextVQA、MME、AI2D、MMBench、CCBench、MMVet 和 SEED-Image 的结果是使用 InternVL 仓库测试的。MMMU、OCRBench、RealWorldQA、HallBench 和 MathVista 是使用 VLMEvalKit 进行评估的。
 
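As a companion to the evaluation note in both hunks, below is a minimal sketch of how the VLMEvalKit-scored rows (MMMU, OCRBench, RealWorldQA, HallBench, MathVista) could be reproduced. It drives VLMEvalKit's `run.py` entry point from Python; the model key `InternVL2-26B` and the dataset identifiers are assumptions to verify against VLMEvalKit's model and dataset registries, not names confirmed by this diff.

```python
# Hedged sketch: batch-scoring the VLMEvalKit-evaluated benchmarks for one model.
# Assumptions (verify against the VLMEvalKit README / vlmeval/config.py):
#   - VLMEvalKit is cloned, its dependencies installed, and run.py sits at repo root.
#   - "InternVL2-26B" is the model key registered in VLMEvalKit (assumed).
#   - The dataset keys below are VLMEvalKit's names for MMMU, OCRBench,
#     RealWorldQA, HallusionBench, and MathVista (assumed).
import subprocess

MODEL = "InternVL2-26B"  # assumed registry key
BENCHMARKS = [
    "MMMU_DEV_VAL",
    "OCRBench",
    "RealWorldQA",
    "HallusionBench",
    "MathVista_MINI",
]

for data in BENCHMARKS:
    # One invocation per (model, dataset) pair; run.py writes scored results
    # to its working directory.
    subprocess.run(
        ["python", "run.py", "--data", data, "--model", MODEL, "--verbose"],
        check=True,
    )
```

The sketch shells out rather than importing VLMEvalKit directly so it stays decoupled from internal APIs, which may change between releases.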