---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- question-answering
pretty_name: Mantis-Eval
dataset_info:
- config_name: mantis_eval
  features:
  - name: id
    dtype: string
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: images
    sequence: image
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: data_source
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: test
    num_bytes: 479770102
    num_examples: 217
  download_size: 473031413
  dataset_size: 479770102
configs:
- config_name: mantis_eval
  data_files:
  - split: test
    path: mantis_eval/test-*
---

## Overview
This is a newly curated dataset for evaluating multimodal language models' ability to reason over multiple images. More details are available at https://tiger-ai-lab.github.io/Mantis/.

### Statistics
This evaluation dataset contains 217 challenging, human-annotated multi-image reasoning problems.
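
As a quick start, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch: the repository id `TIGER-Lab/Mantis-Eval` is assumed from the project page, while the `mantis_eval` config name, `test` split, and field names come from the card's metadata above.

```python
from datasets import load_dataset

# Repository id is assumed; config name and split are taken from the card metadata.
dataset = load_dataset("TIGER-Lab/Mantis-Eval", "mantis_eval", split="test")
print(len(dataset))  # 217 examples

example = dataset[0]
print(example["id"], example["question_type"], example["category"])
print(example["question"])       # question text, may reference multiple images
print(len(example["images"]))    # list of PIL images attached to this problem
print(example["options"])        # candidate answers for multiple-choice items
print(example["answer"])         # ground-truth answer
```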

### Leaderboard
We list the current results as follows:

| Models            | Size | Mantis-Eval (%) |
|:------------------|:-----|:----------------|
| LLaVA OneVision   | 72B  | 77.60       |
| LLaVA OneVision   | 7B   | 64.20       |
| GPT-4V            | -    | 62.67       |
| Mantis-SigLIP     | 8B   | 59.45       |
| Mantis-Idefics2   | 8B   | 57.14       |
| Mantis-CLIP       | 8B   | 55.76       |
| VILA              | 8B   | 51.15       |
| BLIP-2            | 13B  | 49.77       |
| Idefics2          | 8B   | 48.85       |
| InstructBLIP      | 13B  | 45.62       |
| LLaVA-V1.6        | 7B   | 45.62       |
| CogVLM            | 17B  | 45.16       |
| LLaVA OneVision   | 0.5B | 39.60       |
| Qwen-VL-Chat      | 7B   | 39.17       |
| Emu2-Chat         | 37B  | 37.79       |
| VideoLLaVA        | 7B   | 35.04       |
| Mantis-Flamingo   | 9B   | 32.72       |
| LLaVA-v1.5        | 7B   | 31.34       |
| Kosmos2           | 1.6B | 30.41       |
| Idefics1          | 9B   | 28.11       |
| Fuyu              | 8B   | 27.19       |
| Otter-Image       | 9B   | 14.29       |
| OpenFlamingo      | 9B   | 12.44       |

### Citation
If you use this dataset, please cite our work:
```bibtex
@article{Jiang2024MANTISIM,
  title={MANTIS: Interleaved Multi-Image Instruction Tuning},
  author={Dongfu Jiang and Xuan He and Huaye Zeng and Cong Wei and Max W.F. Ku and Qian Liu and Wenhu Chen},
  journal={Transactions on Machine Learning Research},
  year={2024},
  volume={2024},
  url={https://openreview.net/forum?id=skLtdUVaJa}
}
```