id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
2309.07915#14 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Table 1: Evaluation results on the MME. Top two scores are highlighted and underlined, respectively. reference. We then have annotators scrutinize every dataset's samples and provide task instructions. This practice aids in gaining a comprehensive understanding of the task and helps craft high-quality templates. Next, we employ ChatGPT† to rewrite the instructions so that they accurately describe the key characteristics of each task. After ChatGPT generates the instructions, we manually review them to guarantee their high quality. We select ten suitable templates as candidates, then merge the original dataset's input into a randomly chosen template. We assemble demonstrations for each instance by selecting a small amount of data from the dataset and arranging it sequentially. These demonstrations are integrated with the input instance to generate multi-modal contextual data‡. | 2309.07915#13 | 2309.07915#15 | 2309.07915 | [
"2305.15023"
] |
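The template-filling and demonstration-packing procedure described in this chunk can be sketched as follows. This is a minimal illustration under assumed field names (`image`, `input`, `target`) and made-up template strings in the style of Tables 9-11; it is not the authors' released data-construction code.

```python
import random

# Hypothetical instruction templates in the style of Tables 9-11;
# "[IMG0] {image}" marks where the image's visual embedding is spliced into the prompt.
TEMPLATES = [
    "image 0 is [IMG0] {image}. Based on the image 0, give a caption about this image.",
    "Carefully analyze image 0: [IMG0] {image} to generate a concise and accurate description.",
]

def fill_template(template: str, text_input: str) -> str:
    """Merge one dataset instance's input into an instruction template."""
    return f"{template.format(image='<visual_tokens>')} {text_input}".strip()

def build_icl_instance(instances: list, num_demos: int = 2) -> dict:
    """Pack a few demonstrations plus a query into one interleaved multi-modal example."""
    template = random.choice(TEMPLATES)
    demos, query = instances[:num_demos], instances[num_demos]
    images, context = [], []
    for ex in demos:                      # demonstrations keep their answers
        images.append(ex["image"])
        context.append(f'{fill_template(template, ex["input"])} {ex["target"]}')
    images.append(query["image"])         # the query's answer is left for the model to predict
    context.append(fill_template(template, query["input"]))
    return {"images": images, "prompt": "\n".join(context), "target": query["target"]}
```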
2309.07915#15 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | We construct multi-image data by extracting eight frames per video from the MSRVTT (Xu et al., 2016) and MSRVTTQA (Xu et al., 2016) datasets. We also crop images from the VCR (Zellers et al., 2019) dataset using object bounding boxes to produce intertwined multi-modal data with closely related images. We convert all data into a vision-language Q&A format to create high-quality multi-modal training data and accumulate 5.8M samples in the MIC dataset. Due to resource constraints, we use approximately 10% of MIC with the sampling strategy described in Appendix E to finetune MMICL. It is anticipated that a larger model trained on all of our data would yield a more promising result. 2.4 TRAINING PARADIGM Stage I: Pretraining. This stage aims to assist the model in aligning the image and text embeddings. During this stage, both the vision encoder and the LLM remain frozen. The VPG (i.e., Q-Former) and projection layer are trained to learn visual embeddings that can be interpreted by the LLM. Stage II: Multi-Modal In-Context Tuning. In this stage, we aim to address the aforementioned limitations and take our model a step further by extending it to multi-modal in-context learning. Specifically, we aim to make the model understand the intricate referential relationships between the text and images and the complex relationships among multiple images, and ultimately acquire a proficient multi-modal in-context learning ability. Therefore, we perform multi-modal in-context tuning on the MIC dataset. During Stage II, we freeze the image encoder, Q-former, and LLM while jointly training the projection layer and the query and value vectors. 3 EXPERIMENT 3.1 EXPERIMENTAL SETUP Evaluation Setup. We aim to develop general-purpose VLMs that can adapt to diverse, challenging multi-modal prompts. Therefore, we evaluate our models on several vision-language benchmarks, including tasks that involve images and videos. The metrics used in these benchmarks and further details are shown in Appendix L. | 2309.07915#14 | 2309.07915#16 | 2309.07915 | [
"2305.15023"
] |
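A minimal PyTorch-style sketch of the freezing scheme described for the two stages. The module names (`vision_encoder`, `qformer`, `projection`, `llm`) and the way query/value projections are located by name are assumptions for illustration, not the released training code.

```python
import torch.nn as nn

def set_trainable(module: nn.Module, trainable: bool) -> None:
    for p in module.parameters():
        p.requires_grad = trainable

def configure_stage(model, stage: int) -> None:
    """Stage I: train Q-Former + projection; Stage II: train projection + attention q/v weights."""
    # Frozen in both stages
    set_trainable(model.vision_encoder, False)
    set_trainable(model.llm, False)
    if stage == 1:
        set_trainable(model.qformer, True)       # VPG learns LLM-readable visual embeddings
        set_trainable(model.projection, True)
    else:
        set_trainable(model.qformer, False)      # Q-Former stays frozen in Stage II
        set_trainable(model.projection, True)
        # Unfreeze only the query/value projections inside the LLM's attention layers;
        # the substrings below assume FLAN-T5-style parameter names and may differ per backbone.
        for name, param in model.llm.named_parameters():
            if ".q." in name or ".v." in name:
                param.requires_grad = True
```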
2309.07915#16 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | †We use the gpt-3.5-turbo version of ChatGPT. ‡Except for the video datasets, the VCR dataset, and the LLaVA dataset. More detail can be found in Appendix B. [Figure 5 shows a Winoground example in which the captions "Some plants surrounding a lightbulb" and "A lightbulb surrounding some plants" must be matched to two images by scoring P(Yes|Q), and a RAVEN panel with its candidate answer choices.] Figure 5: Illustration of two complex vision-language reasoning tasks: Winoground (Thrush et al., 2022b) (Left) and RAVEN (Zhang et al., 2019) (Right). Models and Baselines. We provide two versions of MMICL: (1) MMICL (FLAN-T5), which uses BLIP-2 (Li et al., 2023d) as the backbone, and (2) MMICL (Instruct-FLAN-T5), which uses InstructBLIP (Dai et al., 2023) as the backbone. We adopt the XL and XXL sizes of the FLAN-T5 (Chung et al., 2022) model for both versions. We compare MMICL with the following strong baselines: Flamingo (Alayrac et al., 2022), KOSMOS-1 (Huang et al., 2023a), BLIP-2-FLAN-T5, InstructBLIP-FLAN-T5, Shikra (Chen et al., 2023), Otter (Li et al., 2023a), and Ying-VLM (Li et al., 2023e). The details of MMICL and the baselines are shown in Appendix G and Appendix M. 3.2 GENERAL PERFORMANCE EVALUATIONS | 2309.07915#15 | 2309.07915#17 | 2309.07915 | [
"2305.15023"
] |
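The Figure 5 setup suggests scoring each caption-image pair by the probability of answering "Yes". Under that assumption, the standard Winoground text, image, and group metrics reported in Table 2 can be computed as below; `match_prob` is a hypothetical scoring function, not the authors' evaluation code.

```python
def winoground_scores(examples, match_prob):
    """Compute Winoground text/image/group scores from pairwise match probabilities."""
    text = image = group = 0
    for ex in examples:  # each example has images 0/1 and captions 0/1
        s = {(i, c): match_prob(ex[f"image_{i}"], ex[f"caption_{c}"]) for i in (0, 1) for c in (0, 1)}
        # Text score: for each image, the correct caption must score higher than the wrong one
        t_ok = s[(0, 0)] > s[(0, 1)] and s[(1, 1)] > s[(1, 0)]
        # Image score: for each caption, the correct image must score higher than the wrong one
        i_ok = s[(0, 0)] > s[(1, 0)] and s[(1, 1)] > s[(0, 1)]
        text += t_ok
        image += i_ok
        group += t_ok and i_ok
    n = len(examples)
    return {"text": text / n, "image": image / n, "group": group / n}
```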
2309.07915#17 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | We evaluate the general performance of MMICL on both the MME (Fu et al., 2023) and MMBench (Liu et al., 2023c) benchmarks§. MME evaluates VLMs with 14 sub-tasks that encompass cognition and perception abilities. Results in Table 1 show that MMICL achieves the best average scores compared with current VLMs on cognition and perception tasks. MMICL also demonstrates outstanding performance and significantly surpasses other VLMs on the MMBench benchmark, which thoroughly evaluates the diverse skills of VLMs. The detailed results are presented in Table 21. See Appendix H and I for MMICL's evaluation details and comparisons with other VLMs. 3.3 PERFORMANCE PROBING 3.3.1 UNDERSTANDING TEXT-TO-IMAGE REFERENCE Table 2: Results on Winoground across text, image, and group score metrics (Model: Text / Image / Group): MTurk Human 89.50 / 88.50 / 85.50; Random Chance 25.00 / 25.00 / 16.67; CLIP-based model: VQ2 (Yarom et al., 2023) 47.00 / 42.20 / 30.50; Vision-language models: PALI (Chen et al., 2022) 46.50 / 38.00 / 28.75; BLIP-2 (Li et al., 2023d) 44.00 / 26.00 / 23.50; MMICL (FLAN-T5-XXL) 45.00 / 44.99 / 43.00. Winoground (Thrush et al., 2022b) proposes a task of correctly matching two given images and two captions, as depicted in the left of Fig. 5. The challenge lies in the fact that both captions consist of the exact same words, albeit in a different order. VLMs must compare both images and texts to discern their subtle differences and capture the implicit reference between them. Therefore, we select Winoground to evaluate whether VLMs understand the text-to-image reference. Results in Table 2 demonstrate that MMICL captures the referential relationship between image and text, surpassing previous baselines. 3.3.2 UNDERSTANDING COMPLEX IMAGE-TO-IMAGE RELATIONSHIP | 2309.07915#16 | 2309.07915#18 | 2309.07915 | [
"2305.15023"
] |
2309.07915#18 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | The RAVEN (Zhang et al., 2019; Huang et al., 2023a) test is widely used to evaluate the nonverbal reasoning ability of VLMs. It requires visual and logical skills to understand the relationships among images. §All the reported performance for the baseline methods is from the leaderboards of MME (Fu et al., 2023) and MMBench (Liu et al., 2023c). We report the results of MMICL with the FLAN-T5-XXL backbone. | 2309.07915#17 | 2309.07915#19 | 2309.07915 | [
"2305.15023"
] |
2309.07915#19 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 7 # Preprint Model Flickr 30K WebSRC VQAv2 Hateful Memes VizWiz Flamingo-3B (Alayrac et al., 2022) (Zero-Shot) Flamingo-3B (Alayrac et al., 2022) (4-Shot) Flamingo-9B (Alayrac et al., 2022) (Zero-Shot) Flamingo-9B (Alayrac et al., 2022) (4-Shot) 60.60 72.00 61.50 72.60 - - - - 49.20 53.20 51.80 56.30 53.70 53.60 57.00 62.70 28.90 34.00 28.80 34.90 KOSMOS-1 (Huang et al., 2023b) (Zero-Shot) KOSMOS-1 (Huang et al., 2023b) (4-Shot) 67.10 75.30 3.80 - 51.00 51.80 63.90 - 29.20 35.30 Zero-Shot Evaluation BLIP-2 (Li et al., 2023d) (FLANT5-XL) BLIP-2 (Li et al., 2023d) (FLANT5-XXL) 64.51 60.74 12.25 10.10 58.79 60.91 60.00 62.25 25.52 22.50 InstructBLIP (Dai et al., 2023) (FLANT5-XL) InstructBLIP (Dai et al., 2023) (FLANT5-XXL) 77.16 73.13 10.80 11.50 36.77 63.69 58.54 61.70 32.08 15.11 Zero-Shot Evaluation MMICL (FLAN-T5-XL) MMICL (FLAN-T5-XXL) MMICL (Instruct-FLAN-T5-XL) MMICL (Instruct-FLAN-T5-XXL) 60.56 78.64 78.89 44.29 12.55 18.85 14.75 17.05 62.17 69.99 69.13 70.30 60.28 60.32 61.12 62.23 25.04 29.34 29.92 24.45 Few-Shot (4-Shot) Evaluation MMICL (FLAN-T5-XL) MMICL (FLAN-T5-XXL) MMICL (Instruct-FLAN-T5-XL) MMICL (Instruct-FLAN-T5-XXL) 71.95 75.37 74.27 72.04 12.30 18.70 14.80 19.65 62.63 69.83 69.16 70.56 60.80 61.12 61.12 64.60 50.17 33.16 33.16 50.28 | 2309.07915#18 | 2309.07915#20 | 2309.07915 | [
"2305.15023"
] |
2309.07915#20 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Table 4: Main results on the multi-modal in-context learning ability of MMICL across vision-language tasks. All evaluation metrics used in the evaluation are introduced in Table 24. Table 3: Zero-shot generalization on the Raven IQ test (Model: Accuracy): Random Choice 17%; KOSMOS-1 (Huang et al., 2023a) 22%; MMICL (FLAN-T5-XXL) 34%. We conduct zero-shot experiments on the Raven test to evaluate the VLM's ability to understand image-to-image relationships. Each instance has 3 or 8 images as inputs and 6 candidate images with a unique answer, and the goal is to predict the right image, as shown in the right of Fig. 5. The result in Table 3 shows that MMICL achieves a 12-point improvement compared to KOSMOS-1. It indicates that MMICL is able to capture the complex image-to-image relationships and conduct nonverbal visual reasoning tasks. 3.4 LEARNING FROM IN-CONTEXT MULTI-MODAL DEMONSTRATIONS As shown in Table 4, we evaluate the multi-modal in-context learning ability of MMICL across various vision-language tasks. MMICL outperforms other VLMs on both the held-in and held-out datasets and achieves state-of-the-art few-shot performance. For example, the few-shot (4-shot) evaluation of MMICL on the VizWiz benchmark outperforms the baselines Flamingo-9B (Alayrac et al., 2022) and KOSMOS-1 (Huang et al., 2023b) by 15.38 and 14.98 points, respectively. Since VizWiz is never exposed in the training data, this superiority suggests the ability of MMICL to generalize to new tasks with a few exemplars. The few-shot performance on Flickr30K decreases as examples are given because the caption examples may introduce noise for the VLM (i.e., in-context exemplars generally do not provide hints for models performing image captioning tasks). 3.5 HALLUCINATION AND LANGUAGE BIAS OF VLMS Current VLMs exhibit significant visual hallucinations (Li et al., 2023f), preventing them from benefiting from multi-modal ICL. | 2309.07915#19 | 2309.07915#21 | 2309.07915 | [
"2305.15023"
] |
2309.07915#21 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Especially when dealing with complex prompts with multiple images (e.g., multi-modal chain of thoughts (Zhang et al., 2023b)), VLMs often overlook visual content when facing extensive text. This language bias reduces their efficiency in answering questions that require both images and text. ScienceQA-IMG (Lu et al., 2022) is a challenging task that requires a model to use both modalities to answer the question. We manually split the dataset into two groups: questions needing images to answer and those not. Extensive experiments in Table 5 demonstrate that MMICL effectively mitigates language bias as it performs equally well in both groups. On the other hand, other VLMs suffer from language bias and exhibit vastly different performances in the two groups. Specifically, MMICL achieves a significant improvement compared to other VLMs with a similar model structure (e.g., Instructblip and Ying-VLM) in reducing language bias. Comparison | 2309.07915#20 | 2309.07915#22 | 2309.07915 | [
"2305.15023"
] |
2309.07915#22 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Model (Average Performance / Don't Require Visual Information / Require Visual Information / Performance Gap): Random Guess 35.50 / 35.80 / 34.90 / -; Ying-VLM (Li et al., 2023e) 55.70 / 66.60 / 44.90 / 21.70; InstructBLIP (Dai et al., 2023) 71.30 / 82.00 / 60.70 / 21.30; Otter (Li et al., 2023a) 63.10 / 70.90 / 55.70 / 15.20; Shikra (Chen et al., 2023) 45.80 / 52.90 / 39.30 / 13.60; MMICL 82.10 / 82.60 / 81.70 / 0.90. Table 5: Zero-shot performance of different VLMs on the ScienceQA-IMG dataset in different splits. MMICL outperforms other VLMs by successfully alleviating language bias. | 2309.07915#21 | 2309.07915#23 | 2309.07915 | [
"2305.15023"
] |
2309.07915#23 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Model (VSR / IconQA-text / VisDial / IconQA-img / Bongard-HOI): Stage I (BLIP-2-FLANT5-XL) 61.62 / 45.44 / 35.43 / 48.42 / 52.75; Stage I (BLIP-2-FLANT5-XXL) 63.18 / 50.08 / 36.48 / 48.42 / 59.20; Stage I (InstructBLIP-FLANT5-XL) 61.54 / 47.53 / 35.36 / 50.11 / 53.15; Stage I (InstructBLIP-FLANT5-XXL) 65.06 / 51.39 / 36.09 / 45.10 / 63.35; Stage I + Stage II (BLIP-2-FLAN-T5-XL) 62.85 / 47.23 / 35.76 / 51.24 / 56.95; Stage I + Stage II (BLIP-2-FLAN-T5-XXL) 64.73 / 50.55 / 37.00 / 34.93 / 68.05; Stage I + Stage II (InstructBLIP-FLAN-T5-XL) 70.54 / 52.55 / 36.87 / 47.27 / 74.20; Stage I + Stage II (InstructBLIP-FLAN-T5-XXL) 66.45 / 52.00 / 37.98 / 60.85 / 67.20. Table 6: Ablation study on the training paradigm across five datasets: VSR (Liu et al., 2022), IconQA-text (Lu et al., 2021), VisDial (Das et al., 2017), IconQA-img, and Bongard-HOI (Jiang et al., 2022). with Otter shows that the lack of understanding of text-to-image reference and multiple-image relationships can result in significant language bias for Otter, even with the multimodal instruction in-context tuning. Shikra¶ mitigates the language bias by including spatial coordinate inputs and achieves the lowest performance gap except for MMICL. We also examined object hallucination in MMICL in Appendix K, which shows impressive performance. 3.6 ABLATION STUDY ON TRAINING PARADIGM | 2309.07915#22 | 2309.07915#24 | 2309.07915 | [
"2305.15023"
] |
2309.07915#24 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | We conduct an ablation study on various tasks to evaluate the effect of multi-modal in-context tuning. Table 6 displays a significant enhancement of MMICL's performance due to the multi-modal in-context tuning. Significant improvements can be observed across all types and sizes of models, especially for tasks that involve multiple images. Specifically, MMICL (Stage I + Stage II) gains 15.75 and 21.05 points of improvement on IconQA-img and Bongard-HOI, respectively, compared to the Stage I-only model. This indicates that with the help of Stage II, MMICL can handle complex multi-modal prompts and accomplish challenging tasks with multiple images. Results in Appendix J also confirm this point, with the outstanding performance of MMICL across various video datasets. 4 RELATED WORK Vision-Language Pretraining: Recent VLMs (Zhu et al., 2023; Liu et al., 2023b; Li et al., 2022; Alayrac et al., 2022; Dai et al., 2023) have been proven effective for aligning visual inputs with frozen LLMs to obtain cross-modal generalization ability. However, previous works overlooked multi-image VLMs, mainly focusing on handling single-image prompts. Tsimpoukelli et al. (2021) supports multi-image inputs using self-attention for images but performs poorly on downstream tasks. Although Flamingo (Alayrac et al., 2022) supports few-shot learning in VLMs and uses cross-attention to capture text-image relationships, it still struggles to make exact references to specific images. Multi-Modal Instruction Tuning: Instruction tuning (Kung & Peng, 2023; Wei et al., 2022) achieves great success in cross-task generalization for LLMs. However, multi-modal instruction tuning still requires further exploration. Multiinstruct (Xu et al., 2023) introduces instruction tuning to enhance VLMs' instruction-following ability. | 2309.07915#23 | 2309.07915#25 | 2309.07915 | [
"2305.15023"
] |
2309.07915#25 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Due to its architectural design, Multiinstruct still struggles with complex contexts containing multiple images. ¶We use the 0708 version of Shikra, which performs better on multiple-choice questions, to ensure a fair comparison. Otter (Li et al., 2023a) fine-tunes OpenFlamingo (Awadalla et al., 2023) to augment its instruction comprehension capabilities. However, Otter's dataset lacks text-to-image references and interconnected image-to-image data. This limitation hinders its capability to handle complex contexts that involve visual-textual relationships. # 5 CONCLUSION In this paper, we highlight the limitations of VLMs in handling complex multi-modal prompts with multiple images, which make VLMs less effective in downstream vision-language tasks. We introduce MMICL to address the aforementioned limitations and take our model a step further by extending it to multi-modal in-context learning. This breakthrough enables VLMs to better understand complex multi-modal prompts. Furthermore, MMICL sets new state-of-the-art performance on general VLM benchmarks and complex multi-modal reasoning benchmarks. # REFERENCES Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Dhruv Batra, and Devi Parikh. | 2309.07915#24 | 2309.07915#26 | 2309.07915 | [
"2305.15023"
] |
2309.07915#26 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Vqa: Visual question answering, 2016. Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. nocaps: novel object captioning at scale. In ICCV, pp. 8948â 8957, 2019. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, MikoÅ aj Bi´nkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karén Simonyan. Flamingo: a visual language model for few-shot learning. | 2309.07915#25 | 2309.07915#27 | 2309.07915 | [
"2305.15023"
] |
2309.07915#27 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 23716â 23736. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_ files/paper/2022/file/960a172bc7fbf0177ccccbb411a7d800-Paper-Conference.pdf. Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Yitzhak Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo: An open-source framework for training large autoregressive vision-language models. ArXiv, abs/2308.01390, 2023. URL https://api.semanticscholar.org/CorpusID:261043320. Jeffrey P Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samual White, et al. Vizwiz: nearly real-time answers to visual questions. In Proceedings of the 23nd annual ACM symposium on User interface software and technology, pp. 333â 342, 2010. | 2309.07915#26 | 2309.07915#28 | 2309.07915 | [
"2305.15023"
] |
2309.07915#28 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusinol, Ernest Valveny, CV Jawa- har, and Dimosthenis Karatzas. Scene text visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 4291â 4301, 2019. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. David Chen and William B Dolan. Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies, pp. 190â 200, 2011. Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. | 2309.07915#27 | 2309.07915#29 | 2309.07915 | [
"2305.15023"
] |
2309.07915#29 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Shikra: Unleashing multimodal llmâ s referential dialogue magic, 2023. 10 Preprint Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022. Xingyu Chen, Zihan Zhao, Lu Chen, JiaBao Ji, Danyang Zhang, Ao Luo, Yuxuan Xiong, and Kai Yu. WebSRC: A dataset for web-based structural reading comprehension. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 4173â 4185, Online and Punta Cana, Dominican Republic, November 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.343. URL https://aclanthology.org/2021.emnlp-main. 343. Xingyu Chen, Zihan Zhao, Lu Chen, Danyang Zhang, Jiabao Ji, Ao Luo, Yuxuan Xiong, and Kai Yu. Websrc: A dataset for web-based structural reading comprehension. arXiv preprint arXiv:2101.09465, 2021b. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. | 2309.07915#28 | 2309.07915#30 | 2309.07915 | [
"2305.15023"
] |
2309.07915#30 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https: //lmsys.org/blog/2023-03-30-vicuna/. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. Maria Cipollone, Catherine C Schifter, and Rick A Moffat. | 2309.07915#29 | 2309.07915#31 | 2309.07915 | [
"2305.15023"
] |
2309.07915#31 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Minecraft as a creative tool: A case study. International Journal of Game-Based Learning (IJGBL), 4(2):1â 14, 2014. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023. | 2309.07915#30 | 2309.07915#32 | 2309.07915 | [
"2305.15023"
] |
2309.07915#32 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. Visual dialog. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 326â 335, 2017. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. A survey on in-context learning, 2023. Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language model pretraining with autoregressive blank infilling. arXiv preprint arXiv:2103.10360, 2021. Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. | 2309.07915#31 | 2309.07915#33 | 2309.07915 | [
"2305.15023"
] |
2309.07915#33 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Eva: Exploring the limits of masked visual representation learning at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 19358â 19369, June 2023. Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023. Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723, 2020. Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790, 2023. 11 Preprint Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, July 2017. Wenbo Hu, Yifan Xu, Y Li, W Li, Z Chen, and Z Tu. Bliva: A simple multimodal llm for better handling of text-rich visual questions. arXiv preprint arXiv:2308.09936, 2023. | 2309.07915#32 | 2309.07915#34 | 2309.07915 | [
"2305.15023"
] |
2309.07915#34 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023a. Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023b. | 2309.07915#33 | 2309.07915#35 | 2309.07915 | [
"2305.15023"
] |
2309.07915#35 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Drew A. Hudson and Christopher D. Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019. Huaizu Jiang, Xiaojian Ma, Weili Nie, Zhiding Yu, Yuke Zhu, and Anima Anandkumar. Bongard-hoi: Benchmarking few-shot visual reasoning for human-object interactions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19056â 19065, 2022. Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. | 2309.07915#34 | 2309.07915#36 | 2309.07915 | [
"2305.15023"
] |
2309.07915#36 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33:2611â 2624, 2020. Po-Nien Kung and Nanyun Peng. Do models really learn to follow instructions? an empirical study of instruction tuning. arXiv preprint arXiv:2305.11383, 2023. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. | 2309.07915#35 | 2309.07915#37 | 2309.07915 | [
"2305.15023"
] |
2309.07915#37 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Otter: A multi-modal model with in-context instruction tuning, 2023a. Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. arXiv preprint arXiv:2306.00890, 2023b. Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat- Seng Chua, Siliang Tang, and Yueting Zhuang. Empowering vision-language models to follow interleaved vision-language instructions. arXiv preprint arXiv:2308.04152, 2023c. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre- training for unified vision-language understanding and generation. In International Conference on Machine Learning, pp. 12888â | 2309.07915#36 | 2309.07915#38 | 2309.07915 | [
"2305.15023"
] |
2309.07915#38 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 12900. PMLR, 2022. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre- training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023d. Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, and Qi Liu. | 2309.07915#37 | 2309.07915#39 | 2309.07915 | [
"2305.15023"
] |
2309.07915#39 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | M3it: A large-scale dataset towards multi-modal multilingual instruction tuning, 2023e. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models, 2023f. Yunshui Li, Binyuan Hui, ZhiChao Yin, Min Yang, Fei Huang, and Yongbin Li. PaCE: Unified multi-modal dialogue pre-training with progressive and compositional experts. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 13402â 13416, Toronto, Canada, July 2023g. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.749. URL https://aclanthology.org/2023.acl-long.749. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 12 Preprint Fangyu Liu, Guy Emerson, and Nigel Collier. Visual spatial reasoning. arXiv preprint arXiv:2205.00363, 2022. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023a. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. 2023b. Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: Is your multi-modal model an all-around player?, 2023c. Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. Iconqa: | 2309.07915#38 | 2309.07915#40 | 2309.07915 | [
"2305.15023"
] |
2309.07915#40 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | A new benchmark for abstract diagram understanding and visual language reasoning. arXiv preprint arXiv:2110.13214, 2021. Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507â 2521, 2022. Gen Luo, Yiyi Zhou, Tianhe Ren, Shengxin Chen, Xiaoshuai Sun, and Rongrong Ji. | 2309.07915#39 | 2309.07915#41 | 2309.07915 | [
"2305.15023"
] |
2309.07915#41 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Cheap and quick: Efficient vision-language instruction tuning for large language models. arXiv preprint arXiv:2305.15023, 2023. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. | 2309.07915#40 | 2309.07915#42 | 2309.07915 | [
"2305.15023"
] |
2309.07915#42 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Metaicl: Learning to learn in context. arXiv preprint arXiv:2110.15943, 2021. Ivona Najdenkoska, Xiantong Zhen, and Marcel Worring. Meta learning to bridge vision and language models for multimodal few-shot learning. arXiv preprint arXiv:2302.14794, 2023. OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. Junting Pan, Ziyi Lin, Yuying Ge, Xiatian Zhu, Renrui Zhang, Yi Wang, Yu Qiao, and Hongsheng Li. Retrieving-to-answer: Zero-shot video question answering with frozen large language models, 2023. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748â | 2309.07915#41 | 2309.07915#43 | 2309.07915 | [
"2305.15023"
] |
2309.07915#43 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 8763. PMLR, 2021. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1â 16. IEEE, 2020. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System op- In Pro- timizations enable training deep learning models with over 100 billion parameters. ceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD â 20, pp. 3505â 3506, New York, NY, USA, 2020. Association for Com- ISBN 9781450379984. doi: 10.1145/3394486.3406703. URL https: puting Machinery. //doi.org/10.1145/3394486.3406703. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211â 252, 2015. Babak Saleh and Ahmed Elgammal. | 2309.07915#42 | 2309.07915#44 | 2309.07915 | [
"2305.15023"
] |
2309.07915#44 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Large-scale classification of fine-art paintings: Learning the right metric on the right feature. arXiv preprint arXiv:1505.00855, 2015. 13 Preprint Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. | 2309.07915#43 | 2309.07915#45 | 2309.07915 | [
"2305.15023"
] |
2309.07915#45 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge, 2022. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards VQA models that can read. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 8317â 8326, 2019. Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. | 2309.07915#44 | 2309.07915#46 | 2309.07915 | [
"2305.15023"
] |
2309.07915#46 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018. Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Can- dace Ross. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5238â 5248, 2022a. Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Can- dace Ross. Winoground: | 2309.07915#45 | 2309.07915#47 | 2309.07915 | [
"2305.15023"
] |
2309.07915#47 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5238â 5248, 2022b. Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34:200â 212, 2021. Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022a. Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. DiffusionDB: A large-scale prompt gallery dataset for text-to-image generative models. arXiv:2210.14896 [cs], 2022b. URL https://arxiv.org/abs/2210.14896. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. | 2309.07915#46 | 2309.07915#48 | 2309.07915 | [
"2305.15023"
] |
2309.07915#48 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Finetuned language models are zero-shot learners, 2022. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. | 2309.07915#47 | 2309.07915#49 | 2309.07915 | [
"2305.15023"
] |
2309.07915#49 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38â 45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos. 6. Junbin Xiao, Xindi Shang, Angela Yao, and Tat-Seng Chua. Next-qa: Next phase of question- In Proceedings of the IEEE/CVF Conference on answering to explaining temporal actions. Computer Vision and Pattern Recognition, pp. 9777â 9786, 2021. | 2309.07915#48 | 2309.07915#50 | 2309.07915 | [
"2305.15023"
] |
2309.07915#50 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5288â 5296, 2016. Zhiyang Xu, Ying Shen, and Lifu Huang. MultiInstruct: Improving multi-modal zero-shot learn- ing via instruction tuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 11445â 11465, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.641. URL https://aclanthology.org/2023.acl-long.641. | 2309.07915#49 | 2309.07915#51 | 2309.07915 | [
"2305.15023"
] |
2309.07915#51 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 14 Preprint Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Just ask: Learning to answer questions from millions of narrated videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1686â 1697, 2021. Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, and Idan Szpektor. What you see is what you read? improving text-image alignment evaluation. arXiv preprint arXiv:2305.10400, 2023. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2, 2014. Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In Computer Visionâ ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pp. 69â 85. Springer, 2016. Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6720â 6731, 2019. Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. What matters in training a gpt4-style language model with multimodal inputs? arXiv preprint arXiv:2307.02469, 2023. | 2309.07915#50 | 2309.07915#52 | 2309.07915 | [
"2305.15023"
] |
2309.07915#52 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, and Tat-Seng Chua. Transfer visual prompt generator across llms. CoRR, abs/23045.01278, 2023a. URL https://doi.org/10. 48550/arXiv.2305.01278. Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: | 2309.07915#51 | 2309.07915#53 | 2309.07915 | [
"2305.15023"
] |
2309.07915#53 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5317â 5327, 2019. Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models, 2023b. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: En- hancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. | 2309.07915#52 | 2309.07915#54 | 2309.07915 | [
"2305.15023"
] |
2309.07915#54 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | # A RELATED WORK A.1 VISION-LANGUAGE PRETRAINING Table 7: Summary of Vision-Language Pre-Trained Models, comparing Flamingo, Meta learner, BLIP-2, LLAVA, MiniGPT-4, InstructBLIP, Shikra, Kosmos-1, Otter, and MMICL on Multi-Image Inputs, Multi-modal Instruction Tuning, and Text-to-Image Reference. Our work is inspired by recent vision-language pre-training works (Zhu et al., 2023; Liu et al., 2023b; Li et al., 2022; 2023d), which have been proven effective for aligning visual inputs and frozen LLMs to obtain cross-modal generalization ability. BLIP-2 BLIP-2 (Li et al., 2023d) bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. InstructBLIP InstructBLIP (Dai et al., 2023) performs vision-language instruction tuning based on the pre-trained BLIP-2 models with converted multi-modal datasets and the LLaVA (Liu et al., 2023b) dataset generated by GPT-4. MiniGPT-4 MiniGPT-4 (Zhu et al., 2023) aligns a CLIP visual encoder with a frozen Vicuna (Chiang et al., 2023) using an artificially collected dialog dataset. Shikra Shikra (Chen et al., 2023) is a VLM that can handle spatial coordinate inputs and outputs in natural language. This makes Shikra excel at referential dialogue and general vision-language tasks, resulting in outstanding performance. However, there is still little work focusing on VLMs with multi-image inputs. Flamingo Flamingo (Tsimpoukelli et al., 2021) achieves multi-visual inputs based on self-attention over images but performs poorly on downstream tasks. | 2309.07915#53 | 2309.07915#55 | 2309.07915 | [
"2305.15023"
] |
2309.07915#55 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Flamingo supports Few-Shot Learning (FSL) in VLMs via ICL by leveraging its robust capability to handle multi-visual inputs, and it uses cross-attention instead of self-attention to get better performance. However, it still cannot explicitly refer to specific images, so a hacky cross-attention mask is introduced. Kosmos-1 Kosmos-1 (Huang et al., 2023a) is trained from scratch on billion-scale multi-modal corpora, including interleaved text-image web page data, image-text captions, and language-only instruction tuning data. It can perform multi-modal few-shot learning and chain-of-thought processes, thereby achieving formidable performance. Otter Otter (Li et al., 2023a) is an open-source implementation of Flamingo trained with multi-modal instruction in-context tuning data. Meta learner Najdenkoska et al. (2023) uses a meta-learning objective to train an adapter that aggregates multiple image features so that the original VLM and adapter become a better few-shot learner. | 2309.07915#54 | 2309.07915#56 | 2309.07915 | [
"2305.15023"
] |
2309.07915#56 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | A.2 IN-CONTEXT LEARNING Enabling ICL in pre-trained language models (PLMs) has been well explored. MetaICL (Min et al., 2021) proposes a meta-training framework for few-shot learning that tunes a PLM to do in-context learning on a large set of training tasks. LM-BFF (Gao et al., 2020) studies few-shot fine-tuning of PLMs. However, ICL in VLMs is still less explored. Recent works on VLMs mainly focus on zero-shot evaluation with single-image inputs. # B MULTI-MODAL ICL DATA We construct two training datasets, text-image interleaved data and in-context learning data, for the text-image relationship challenge and the image-image relationship challenge, respectively. In this section, we will cover the data resources. | 2309.07915#55 | 2309.07915#57 | 2309.07915 | [
"2305.15023"
] |
2309.07915#57 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Task Dataset Used Train #samples Val Test License Captioning MS COCO (Lin et al., 2014) DiffusionDB (Wang et al., 2022b) Flickr (Young et al., 2014) NoCaps (Agrawal et al., 2019) Yes Yes Yes Yes 566,747 19,963 144,896 0 25,010 0 768 0 25,010 0 768 4,500 Custom Unknown Unknown Unknown Classification MiniImage (Russakovsky et al., 2015) Yes 38,400 9,600 12,000 Non-commercial VQA VQA v2 (Goyal et al., 2017) ST-VQA (Biten et al., 2019) Text-VQA (Singh et al., 2019) NLVR2 (Suhr et al., 2018) RefCOCO (Yu et al., 2016) Yes Yes Yes Yes Yes 30,000 26,074 27,113 86,373 26,074 30,000 0 0 6,982 0 0 4,070 5,734 6,967 4,070 CC-BY 4.0 Unknown CC BY 4.0 Unknown Unknown KVQA OK-VQA (Marino et al., 2019) Yes 9,009 5,046 0 Unknown Reasoning GQA (Hudson & Manning, 2019) VCR (Zellers et al., 2019) Winoground (Thrush et al., 2022a) Yes Yes No 943,000 25,000 0 132,062 5,000 0 12,578 5,000 800 Unknown Custom Unknown Others WikiART (Saleh & Elgammal, 2015) LLAVA-Instruct-150K (Liu et al., 2023b) Yes Yes 13,000 15,000 5,500 0 0 0 Unknown Non-commercial Table 8: Detailed task descriptions and statistics of our instruction tuning tasks, including all datasets in all types of tasks. The column â Usedâ indicates whether we use this dataset in the multi-modal in-context tuning stage. # C DATA RESOURCE | 2309.07915#56 | 2309.07915#58 | 2309.07915 | [
"2305.15023"
] |
2309.07915#58 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | The data resource used in constructing the MIC dataset is displayed in Fig. 6. Our training dataset comes from 8 task categories and 16 datasets. Image Captioning aims to produce descriptions of the given images according to different needs. Our training dataset includes MS COCO (Lin et al., 2014), DiffusionDB (Wang et al., 2022b), and Flickr 30K (Young et al., 2014). Knowledgeable Visual Question Answering (KVQA) requires the model to make use of commonsense knowledge outside the input image to answer questions. Our training dataset includes OK-VQA (Marino et al., 2019). Image Question Answering (IQA) requires the model to answer the questions based on the image correctly. Our training dataset includes VQAv2 (Goyal et al., 2017), ST-VQA (Biten et al., 2019), Text-VQA (Singh et al., 2019), WikiART (Saleh & Elgammal, 2015) and RefCOCO (Yu et al., 2016). Video Question Answering (VideoQA) requires the model to answer questions based on the video correctly. We extract eight frames per video as visual inputs for Video QA tasks. Our training dataset includes MSRVTTQA (Xu et al., 2016). | 2309.07915#57 | 2309.07915#59 | 2309.07915 | [
"2305.15023"
] |
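The eight-frames-per-video preprocessing mentioned above can be done, for example, with OpenCV; sampling the frames at evenly spaced indices is an assumption, since the text only states that eight frames are extracted per video.

```python
import cv2
import numpy as np

def extract_frames(video_path: str, num_frames: int = 8) -> list:
    """Return `num_frames` evenly spaced RGB frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames, dtype=int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))   # seek to the chosen frame index
        ok, frame_bgr = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames
```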
2309.07915#59 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Figure 6: Illustration of the data resource used to construct MIC dataset. It consists of 11 tasks and 33 different datasets. The held-in datasets are indicated by white and the held-out datasets are indicated by yellow. Video Captioning requires the model to give the caption based on the video. We extract eight frames per video as visual inputs for Video Captioning tasks. Our training dataset includes MSRVTT (Xu et al., 2016). Visual Reasoning requires the model to correctly perform image reasoning and answer questions. Our training dataset includes GQA (Hudson & Manning, 2019), VCR (Zellers et al., 2019), and NLVR2 (Suhr et al., 2018). Image Classification involves classifying an image based on a given set of candidate labels. Our training dataset includes MiniImage (Russakovsky et al., 2015). Visual Dialog requires the model to hold a meaningful dialog about visual content with humans in natural, conversational language. Our training dataset includes LLAVA-Instruct-150K (Liu et al., 2023b). Our testing dataset comes from 10 task categories and 18 datasets. Image Captioning includes the Nocaps (Agrawal et al., 2019) dataset. Knowledgeable Visual Question Answering (KVQA) includes the ScienceQA (Lu et al., 2022) and A-OKVQA (Schwenk et al., 2022) datasets. | 2309.07915#58 | 2309.07915#60 | 2309.07915 | [
"2305.15023"
] |
2309.07915#60 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Image Question Answering (IQA) includes the VizWiz (Bigham et al., 2010) dataset. Visual Reasoning includes the Winoground (Thrush et al., 2022b), VSR (Liu et al., 2022) and IconQA (Lu et al., 2021) dataset. Winoground proposes a task of matching two given images and two captions correctly. The challenge of this task is that both captions contain a completely identical set of words, only in a different order. VSR describes the spatial relation of two individual objects in the image, and a VLM needs to judge whether the caption correctly describes the image (True) or not (False). The IconQA dataset has two sub-datasets: image question answering with multiple text choice and image question answering with multiple image choice. Web Page Question Answering (Web QA) includes the Websrc (Chen et al., 2021a; Huang et al., 2023a) datasets. The model must answer questions based on the web image and the optional extracted texts. We sampled 2000 instances from Websrc for the evaluation. To align with KOSMOS-1 (Huang et al., 2023a), we only use the web image as input. Video Question Answering (VideoQA) includes the iVQA (Yang et al., 2021), MVSD (Chen & Dolan, 2011), and NextQA (Xiao et al., 2021) dateset. The NextQA dataset has two sub-datasets: video question answering with multiple choice and open-domain video question answering. | 2309.07915#59 | 2309.07915#61 | 2309.07915 | [
"2305.15023"
] |
2309.07915#61 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Figure 7: Illustration of the MMICL structure. Few-shot Image Classification includes the HatefulMemes (Kiela et al., 2020) and Bongard-HOI (Jiang et al., 2022) datasets. HatefulMemes requires the model to determine if a meme is hateful based on the image and explanation provided. Bongard-HOI is the benchmark for evaluating the model's ability in Few-Shot Visual Reasoning for Human-Object Interactions. It provides few-shot examples with challenging negatives, where positive and negative images only differ in action labels. The model is then asked whether the final image is positive or negative. We sampled 2000 instances from Bongard-HOI for the evaluation. Nonverbal Reasoning includes the Raven IQ test (Huang et al., 2023a). Each instance in the Raven IQ test has 3 or 8 images as inputs and six candidate images with a unique correct completion, and the goal is to predict the next image from the candidates. Visual Dialog includes the visual dialog dataset (Das et al., 2017). We use the question of the final dialogue as the question for the instance and take all preceding dialogues as the context to perform open-domain image question answering. OOD Generalization includes the Minecraft dataset that we construct using the Minecraft (Cipollone et al., 2014) game, which requires the VLM to identify whether an animal (i.e., cow, llama, chicken, donkey, and so on) is present in a picture. More detailed task descriptions and statistics about the datasets are shown in Table 8. | 2309.07915#60 | 2309.07915#62 | 2309.07915 | [
"2305.15023"
] |
2309.07915#62 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | # D MODEL STRUCTURE As shown in Fig. 7, MMICL treats the image and language representations equally and combines them into interleaved image-text representations, similar to the original input. Each given image is encoded by a vision encoder (e.g., ViT (Radford et al., 2021; Fang et al., 2023)) to get the vision representation of the image. Then, we use the Q-former as the VPG to extract the visual embedding. We utilize a fully connected layer as the projection layer to convert each visual embedding to the same dimension as the text embeddings of the LLM. This alignment helps the LLM to understand the images. Our approach treats the visual and text embeddings equally, enabling a flexible combination of visual and textual content. Finally, we combine the visual embeddings of multiple images with text embeddings in an interleaved style and then feed them into the LLM. We set the weights for mapping query and value vectors in the attention layers of the LLM as learnable to better adapt to the multi-modal context with multiple images. During the pre-training, we freeze the image encoder, Q-former, and the backbone LLM while jointly training the language projection and the query and value vectors of the LLM. | 2309.07915#61 | 2309.07915#63 | 2309.07915 | [
"2305.15023"
] |
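A condensed sketch of the forward pass described in Appendix D: a frozen vision encoder and Q-Former produce per-image query embeddings, a linear projection maps them into the LLM embedding space, and the result is interleaved with text embeddings. Class and attribute names are illustrative assumptions, not the released MMICL implementation.

```python
import torch
import torch.nn as nn

class InterleavedVLM(nn.Module):
    """Illustrative MMICL-style forward pass: ViT -> Q-Former -> projection -> interleave with text."""
    def __init__(self, vision_encoder, qformer, llm, vision_dim, llm_dim):
        super().__init__()
        self.vision_encoder = vision_encoder               # frozen ViT
        self.qformer = qformer                             # frozen VPG after Stage I
        self.projection = nn.Linear(vision_dim, llm_dim)   # maps visual embeddings into LLM space
        self.llm = llm                                     # frozen except attention q/v weights

    def embed_images(self, images):                        # images: (num_images, 3, H, W)
        feats = self.vision_encoder(images)                # patch features per image
        queries = self.qformer(feats)                      # fixed number of query tokens per image
        return self.projection(queries)                    # (num_images, num_query_tokens, llm_dim)

    def forward(self, text_segments, images, embed_text):
        # Interleave: [text_0][image_0][text_1][image_1]...[text_n], all as LLM input embeddings
        img_embeds = self.embed_images(images)
        pieces = []
        for i, segment in enumerate(text_segments):
            pieces.append(embed_text(segment))             # token embeddings for the text span
            if i < img_embeds.size(0):
                pieces.append(img_embeds[i])
        inputs_embeds = torch.cat(pieces, dim=0).unsqueeze(0)
        # Assumes an LLM that accepts precomputed input embeddings (e.g., Hugging Face T5's inputs_embeds)
        return self.llm(inputs_embeds=inputs_embeds)
```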
2309.07915#63 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 19 # Preprint Templates of Image Captioning (MSCOCO, Flick30k, Nocaps, Diffusiondb) (1) Carefully analyze image 0: [IMG0] {image} to generate a concise and accurate description that accurately represents the objects, people, and scenery present. (2) Use clear and concise language that accurately describes the content of image 0: [IMG0] {image}. (3) Your caption should provide sufficient information about image 0: [IMG0] {image} so that someone who has not seen the image can understand it. (4) image 0 is [IMG0] {image}. Be specific and detailed in your description of image 0, but also try to capture the essence of image 0 in a succinct way. (5) image 0 is [IMG0] {image}. Based on the image 0, describe what is contained in this photo. Your caption should be no more than a few sentences and should be grammatically correct and free of spelling errors. (6) Include information in your caption that is specific to image 0: [IMG0] {image} and avoid using generic or ambiguous descriptions. (7) image 0 is [IMG0] {image}. Based on the image 0, give a caption about this image. Think about what message or story image 0 is conveying, and try to capture that in your image caption. (8) Based on the image 0, give a caption about this image. Your caption should provide enough detail about image 0: [IMG0] {image} to give the viewer a sense of what is happening in the image. (9) Give a caption about this image. Avoid using overly complex language or jargon in your caption of image 0: [IMG0] {image} that might confuse the viewer. (10) Be creative in your approach to captioning image 0: [IMG0] {image} and try to convey a unique perspective or story. | 2309.07915#62 | 2309.07915#64 | 2309.07915 | [
"2305.15023"
] |
2309.07915#64 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Table 9: Instruction templates used for transforming datasets into instruction tuning data. (I) {image} denotes image embedding encoded by image encoder, image embedding will be concatenated with language embedding as input. <imagej> denotes image token to exact reference the j-th image in an instance as described in Sec. 2.2.1. Templates of Image Classification (MiniImagenet, etc) (1) image 0 is [IMG0] {image}. Please identify the object or concept depicted in image 0. (2) image 0 is [IMG0] {image}. What is the main subject of image 0? (3) image 0 is [IMG0] {image}. Can you recognize and label the object shown in image 0? (4) image 0 is [IMG0] {image}. Identify the category or class to which image 0 belongs. (5) image 0 is [IMG0] {image}. Based on the visual content, determine what image 0 represents. (6) image 0 is [IMG0] {image}. What is the name or label of the item captured in image 0? (7) image 0 is [IMG0] {image}. Please provide a description or identification of the subject in image 0. (8) image 0 is [IMG0] {image}. From the visual cues, determine the object or entity depicted in image 0. (9) image 0 is [IMG0] {image}. Can you recognize and name the primary element shown in image 0? (10) image 0 is [IMG0] {image}. Identify the object or concept that best describes what is depicted in image 0. Table 10: Instruction templates used for transforming datasets into instruction tuning data. (I) {image} denotes image embedding encoded by image encoder, image embedding will be concatenated with language embedding as input. <imagej> denotes image token to exact reference the j-th image in an instance as described in Sec. 2.2.1. # E DATA BALANCE Previous studies have shown that the data balance of training data could significantly influence the model performance (Dai et al., 2023). Mixing the training data of each dataset uniformly could cause the model to overfit smaller datasets and underfit larger datasets, causing poor performance. | 2309.07915#63 | 2309.07915#65 | 2309.07915 | [
"2305.15023"
] |
2309.07915#65 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | In order to alleviate this problem, we employ a sampling strategy that samples datasets with probabilities proportional to the square root of their numbers of training samples, following Dai et al. (2023). Formally, given D datasets with numbers of training samples {N_1, N_2, . . . , N_D}, the probability p_d of a data sample being selected from dataset d during training is: p_d = \sqrt{N_d} / \sum_{i=1}^{D} \sqrt{N_i} (4) # F INSTRUCTION TEMPLATE FOR DATA CONSTRUCTION As described in Sec. 2.2.3, the construction of MIC requires carefully designed templates. The instruction templates for each task are presented in this section. The templates for the MSCOCO, Flick30k, Nocaps, and Diffusiondb tasks are presented in Table 9. The templates for the MiniImagenet task are presented in Table 10. The templates for the VQAv2, ST-VQA, WikiART, and RefCOCO tasks are presented in Table 11. The templates for the OKVQA task are presented in Table 12. The templates for the MSRVTT task are presented in Table 13. The templates for the MSRVTT QA and MSVD tasks are presented in Table 14. | 2309.07915#64 | 2309.07915#66 | 2309.07915 | [
"2305.15023"
] |
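As a quick illustration of Eq. (4) in Sec. E above, the snippet below computes per-dataset sampling probabilities from training-set sizes; the dataset names and sizes are made up for demonstration only.

```python
import math

def sampling_probabilities(dataset_sizes):
    """Eq. (4): probability proportional to the square root of each dataset's size."""
    roots = {name: math.sqrt(n) for name, n in dataset_sizes.items()}
    total = sum(roots.values())
    return {name: r / total for name, r in roots.items()}

# Hypothetical sizes, for illustration only.
print(sampling_probabilities({"coco_caption": 560_000, "vqav2": 440_000, "vcr": 210_000}))
```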
2309.07915#66 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 20 # Preprint Templates of Image Question Answering (VQAv2, ST-VQA, WikiART, RefCOCO, etc) VQAv2 (1) image 0 is [IMG0] {image}. For the question, carefully examine the image and use your knowledge to determine the correct answer. Question: question Answer: (2) image 0 is [IMG0] {image}. Given the picture [IMG0], pay attention to the wording of question and answer the following question: question Answer: (3) Read the question carefully and look at image 0 labeled [IMG0] {image}. Use your intuition and common sense when answering the question: question (4) Answer each question based on the information presented in image 0: [IMG0] {image}. Given the picture [IMG0], what is the answer to the question: question Answer: (5) Please refer to image 0: [IMG0] {image} when answering the following questions: question Answer: (6) Questions is related to image 0: [IMG0] {image}. Please analyze the image and provide the correct answer for the question: question (7) Read the question carefully and look at image 0 labeled [IMG0] {image}. Use your intuition and common sense when answering the question: question (8) Consider all of the information in image 0 labeled [IMG0] {image} when answering the question: question (9) Take your time when answering each question. | 2309.07915#65 | 2309.07915#67 | 2309.07915 | [
"2305.15023"
] |
2309.07915#67 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Donâ t rush through the questions, and make sure you have carefully considered all of the information provided in image 0 labeled [IMG0] {image} and the question before making your selection. Question: question Answer: (10) Use the image 0: [IMG0] {image} as a visual aid to help you answer the questions accurately. Question:question Answer: # ST-VQA (1) Answer each question based on the information presented in image 0: [IMG0] {image}. Given the picture [IMG0], what is the answer to the question: question Answer: (2) Please refer to image 0: [IMG0] {image} when answering the following questions: question Answer: (3) Questions is related to image 0: [IMG0] {image}. Please analyze the image and provide the correct answer for the question: question (4) For each question, use the image 0: [IMG0] {image} as a reference to answer the question: question (5) Make sure your answers are based on the information presented in the image 0: [IMG0] {image}, and any OCR text associated with it. Question:question Answer: (6) Answer the question as accurately as possible using the information provided in the image 0: [IMG0] {image}, and any OCR text associated with it. Question:question Answer: (7) Please ensure that you are answering the question based on the information presented in the image 0: [IMG0] {image}.Question:question Answer: (8) The image 0: [IMG0] {image} is the primary source of information for answering the questions. Please refer to it carefully when answering question: question Answer: (9) Pay close attention to the details in image 0: [IMG0] {image}, as they may provide important information for answering the questions. Question:question Answer: (10) Use the image 0: [IMG0] {image} as a visual aid to help you understand the context and answer the questions accurately. Ques- tion:question Answer: WikiART (1) image 0 is [IMG0] {image}. Please provide information about the artist, genre, and style of this artwork. (2) image 0 is [IMG0] {image}. I would like to know the artistâ | 2309.07915#66 | 2309.07915#68 | 2309.07915 | [
"2305.15023"
] |
2309.07915#68 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | s name, the genre, and the specific style depicted in this painting. (3) image 0 is [IMG0] {image}. Could you identify the artistic genre, the artist, and the style portrayed in this artwork? (4) image 0 is [IMG0] {image}. In this painting, which genre does it belong to, who is the artist, and what is the predominant style? (5) image 0 is [IMG0] {image}. Tell me about the artist, genre, and style associated with this particular artwork. (6) image 0 is [IMG0] {image}. This piece of art seems intriguing. Can you provide details about the genre, the artist, and the style it represents? (7) image 0 is [IMG0] {image}. Identify the genre, artist, and style of this captivating artwork, please. (8) image 0 is [IMG0] {image}. | 2309.07915#67 | 2309.07915#69 | 2309.07915 | [
"2305.15023"
] |
2309.07915#69 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Iâ m curious to learn about the artistâ s name, the genre, and the distinctive style showcased in this artwork. (9) image 0 is [IMG0] {image}. Could you enlighten me about the genre, artist, and the artistic style that characterizes this beautiful piece? (10) image 0 is [IMG0] {image}. In terms of genre, artist, and style, what information can you provide regarding this fascinating artwork? RefCOCO (1) image 0 is [IMG0] {image}.Given image 0, create a descriptive caption that accurately represents the content of the image, including the item located in the {quadrant} of the image. (2) Use your knowledge of the image 0 and the {quadrant} location to generate a detailed and accurate caption that captures the essence of the scene. Keep in mind that image 0 is [IMG0] {image}. (3) image 0 is [IMG0] {image}. When writing your caption, be sure to include specific details about the item located in the {quadrant} of the image 0, such as its size, shape, color, and position. (4) Think about the intended audience for your caption and use appropriate language and tone. Consider the context of the image: [IMG0] {image} and the {quadrant} location when creating your caption, and make sure that it accurately reflects the content of the image. (5) Your caption should be concise and to the point, while still capturing the essence of the image 0 and the item located in the {quadrant} of the image. Avoid including irrelevant information in your caption that detracts from the main content of the image. Remember that image 0 is [IMG0] {image}. (6) image 0 is [IMG0] {image}. | 2309.07915#68 | 2309.07915#70 | 2309.07915 | [
"2305.15023"
] |
2309.07915#70 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Check your caption for accuracy and grammatical errors before submitting. Be creative in your approach to captioning the image and the item located in the {quadrant}. (7) image 0 is [IMG0] {image}. Given image 0, describe the item in the {quadrant} of the image. (8) image 0 is [IMG0] {image}. Using image 0, provide a caption for the object located in the {quadrant} of the image. (9) For image 0: [IMG0] {image}, describe the object in the {quadrant} of the image. (10) Given the image 0: [IMG0] {image}. Generate a description for the item located in the {quadrant} of the image. (11) image 0 is [IMG0] {image}. Using the provided image 0, describe the object located in the {quadrant} of the image. | 2309.07915#69 | 2309.07915#71 | 2309.07915 | [
"2305.15023"
] |
2309.07915#71 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Table 11: Instruction templates for tasks VQAv2, ST-VQA, WikiART and RefCOCO. 21 # Preprint Templates of Knowledge Visual Question Answering (OK-VAQ) (1) Look at image 0 labeled [IMG0] {image} carefully and read question: question. Try to understand what is being asked before selecting an answer. (2) image 0 is [IMG0] {image}. Consider all of the information in image 0 labeled [IMG0] when answering question. Look at objects, colors, shapes, and other details that may be relevant to question: question Answer: (3) image 0 is [IMG0] {image}. Read each answer choice carefully and answers question : question based on the information provided in image 0. (4) image 0 is [IMG0] {image}. Given the picture [IMG0], pay attention to the wording of question and answer the following question: question Answer: (5) Read the question carefully and look at image 0 labeled [IMG0] {image}. Use your intuition and common sense when answering the question: question (6) Consider all of the information in image 0 labeled [IMG0] {image} when answering the question: question (7) Take your time when answering each question. | 2309.07915#70 | 2309.07915#72 | 2309.07915 | [
"2305.15023"
] |
2309.07915#72 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Donâ t rush through the questions, and make sure you have carefully considered all of the information provided in image 0 labeled [IMG0] {image} and the question before making your selection. Question: question Answer: (8) Make sure your answers are based on the information presented in the image 0: [IMG0] {image}. Question:question Answer: (9) Carefully examine image 0 labeled [IMG0] {image} before answering the question. Question:question Answer: (10) Please refer to image 0: [IMG0] {image} when answering the following questions: question Answer: Table 12: Instruction templates for task OKVQA. # Templates of Video Question Captioning (MSRVTT) (1) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Watch the images carefully and write a detailed description of what you see. (2) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. After viewing the images, provide a summary of the main events or key points depicted. (3) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. | 2309.07915#71 | 2309.07915#73 | 2309.07915 | [
"2305.15023"
] |
2309.07915#73 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Pay close attention to the details in the images and provide accurate description to the images based on what you see. (4) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Utilize your comprehension skills to describe the context and events depicted in the images. (5) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Reflect on the imagesâ s narrative structure and identify any storytelling techniques or narrative devices used. Write a detailed description of what you see. (6) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Consider both the explicit and implicit information conveyed in the images to provide comprehensive description of the images. Table 13: Instruction templates for task MSRVTT. # Templates of Video Question Answering (MSRVTT QA, MSVD, etc) (1) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Watch the provided images carefully and answer the following questions based on your understanding of the images content. | 2309.07915#72 | 2309.07915#74 | 2309.07915 | [
"2305.15023"
] |
2309.07915#74 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Qusetion: {question}. Answer: (2) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Carefully analyze the visual elements of the images and answer the questions based on your observations. Qusetion: {question}. Answer: (3) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Pay close attention to the details in the images and provide accurate answers to the questions based on what you see. Qusetion: {question}. Answer: (4) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Utilize your comprehension skills to answer the questions based on the context and events depicted in the images. Qusetion: {question}. Answer: (5) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Consider the relationships between the images frames, scenes, and the provided questions to formulate accurate answers. Qusetion: {question}. | 2309.07915#73 | 2309.07915#75 | 2309.07915 | [
"2305.15023"
] |
2309.07915#75 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Answer: (6) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Use your knowledge of the imagesâ s content to answer the questions by recalling specific details and events. Qusetion: {question}. Answer: (7) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Make logical inferences based on the information presented in the images to answer the questions with reasoned explanations. Qusetion: {question}. Answer: (8) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. While answering the questions, consider both the explicit and implicit information conveyed in the images to provide comprehensive responses. Qusetion: {question}. Answer: (9) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Formulate your answers by considering the temporal context of the images and the chronological order of events. Qusetion: {question}. | 2309.07915#74 | 2309.07915#76 | 2309.07915 | [
"2305.15023"
] |
2309.07915#76 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Answer: (10) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Take into account the emotions, actions, and interactions of the characters in the images when answering the questions. Qusetion: {question}. Answer: # Table 14: Instruction templates for task MSRVTT QA and MSVD. 22 # Preprint Templates of Visual Reasoning (GQA, VCR, NLVR v2, etc) GQA (1) image 0 is [IMG0] {image}. For the question, carefully examine the image and use your knowledge to determine the correct answer. Question: {question} Answer: (2) image 0 is [IMG0] {image}. Given the picture [IMG0], pay attention to the wording of question and answer the following question: {question} Answer: (3) Read the question carefully and look at image 0 labeled [IMG0] {image}. Use your intuition and common sense when answering the question: {question} (4) Consider all of the information in image 0 labeled [IMG0] {image} when answering the question: {question} (5) The image 0: [IMG0] {image} is the primary source of information for answering the questions. Please refer to it carefully when answering question: {question} Answer: (6) Pay close attention to the details in image 0: [IMG0] {image}, as they may provide important information for answering the questions. Question:{question} Answer: (7) image 0 is [IMG0] {image}. Make sure your answer is relevant to the question and the image 0. Question:{question} Answer: (8) image 0 is [IMG0] {image}. Do not provide answers based on assumptions or personal opinions; only use the information presented in the image 0 and the question. Question:{question} Answer: (9) Look at image 0 labeled [IMG0] {image} carefully and read question: {question}. | 2309.07915#75 | 2309.07915#77 | 2309.07915 | [
"2305.15023"
] |
2309.07915#77 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Try to understand what is being asked before selecting an answer. (10) image 0 is [IMG0] {image}. Consider all of the information in image 0 labeled [IMG0] when answering question. Look at objects, colors, shapes, and other details that may be relevant to question: {question} Answer: VCR (1) {prompt}. Given the options below, based on the photo [IMG0], select the most suitable answer for the following question: {question}. Options: {options} (2) Please read the question and answer choices carefully. Select the option that best answers the question. {prompt}. Given the images, select the best option that answers the question from the available answer choices. Question: {question} Options: {options} Answer: (3) Choose the answer that best fits the description or action in the image. {prompt}. Consider the scene depicted in the images, choose the answer that best fits the description or action in the image from the available answer choices. Question: {question} Options: {options} Answer: (4) {prompt}. Examine the details in the pictures and use them to inform your answer to the question. Choose the best answer from the available options. Question: {question} Options: {options} Answer: (5) Look closely at the images and think about what is happening in the scene. {prompt}. Given the pictures, carefully examine the images and select the best answer that describes what is happening in the scene from the available answer choices. Question: {question} Options: {options} Answer: (6) Consider all of the details in the image and the wording of the question before making your selection. {prompt}. Given the pictures, consider all of the details in the image and the wording of the question before selecting the best answer choice from the available options. Question: {question} Options: {options} Answer: (7) Remember to use your common sense and reasoning skills to choose the best answer. {prompt}. Think about the images, use your common sense and reasoning skills to select the best answer choice from the available options. Question: {question} Options: {options} Answer: (8) {prompt}. Select the answer that most closely matches the description or action in images, based on the available options. Given the picture [IMG0], select the answer choice that most closely matches the description or action in the image from the available options. | 2309.07915#76 | 2309.07915#78 | 2309.07915 | [
"2305.15023"
] |
2309.07915#78 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Question: {question} Options: {options} Answer: (9) Choose the option that provides the most accurate and complete answer to the question, based on the available information. {prompt} Given the images, select the option that provides the most accurate and complete answer to the question from the available answer choices. Question: {question} Options: {options} Answer: (10) {prompt}. Use the information in the images to help you make the best choice from the available answer options for the question Question: {question} Options: {options} Answer: NLVR v2 (1) image 0 is [IMG0] {image}. Given the picture [IMG0], answer the following question: {question} Is this correct? True or False. Answer: (2) For the question: {question}, carefully examine image 0: [IMG0] {image} and use your knowledge to determine if the statement is True or False. (3) Please refer to image 0: [IMG0] {image} when answering the question: {question} Is this correct? True or False. Answer: (4) Remember to consider both the question and the information presented in image 0: [IMG0] {image} when answering the True or False question: {question} (5) image 0 is [IMG0] {image}.Answer the question: {question} based on the information presented in the image 0 and determine if the statement is True or False. (6) Carefully examine the image 0: [IMG0] {image} and use your knowledge to determine whether the statement is True or False. Question: {question} (7) Remember that the answer to each question is either True or False, so make sure you choose the correct option based on the information presented in image 0: [IMG0] {image}. Question: {question} (8) Make sure your answers are based on the information presented in the image 0: [IMG0] {image}. Question:{question} Is this correct?True or False. Answer: (9) Carefully examine image 0 labeled [IMG0] {image} before answering the question. Question:{question} True or False? Answer: Table 15: Instruction templates for tasks GQV, VCR and NLVR v2. 23 Preprint Model Commonsense Reasoning Numerical Calculation Text Translation Code Reasoning Avg. | 2309.07915#77 | 2309.07915#79 | 2309.07915 | [
"2305.15023"
] |
2309.07915#79 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | MiniGPT-4 (Zhu et al., 2023) VisualGLM-6B (Du et al., 2021) LLaVA (Liu et al., 2023b) Lynx (Zeng et al., 2023) MultiModal-GPT (Gong et al., 2023) LLaMA-Adapter-V2 (Gao et al., 2023) VPGTrans (Zhang et al., 2023a) LaVIN (Luo et al., 2023) GIT2 (Wang et al., 2022a) mPLUG-Owl (Ye et al., 2023) BLIP-2 (Li et al., 2023d) InstructBLIP (Dai et al., 2023) Otter (Li et al., 2023a) Cheetor (Li et al., 2023c) LRV-Instruction (Liu et al., 2023a) BLIVA (Hu et al., 2023) 59.29 39.29 57.14 110.71 49.29 81.43 64.29 87.14 99.29 78.57 110.00 129.29 106.43 98.57 100.71 136.43 45.00 45.00 50.00 17.50 62.50 62.50 50.00 65.00 50.00 60.00 40.00 40.00 72.50 77.50 70.00 57.50 0.00 50.00 57.50 42.50 60.00 50.00 77.50 47.50 67.50 80.00 65.00 65.00 57.50 57.50 85.00 77.50 40.00 47.50 50.00 45.00 55.00 55.00 57.50 50.00 45.00 57.50 75.00 57.50 70.00 87.50 72.50 60.00 36.07 45.45 53.66 53.93 56.70 62.23 62.32 62.41 65.45 69.02 72.50 72.95 76.61 78.02 82.05 82.86 MMICL 136.43 82.50 132.50 77.50 107.23 | 2309.07915#78 | 2309.07915#80 | 2309.07915 | [
"2305.15023"
] |
2309.07915#80 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Table 16: Evaluation of cognition. In the MME benchmark, each image has two questions, with answers restricted to "yes" or "no". The evaluation metrics for this benchmark include ACC and ACC+. ACC refers to the accuracy calculated for each question, while ACC+ represents the accuracy for each image, where both questions must be answered correctly. The Avg. metric denotes the average value across all numbers. It is important to note that all the reported figures for the baseline methods are obtained from the MME benchmark (Fu et al., 2023). We use the FLAN-T5-XXL version of MMICL to evaluate the performance. # G EXPERIMENT DETAILS Following Chung et al. (2022), we use FLAN-T5-XL and FLAN-T5-XXL (Chung et al., 2022) as the backbone LLMs. In Stage I, we freeze the vision encoder and language model and utilize the COCO captioning data and LAION-400M data (Schuhmann et al., 2021) to perform feature alignment training on the Q-former. We keep the other parts of the VLM frozen and jointly train the Q-former and projection layer. To benefit from BLIP-2's strong visual representation extraction ability, we integrate its powerful vision encoder and use it to initialize the Q-former and projection layer.|| In Stage II, we train the model for three epochs with a lower learning rate of 1e-5. The weights that map the query and value vectors in the attention layers of the LLM are learnable in this stage to better adapt to multi-modal prompts with multiple images. In this stage, we freeze the visual encoder, Q-former, and the backbone LLM and jointly train the projection layer, the query vectors, and the value vectors of the LLM. All experiments are conducted on 6 NVIDIA A40 GPUs with the ZeRO-2 offload (Rajbhandari et al., 2020) of DeepSpeed (Rasley et al., 2020) and the Hugging Face Transformers trainer (Wolf et al., 2020). The batch size is 10 and 4 for MMICL (FLAN-T5-XL) and MMICL (FLAN-T5-XXL), respectively. | 2309.07915#79 | 2309.07915#81 | 2309.07915 | [
"2305.15023"
] |
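The Stage II parameter selection described in Appendix G can be sketched as follows. This is a hedged approximation only: the attribute names (`vision_encoder`, `qformer`, `projection`, `llm`) and the substring test for query/value projections are placeholders of this sketch rather than the actual MMICL code, although Hugging Face T5 checkpoints do expose attention query/value weights under names containing `.q.` and `.v.`.

```python
# Sketch of Stage II trainable-parameter selection: freeze the vision encoder,
# Q-Former, and backbone LLM; train the projection layer plus the query/value
# projection weights inside the LLM attention layers.
def configure_stage2_parameters(model):
    for p in model.vision_encoder.parameters():
        p.requires_grad = False
    for p in model.qformer.parameters():
        p.requires_grad = False
    for p in model.projection.parameters():
        p.requires_grad = True
    for name, p in model.llm.named_parameters():
        # Placeholder test: select query ("q") and value ("v") projections only.
        p.requires_grad = (".q." in name) or (".v." in name)
    return [p for p in model.parameters() if p.requires_grad]
```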
2309.07915#81 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | The largest MMICL (FLAN-T5-XXL) requires about two days for Stage II. # H MME BENCHMARK MME comprehensively evaluates VLMs with 14 sub-tasks that encompass perception and cognition abilities. Other than OCR, perception ability includes the recognition of coarse-grained and fine-grained objects. The former identifies the existence, count, position, and color of objects. The latter recognizes movie posters, celebrities, scenes, landmarks, and artworks. Cognition includes commonsense reasoning, numerical calculation, text translation, and code reasoning. MME therefore evaluates a wide range of multi-modal abilities. The compared baselines include LLaVA (Liu et al., 2023b), MiniGPT-4 (Zhu et al., 2023), MultiModal-GPT (Gong et al., 2023), VisualGLM- (Footnote ||: We use BLIP-2 and InstructBLIP as the backbone for MMICL, so Stage I is skipped.) | 2309.07915#80 | 2309.07915#82 | 2309.07915 | [
"2305.15023"
] |
2309.07915#82 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 24 Preprint Avg. Model 50.28 50.00 LLaVA 58.17 68.33 MiniGPT-4 65.47 61.67 MultiModal-GPT 70.53 85.00 VisualGLM-6B 79.05 70.00 VPGTrans 96.36 LaVIN 185.00 97.27 LLaMA-Adapter-V2 120.00 120.00 mPLUG-Owl 96.73 185.00 143.33 66.67 153.33 72.50 123.81 101.18 153.00 79.75 134.25 121.28 InstructBLIP 160.00 135.00 73.33 148.33 110.00 141.84 105.59 145.25 138.00 136.50 129.38 BLIP-2 195.00 151.67 90.00 170.00 77.50 124.83 118.24 164.50 162.00 119.50 137.32 Lynx 190.00 118.33 96.67 158.33 65.00 112.59 145.88 158.50 140.50 146.25 133.21 GIT2 88.33 86.67 113.33 72.50 138.78 172.65 158.75 137.25 129.00 129.23 195.00 Otter 180.00 Cheetor 96.67 80.00 116.67 100.00 147.28 164.12 156.00 145.73 113.50 130.00 165.00 111.67 86.67 165.00 110.00 139.04 112.65 147.98 160.53 101.25 129.98 LRV-Instruction 180.00 138.33 81.67 180.00 87.50 155.10 140.88 151.50 89.50 133.25 133.77 BLIVA Table 17: Evaluation of coarse-grained and fine-grained recognition and OCR. | 2309.07915#81 | 2309.07915#83 | 2309.07915 | [
"2305.15023"
] |
2309.07915#83 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | The settings are the same as Table 16. It is important to note that all the reported figures for the baseline methods are obtained from the MME benchmark (Fu et al., 2023). We use the FLAN-T5-XXL version of MMICL to evaluate the performance. Model Position ACC ACC+ ACC ACC+ ACC ACC+ ACC ACC+ ACC ACC+ Existence Count Color OCR 86.67 73.33 75.00 60.00 56.67 16.67 81.67 66.67 70.00 40.00 62.67 BLIP-2 50.00 25.49 0.00 LLaVA 75.00 60.00 66.67 56.67 56.67 33.33 71.67 53.33 62.50 35.00 57.08 MiniGPT-4 51.67 0.00 mPLUG-Owl 55.00 10.00 34.00 73.33 46.67 50.00 50.00 55.00 16.67 57.50 15.00 38.92 3.33 LLaMA-Adapter-V2 76.67 56.67 58.33 43.33 28.08 42.50 51.67 0.00 61.67 23.33 50.00 VisualGLM-6B 3.33 48.33 50.00 53.33 Otter 26.50 50.00 51.67 0.00 50.00 6.67 3.33 45.00 13.33 55.00 13.33 57.50 25.00 32.42 46.67 10.00 51.67 Multimodal-GPT 27.00 0.00 50.00 56.67 13.33 50.00 PandaGPT 0.00 50.00 0.00 50.00 51.67 3.33 50.00 0.00 0.00 6.67 0.00 0.00 6.67 0.00 3.33 0.00 0.00 0.00 50.00 50.00 0.00 MMICL 90.00 80.00 86.67 73.33 55.00 26.67 88.33 73.33 60.00 40.00 67.33 Avg. | 2309.07915#82 | 2309.07915#84 | 2309.07915 | [
"2305.15023"
] |
2309.07915#84 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Table 18: Fine-grained result of the MME benchmark. 6B (Du et al., 2021), VPGTrans (Zhang et al., 2023a), LaVIN (Luo et al., 2023), mPLUG-Owl (Ye et al., 2023), LLaMA-Adapter-V2 (Gao et al., 2023), InstructBLIP (Dai et al., 2023), Otter (Li et al., 2023a), BLIP-2 (Li et al., 2023d), LRV-Instruction (Liu et al., 2023a), Cheetor (Li et al., 2023c), GIT2 (Wang et al., 2022a), Lynx (Zeng et al., 2023), and BLIVA (Hu et al., 2023). We also provide more detailed evaluation results for MMICL in Table 17, Table 18, Table 19, and Table 20. Results show that MMICL achieves the best average scores in comparison with current VLMs. # I MMBENCH BENCHMARK MMBench (Liu et al., 2023c) is a thoughtfully designed benchmark that thoroughly evaluates the diverse skills of vision-language models. The results of the different VLMs on the test set are presented in Table 21. # J UNDERSTANDING MULTIPLE IMAGES IN THE MULTI-MODAL PROMPT Videos contain more temporal information than static images. We test MMICL across different video-language tasks to evaluate whether MMICL is able to support multiple images in complex prompts. The results are presented in Table 22. Our model, MMICL, achieved significant improvements of 10.86, 4.53, and 2.45 points on MSVD-QA (Chen & Dolan, 2011), | 2309.07915#83 | 2309.07915#85 | 2309.07915 | [
"2305.15023"
] |
2309.07915#85 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Preprint Model Scene ACC ACC+ ACC ACC+ ACC ACC+ ACC ACC+ ACC ACC+ Poster Celebrity Landmark Artwork Avg. BLIP-2 LLaVA MiniGPT-4 mPLUG-Owl LLaMA-Adapter-V2 52.72 10.88 55.00 21.18 68.75 44.50 53.00 InstructBLIP VisualGLM-6B Otter Multimodal-GPT PandaGPT 79.25 62.59 58.53 37.06 81.25 64.00 79.00 59.00 76.50 60.00 66.72 50.00 24.78 0.00 49.32 19.73 58.82 24.71 68.25 45.50 59.75 30.50 56.25 27.00 44.00 77.89 57.14 66.18 34.12 78.00 57.50 86.25 73.00 63.25 33.00 62.63 38.2 74.15 49.66 67.06 34.12 84.00 69.00 59.75 20.00 76.75 57.50 59.20 81.75 64.50 59.75 24.00 55.75 20.00 42.56 54.42 12.24 50.88 27.47 4.50 55.00 14.50 52.00 50.00 0.00 45.24 45.24 17.01 49.12 24.12 50.50 17.50 50.50 23.00 46.00 12.00 33.50 37.26 56.80 19.73 46.47 10.59 72.50 45.50 56.25 13.50 50.25 0.00 48.82 0.00 50.00 50.00 0.00 49.00 0.00 9.00 52.50 14.50 2.35 0.00 48.00 5.50 1.00 MMICL 81.63 64.63 79.41 62.35 83.75 70.00 76.96 59.16 76.50 59.00 71.04 | 2309.07915#84 | 2309.07915#86 | 2309.07915 | [
"2305.15023"
] |
2309.07915#86 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Table 19: Fine-grained result of MME benchmark Model Common. Reason. Numerical Calculation Text Translation Code Reason. ACC ACC+ ACC ACC ACC ACC+ ACC ACC+ Avg. BLIP-2 68.57 LLaVA 49.29 MiniGPT-4 58.57 59.29 mPLUG-Owl LLaMA-Ada.-V2 54.29 75.00 InstructBLIP 45.71 VisualGLM-6B Otter 48.57 MultiModal-GPT 45.71 56.43 PandaGPT 41.43 11.43 34.29 24.29 14.29 54.29 12.86 10.00 5.71 17.14 40.00 50.00 47.50 50.00 52.50 35.00 45.00 47.50 50.00 50.00 0.00 0.00 20.00 10.00 5.00 5.00 0.00 10.00 20.00 0.00 55.00 52.50 42.50 60.00 52.50 55.00 55.00 55.00 50.00 52.50 10.00 5.00 15.00 20.00 5.00 10.00 10.00 10.00 5.00 5.00 55.00 20.00 36.25 50.00 27.27 0.00 67.50 45.00 41.30 47.50 10.00 35.14 52.50 10.00 30.76 35.22 0.00 47.50 27.32 0.00 50.00 50.00 28.88 0.00 45.00 10.00 28.93 28.67 0.00 47.50 MMICL 76.43 60.00 47.50 35.00 72.50 60.00 47.50 30.00 53.62 Table 20: Fine-grained result of MME benchmark | 2309.07915#85 | 2309.07915#87 | 2309.07915 | [
"2305.15023"
] |
2309.07915#87 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Method Language Model Vision Model Overall LR AR RR FP-S FP-C CP MMGPT MiniGPT-4 PandaGPT VisualGLM InstructBLIP LLaVA G2PT Otter-I Shikra LMEye mPLUG-Owl JiuTian LLaMA-7B Vincuna-7B Vincuna-13B ChatGLM-6B Vincuna-7B LLaMA-7B LLaMA-7B LLaMA-7B Vincuna-7B Flan-XL LLaMA-7B FLANT5-XXL CLIP ViT-L/14 EVA-G ImageBind ViT-H/14 EVA-CLIP EVA-G CLIP ViT-L/14 ViT-G CLIP ViT-L/14 CLIP ViT-L/14 CLIP ViT-L/14 CLIP ViT-L/14 EVA-G 16.0 12.0 30.6 33.5 33.9 36.2 39.8 48.3 60.2 61.3 62.3 64.7 1.1 13.6 15.3 11.4 21.6 15.9 14.8 22.2 33.5 36.9 37.5 46.6 23.8 32.9 41.5 48.8 47.4 53.6 46.7 63.3 69.6 73.0 75.4 76.5 20.7 8.9 22.0 27.7 22.5 28.6 31.5 39.4 53.1 55.4 56.8 66.7 18.3 28.8 20.3 35.8 33.0 41.8 41.8 46.8 61.8 60.0 67.3 66.5 5.2 11.2 20.4 17.6 24.4 20.0 34.4 36.4 50.4 68.0 52.4 51.6 18.3 28.3 47.9 41.5 41.1 40.4 49.8 60.6 71.7 68.9 67.2 68.7 MMICL FLAN-T5-XXL EVA-G 65.24 44.32 77.85 64.78 66.5 53.6 70.64 | 2309.07915#86 | 2309.07915#88 | 2309.07915 | [
"2305.15023"
] |
2309.07915#88 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Table 21: Evaluation on the MMBench dev set. All reported performance figures for the baseline methods are from the MMBench leaderboard (Liu et al., 2023c). We use the FLAN-T5-XXL version of MMICL to evaluate the performance. NExT-QA (Xiao et al., 2021), and iVQA (Yang et al., 2021), respectively, when compared to the strongest baselines. It is important to note that our training dataset did not include any videos. This indicates that MMICL effectively enhances the model's ability to understand temporal information in videos. | 2309.07915#87 | 2309.07915#89 | 2309.07915 | [
"2305.15023"
] |
2309.07915#89 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 26 # Preprint Model MSVD QA NExT QA Multi-choice iVQA Flamingo-3B (Alayrac et al., 2022) (Zero-Shot) Flamingo-3B (Alayrac et al., 2022) (4-Shot) Flamingo-9B (Alayrac et al., 2022) (Zero-Shot) Flamingo-9B (Alayrac et al., 2022) (4-Shot) Flamingo-80B (Alayrac et al., 2022) (Zero-Shot) Flamingo-80B (Alayrac et al., 2022) (4-Shot) 27.50 33.00 30.20 36.20 35.60 41.70 - - - - - - 32.70 35.20 35.20 37.70 40.70 44.10 R2A (Pan et al., 2023) 37.00 - 29.30 BLIP-2 (Li et al., 2023d) (FLANT5-XL) BLIP-2 (Li et al., 2023d) (FLANT5-XXL) 33.70 34.40 61.73 61.97 37.30 49.38 InstructBLIP (Dai et al., 2023) (FLANT5-XL) InstructBLIP (Dai et al., 2023) (FLANT5-XXL) 43.40 44.30 36.10 64.27 25.18 36.15 MMICL (FLAN-T5-XL) MMICL (FLAN-T5-XXL) MMICL (Instruct-FLAN-T5-XL) MMICL (Instruct-FLAN-T5-XXL) 47.31 55.16 53.68 52.19 66.17 64.67 65.33 68.80 41.68 41.13 49.28 51.83 Table 22: Results of MMICL compared with other VLMs across different video-languages tasks. | 2309.07915#88 | 2309.07915#90 | 2309.07915 | [
"2305.15023"
] |
2309.07915#90 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | For BLIP-2 and InstructBLIP, we concatenate the visual embeddings of all frames and place them on top of the textual prompts, following Dai et al. (2023). # K OBJECT HALLUCINATION EVALUATION We test the following VLMs on the POPE benchmark to evaluate their object hallucination performance: MMICL, Shikra (Chen et al., 2023), InstructBLIP (Dai et al., 2023), MiniGPT-4 (Zhu et al., 2023), LLaVA (Liu et al., 2023b), MM-GPT (Gong et al., 2023), and mPLUG-Owl (Ye et al., 2023). The results are presented in Table 23. Table 23: Performance of different VLMs on the POPE benchmark | 2309.07915#89 | 2309.07915#91 | 2309.07915 | [
"2305.15023"
] |
2309.07915#91 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Dataset Metric Models MMICL Shikra Random Accuracy Precision Recall F1-Score Yes 0.8729 0.9463 0.7987 0.8662 0.4351 86.90 94.40 79.27 86.19 43.26 88.57 84.09 95.13 89.27 56.57 79.67 78.24 82.20 80.17 52.53 50.37 50.19 99.13 66.64 98.77 50.10 50.05 100.00 66.71 99.90 53.97 52.07 99.60 68.39 95.63 Popular Accuracy Precision Recall F1-Score Yes 0.8270 0.8511 0.7927 0.8208 0.4657 83.97 87.55 79.20 83.16 45.23 82.77 76.27 95.13 84.66 62.37 69.73 65.86 81.93 73.02 62.20 49.87 49.93 99.27 66.44 99.40 50.00 50.00 100.00 66.67 100.00 50.90 50.46 99.40 66.94 98.57 Adversarial Accuracy Precision Recall F1-Score Yes 0.8097 0.8188 0.7953 0.8069 0.4857 83.10 85.60 79.60 82.49 46.50 72.10 65.13 95.13 77.32 73.03 65.17 61.19 82.93 70.42 67.77 49.70 49.85 99.07 66.32 99.37 50.00 50.00 100.00 66.67 100.00 50.67 50.34 99.33 66.82 98.67 # L DETAILS FOR EVALUATION In this Section. we provide details for evaluation in our experiments as Sec. 3. 27 | 2309.07915#90 | 2309.07915#92 | 2309.07915 | [
"2305.15023"
] |
2309.07915#92 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Preprint L.1 EVALUATION METRICS We provide evaluation metrics as Table 24 Dataset Metrics MSVD (Chen & Dolan, 2011) iVQA (Yang et al., 2021) NExT-QA-multiple-choice (Xiao et al., 2021) NExT-QA-opendomain (Xiao et al., 2021) Top-1 Acc. iVQA Acc. Top-1 Acc. WUPS Score. Hateful Memes (Kiela et al., 2020) WebSRC (Chen et al., 2021b) VSR (Liu et al., 2022) Ë VQAv2 (Goyal et al., 2017) VizWiz (Bigham et al., 2010) IconQA-text (Lu et al., 2021) IconQA-img (Lu et al., 2021) ScienceQA-IMG (Lu et al., 2022) Bongard-HOI (Jiang et al., 2022) VisDial (Das et al., 2017) NoCaps (Agrawal et al., 2019) A-OKVQA (Agrawal et al., 2019) Ë Flickr (Young et al., 2014) Winoground (Thrush et al., 2022b) Raven IQ Test (Huang et al., 2023a) Minecraft AUC Score Exact Match Top-1 Acc. VQA Acc. VQA Acc. Top-1 Acc. Top-1 Acc. Top-1 Acc. Top-1 Acc. Exact Match Cider Score Top-1 Acc. Cider Score Winoground mertic. Top-1 Acc. Top-1 Acc. Table 24: Summary of the evaluation datasets and metrics. These datasets are used to validate the general design of MMICL. The datasets marked with Ë are the hold-in datasets, where their training set is used in training the MMICL. # L.2 VQA TOOLS We use the same VQA Tools as the original VQA paper (Agrawal et al., 2016) and use it in all metrics using the VQA accuracy. # M BASELINES Baselines We primarily compare MMICL with recently proposed powerful multi-modal approaches, including: | 2309.07915#91 | 2309.07915#93 | 2309.07915 | [
"2305.15023"
] |
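For reference, the VQA accuracy used by the VQA Tools cited in Sec. L.2 (Agrawal et al., 2016) scores a prediction against the ten human answers as min(#matching annotators / 3, 1). The sketch below shows only that core formula; the official tools additionally normalize answers and average over annotator subsets, which is omitted here.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Core VQA accuracy: min(#humans that gave this answer / 3, 1).
    The official tools also normalize answers and average over annotator subsets."""
    matches = sum(1 for a in human_answers if a.strip().lower() == predicted.strip().lower())
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators gave the predicted answer -> accuracy 2/3.
print(vqa_accuracy("blue", ["blue", "gray", "light blue", "blue", "navy",
                            "teal", "green", "white", "black", "red"]))
```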
2309.07915#93 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | (1) Flamingo (Alayrac et al., 2022) where a VLM is trained on large-scale multi-modal- web corpora containing arbitrarily interleaved text and images; (2) KOSMOS-1 (Huang et al., 2023a) which is trained from scratch on web-scale multi-modal corpora; (3) BLIP-2-FLAN-T5 (Li et al., 2023d) where an instruction-tuned Flan-T5 (Chung et al., 2022) is connected with a powerful visual encoder to perform a series of multi-modal tasks; (4) InstructBLIP-FLAN-T5 (Dai et al., 2023), a recently proposed instruction tuning enhanced multi-modal agents with FLAN-T5 with converted multi-modal datasets and the LLaVA (Liu et al., 2023b) dataset generated by GPT-4 (OpenAI, 2023); (5) Shikra (Chen et al., 2023), a VLM that can handle spatial coordinate inputs and outputs in natural language without the need for extra vocabularies or external plugin models. All inputs and outputs of Shikra are in natural language form. (6) Otter (Li et al., 2023a), an open-source implementation of flamingo (Alayrac et al., 2022). By utilizing multi-modal instruction in-context tuning data, Otter fine-tunes Openflamingo to augment its instruction comprehension capabilities while maintaining its ability to learn in context; | 2309.07915#92 | 2309.07915#94 | 2309.07915 | [
"2305.15023"
] |
2309.07915#94 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | (7) Ying-VLM (Li et al., 2023e), a VLM trained on a multi-modal multilingual instruction tuning dataset, showcasing its potential to answer complex questions requiring world knowledge, generalize to unseen video tasks, and comprehend unseen instructions in Chinese. # N OOD GENERALIZATION TO UNSEEN DOMAIN Method / Shot / Top-1 Acc.: MiniGPT-4 (Vicuna-7B), Zero-Shot, 35.10%; MiniGPT-4 (Vicuna-13B), Zero-Shot, 48.40%; MMICL (FLAN-T5-XL), Zero-Shot, 55.41%; MMICL (FLAN-T5-XL), 4-Shot, 64.05%; MMICL (FLAN-T5-XXL), 8-Shot, 65.41%. Table 25: Results of generalization of MMICL to an unseen domain in Minecraft. Results show that MMICL is able to generalize to unseen domains and tasks given a few examples. In an unseen, challenging domain with limited exemplars, analyzing regular patterns, reasoning, and learning new knowledge (OOD generalization to an unseen domain) is a great way to test multi-modal ICL ability. We construct a task using the Minecraft game (Cipollone et al., 2014), which requires the VLM to identify whether an animal (i.e., cow, llama, chicken, donkey, and so on) is present in a picture, as illustrated in case (d) of Fig. 1. We collect 550 cases and cast the task as a vision-to-text question-answering task to evaluate the OOD generalization performance of MMICL. The results are shown in Table 25. Results demonstrate that MMICL is able to generalize to the Minecraft domain even though the images are extremely different from the images used during training in Stages I and II (Sec. 2.4). | 2309.07915#93 | 2309.07915#95 | 2309.07915 | [
"2305.15023"
] |
2309.07915#95 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | 29 | 2309.07915#94 | 2309.07915 | [
"2305.15023"
] |
|
2309.07864#0 | The Rise and Potential of Large Language Model Based Agents: A Survey | 3 2 0 2 p e S 9 1 ] I A . s c [ 3 v 4 6 8 7 0 . 9 0 3 2 : v i X r a # The Rise and Potential of Large Language Model Based Agents: A Survey Zhiheng Xiâ â , Wenxiang Chenâ , Xin Guoâ , Wei Heâ , Yiwen Dingâ , Boyang Hongâ , Ming Zhangâ , Junzhe Wangâ , Senjie Jinâ , Enyu Zhouâ , Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhangâ , Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang and Tao Guiâ Fudan NLP Group # Abstract For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human- agent cooperation. | 2309.07864#1 | 2309.07864 | [
"2305.08982"
] |
|
2309.07864#1 | The Rise and Potential of Large Language Model Based Agents: A Survey | Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List. # â Correspondence to: [email protected], {qz, tgui}@fudan.edu.cn â Equal Contribution. # Contents # 1 Introduction # 2 Background # 2.1 Origin of AI Agent | 2309.07864#0 | 2309.07864#2 | 2309.07864 | [
"2305.08982"
] |
2309.07864#2 | The Rise and Potential of Large Language Model Based Agents: A Survey | . . . ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Technological Trends in Agent Research . . . . . . . . . . . . . . . . . . . . . . . 2.3 Why is LLM suitable as the primary component of an Agentâ s brain? . . . . . . . . 3.1 Brain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.1 Natural Language Interaction . . . . . . . . . . . . . . . . . . . . . . . . 3.1.2 Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.3 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.4 Reasoning and Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.5 Transferability and Generalization . . . . . . . . . . . . . . . . . . . . . . 3.2 Perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.1 Textual Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.2 Visual Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.3 Auditory Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.4 Other Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3.1 Textual Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3.2 Tool Using . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3.3 Embodied Action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 General Ability of Single Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.1 Task-oriented Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.2 Innovation-oriented Deployment . . . . . . . . . . . . . . . . . . . . . . . 4.1.3 Lifecycle-oriented Deployment . . . . . . . . . . . . . . . . . . . . . . . 4.2 Coordinating Potential of Multiple Agents . . . . . . . . . . . . . . . . . . . . . . 4.2.1 Cooperative Interaction for Complementarity . . . . . . . . . . . . . . . . 4.2.2 Adversarial Interaction for Advancement . . . . . . . . . . . . . . . . . . 4.3 Interactive Engagement between Human and Agent . . . . . . . . . . . . . . . . . 4.3.1 Instructor-Executor Paradigm . . . . . . . . . . . . . . . . . . . . . . . . 4.3.2 Equal Partnership Paradigm . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Behavior and Personality of LLM-based Agents . . . . . . . . . . . . . . . . . . . 9 10 11 12 13 14 15 16 17 17 17 18 19 19 20 20 21 24 25 25 27 27 28 28 30 30 31 32 33 34 | 2309.07864#1 | 2309.07864#3 | 2309.07864 | [
"2305.08982"
] |
2309.07864#3 | The Rise and Potential of Large Language Model Based Agents: A Survey | # 3 The Birth of An Agent: Construction of LLM-based Agents # 4 Agents in Practice: Harnessing AI for Good # 5 Agent Society: From Individuality to Sociality 5.1.1 # Social Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 4 6 # onan 6 7 34 5.2 Environment for Agent Society . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.1 Text-based Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.2 Virtual Sandbox Environment . . . . . . . . . . . . . . . . . . . . . . . . 5.2.3 Physical Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Society Simulation with LLM-based Agents . . . . . . . . . . . . . . . . . . . . . 5.3.1 Key Properties and Mechanism of Agent Society . . . . . . . . . . . . . . 5.3.2 Insights from Agent Society . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.3 Ethical and Social Risks in Agent Society . . . . . . . . . . . . . . . . . . 6.1 Mutual Benefits between LLM Research and Agent Research . . . . . . . . . . . . 6.2 Evaluation for LLM-based Agents . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Security, Trustworthiness and Other Potential Risks of LLM-based Agents . . . . . 6.3.1 Adversarial Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.2 Trustworthiness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.3 Other Potential Risks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4 Scaling Up the Number of Agents . . . . . . . . . . . . . . . . . . . . . . . . . . 6.5 Open Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 37 37 37 38 38 39 40 41 41 42 44 44 44 45 45 46 | 2309.07864#2 | 2309.07864#4 | 2309.07864 | [
"2305.08982"
] |
2309.07864#4 | The Rise and Potential of Large Language Model Based Agents: A Survey | # 6 Discussion 7 Conclusion 3 48 # Introduction â If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation.â â Denis Diderot, 1875 Artificial Intelligence (AI) is a field dedicated to designing and developing systems that can replicate human-like intelligence and abilities [1]. As early as the 18th century, philosopher Denis Diderot introduced the idea that if a parrot could respond to every question, it could be considered intelligent [2]. While Diderot was referring to living beings, like the parrot, his notion highlights the profound concept that a highly intelligent organism could resemble human intelligence. In the 1950s, Alan Turing expanded this notion to artificial entities and proposed the renowned Turing Test [3]. This test is a cornerstone in AI and aims to explore whether machines can display intelligent behavior comparable to humans. These AI entities are often termed â agentsâ , forming the essential building blocks of AI systems. Typically in AI, an agent refers to an artificial entity capable of perceiving its surroundings using sensors, making decisions, and then taking actions in response using actuators [1; 4]. The concept of agents originated in Philosophy, with roots tracing back to thinkers like Aristotle and Hume [5]. It describes entities possessing desires, beliefs, intentions, and the ability to take actions [5]. This idea transitioned into computer science, intending to enable computers to understand usersâ interests and autonomously perform actions on their behalf [6; 7; 8]. As AI advanced, the term â agentâ found its place in AI research to depict entities showcasing intelligent behavior and possessing qualities like autonomy, reactivity, pro-activeness, and social ability [4; 9]. Since then, the exploration and technical advancement of agents have become focal points within the AI community [1; 10]. AI agents are now acknowledged as a pivotal stride towards achieving Artificial General Intelligence (AGI) 1, as they encompass the potential for a wide range of intelligent activities [4; 11; 12]. From the mid-20th century, significant strides were made in developing smart AI agents as research delved deep into their design and advancement [13; 14; 15; 16; 17; 18]. | 2309.07864#3 | 2309.07864#5 | 2309.07864 | [
"2305.08982"
] |
2309.07864#5 | The Rise and Potential of Large Language Model Based Agents: A Survey | However, these efforts have predominantly focused on enhancing specific capabilities, such as symbolic reasoning, or mastering particular tasks like Go or Chess [19; 20; 21]. Achieving a broad adaptability across varied scenarios remained elusive. Moreover, previous studies have placed more emphasis on the design of algorithms and training strategies, overlooking the development of the modelâ s inherent general abilities like knowledge memorization, long-term planning, effective generalization, and efficient interaction [22; 23]. Actually, enhancing the inherent capabilities of the model is the pivotal factor for advancing the agent further, and the domain is in need of a powerful foundational model endowed with a variety of key attributes mentioned above to serve as a starting point for agent systems. The development of large language models (LLMs) has brought a glimmer of hope for the further development of agents [24; 25; 26], and significant progress has been made by the community [22; 27; 28; 29]. According to the notion of World Scope (WS) [30] which encompasses five levels that depict the research progress from NLP to general AI (i.e., Corpus, Internet, Perception, Embodiment, and Social), the pure LLMs are built on the second level with internet-scale textual inputs and outputs. Despite this, LLMs have demonstrated powerful capabilities in knowledge acquisition, instruction comprehension, generalization, planning, and reasoning, while displaying effective natural language interactions with humans. These advantages have earned LLMs the designation of sparks for AGI [31], making them highly desirable for building intelligent agents to foster a world where humans and agents coexist harmoniously [22]. Starting from this, if we elevate LLMs to the status of agents and equip them with an expanded perception space and action space, they have the potential to reach the third and fourth levels of WS. Furthermore, these LLMs-based agents can tackle more complex tasks through cooperation or competition, and emergent social phenomena can be observed when placing them together, potentially achieving the fifth WS level. As shown in Figure 1, we envision a harmonious society composed of AI agents where human can also participate. In this paper, we present a comprehensive and systematic survey focusing on LLM-based agents, attempting to investigate the existing studies and prospective avenues in this burgeoning field. To this end, we begin by delving into crucial background information (§ 2). | 2309.07864#4 | 2309.07864#6 | 2309.07864 | [
"2305.08982"
] |
2309.07864#6 | The Rise and Potential of Large Language Model Based Agents: A Survey | In particular, we commence by tracing the origin of AI agents from philosophy to the AI domain, along with a brief overview of the [Footnote 1: Also known as Strong AI.] [Figure 1 image: An Envisioned Agent Society] Figure 1: Scenario of an envisioned society composed of AI agents, in which humans can also participate. The above image depicts some specific scenes within this society. In the kitchen, one agent orders dishes, while another agent is responsible for planning and solving the cooking task. At the concert, three agents are collaborating to perform in a band. Outdoors, two agents are discussing lantern-making, planning the required materials and finances by selecting and using tools. Users can participate in any of these stages of this social activity. debate surrounding the existence of artificial agents (§ 2.1). Next, we take the lens of technological trends to provide a concise historical review of the development of AI agents (§ 2.2). Finally, we delve into an in-depth introduction to the essential characteristics of agents and elucidate why large language models are well-suited to serve as the main component of brains or controllers for AI agents (§ 2.3). Inspired by the definition of the agent, we present a general conceptual framework for LLM-based agents with three key parts: brain, perception, and action (§ 3), and the framework can be tailored to suit different applications (a structural sketch of this three-part decomposition is given below). We first introduce the brain, which is primarily composed of a large language model (§ 3.1). Similar to humans, the brain is the core of an AI agent because it not only stores crucial memories, information, and knowledge but also undertakes essential tasks of information processing, decision-making, reasoning, and planning. It is the key determinant of whether the agent can exhibit intelligent behaviors. Next, we introduce the perception module (§ 3.2). For an agent, this module serves a role similar to that of sensory organs for humans. Its primary function is to expand the agent's perceptual space from text-only to a multimodal space that includes diverse sensory modalities like text, sound, visuals, touch, smell, and more. | 2309.07864#5 | 2309.07864#7 | 2309.07864 | [
"2305.08982"
] |
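As a structural illustration of the brain-perception-action framework referenced above, the following sketch composes three placeholder modules into one agent. It is a minimal sketch under assumed names: the classes, methods, and toy behaviors are hypothetical and mirror the conceptual decomposition only, not any concrete system from the survey.

```python
# Conceptual brain-perception-action decomposition (illustrative only; the
# class and method names are hypothetical, mirroring the survey's framework).

class Perception:
    """Maps raw multimodal inputs (text, images, audio, ...) to representations."""
    def perceive(self, raw_input):
        return {"modality": "text", "content": str(raw_input)}

class Brain:
    """LLM-centered core: stores memory and performs reasoning/planning (stubbed here)."""
    def __init__(self):
        self.memory = []
    def decide(self, observation):
        self.memory.append(observation)                       # memorize
        return f"plan based on {observation['content']!r}"    # reason/plan

class Action:
    """Turns decisions into text output, embodied actions, or tool calls."""
    def execute(self, decision):
        return f"executed: {decision}"

class Agent:
    def __init__(self):
        self.perception, self.brain, self.action = Perception(), Brain(), Action()
    def step(self, raw_input):
        obs = self.perception.perceive(raw_input)   # perceive
        return self.action.execute(self.brain.decide(obs))  # decide, then act

print(Agent().step("What is on the table?"))
```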
2309.07864#7 | The Rise and Potential of Large Language Model Based Agents: A Survey | This expansion enables the agent to better perceive information from the external environment. Finally, we present the action module for expanding the action space of an agent (§ 3.3). Specifically, we expect the agent to be able to produce textual output, take embodied actions, and use tools so that it can better respond to environmental changes and provide feedback, and even alter and shape the environment. After that, we provide a detailed and thorough introduction to the practical applications of LLM-based agents and elucidate the foundational design pursuit: | 2309.07864#6 | 2309.07864#8 | 2309.07864 | [
"2305.08982"
] |
2309.07864#8 | The Rise and Potential of Large Language Model Based Agents: A Survey | "Harnessing AI for good" (§ 4). To start, we delve into the current applications of single agents and discuss their performance in text-based tasks and simulated exploration environments, highlighting their capabilities in handling specific tasks, driving innovation, and exhibiting human-like survival skills and adaptability (§ 4.1). Following that, we take a retrospective look at the development history of multi-agent systems. We introduce the interactions between agents in LLM-based multi-agent system applications, where they engage in | 2309.07864#7 | 2309.07864#9 | 2309.07864 | [
"2305.08982"
] |
2309.07864#9 | The Rise and Potential of Large Language Model Based Agents: A Survey | collaboration, negotiation, or competition. Regardless of the mode of interaction, agents collectively strive toward a shared objective (§ 4.2). Lastly, considering the potential limitations of LLM-based agents in aspects such as privacy and security, ethical constraints, and data deficiencies, we discuss human-agent collaboration. We summarize the paradigms of collaboration between agents and humans: the instructor-executor paradigm and the equal partnership paradigm, along with specific applications in practice (§ 4.3). Building upon the exploration of practical applications of LLM-based agents, we now shift our focus to the concept of the "Agent Society", examining the intricate interactions between agents and their surrounding environments (§ 5). This section begins with an investigation into whether these agents exhibit human-like behavior and possess corresponding personalities (§ 5.1). Furthermore, we introduce the social environments within which the agents operate, including text-based environments, virtual sandboxes, and the physical world (§ 5.2). Unlike the previous section (§ 3.2), here we focus on the diverse types of environments rather than on how the agents perceive them. Having established the foundation of agents and their environments, we proceed to unveil the simulated societies that they form (§ 5.3). We discuss the construction of a simulated society, and go on to examine the social phenomena that emerge from it. Specifically, we emphasize the lessons and potential risks inherent in simulated societies. Finally, we discuss a range of key topics (§ 6) and open problems within the field of LLM-based agents: (1) the mutual benefits and inspirations between LLM research and agent research, where we demonstrate that the development of LLM-based agents has provided many opportunities for both the agent and LLM communities (§ 6.1); (2) existing evaluation efforts and some prospects for LLM-based agents along four dimensions, including utility, sociability, values, and the ability to continually evolve (§ 6.2); (3) potential risks of LLM-based agents, where we discuss the adversarial robustness and trustworthiness of LLM-based agents. | 2309.07864#8 | 2309.07864#10 | 2309.07864 | [
"2305.08982"
] |
2309.07864#10 | The Rise and Potential of Large Language Model Based Agents: A Survey | We also include a discussion of other risks such as misuse, unemployment, and threats to the well-being of the human race (§ 6.3); (4) scaling up the number of agents, where we discuss the potential advantages and challenges of scaling up agent counts, along with the approaches of pre-determined and dynamic scaling (§ 6.4); (5) several open problems, such as the debate over whether LLM-based agents represent a potential path to AGI, the challenges of moving from virtual simulated environments to the physical environment, collective intelligence in AI agents, and Agent as a Service (§ 6.5). Overall, we hope this paper can provide inspiration to researchers and practitioners in relevant fields. # 2 Background In this section, we provide crucial background information to lay the groundwork for the subsequent content. We first discuss the origin of AI agents, from philosophy to the realm of AI, coupled with a discussion of the discourse regarding the existence of artificial agents (§ 2.1). Subsequently, we summarize the development of AI agents through the lens of technological trends (§ 2.2). Finally, we introduce the key characteristics of agents and demonstrate why LLMs are suitable to serve as the main part of the brains of AI agents (§ 2.3). # 2.1 Origin of AI Agent | 2309.07864#9 | 2309.07864#11 | 2309.07864 | [
"2305.08982"
] |
2309.07864#11 | The Rise and Potential of Large Language Model Based Agents: A Survey | "Agent" is a concept with a long history that has been explored and interpreted in many fields. Here, we first explore its origins in philosophy, discuss whether artificial products can possess agency in a philosophical sense, and examine how related concepts have been introduced into the field of AI. Agent in philosophy. The core idea of an agent has a historical background in philosophical discussions, with its roots traceable to influential thinkers such as Aristotle and Hume, among others [5]. In a general sense, an "agent" is an entity with the capacity to act, and the term "agency" denotes the exercise or manifestation of this capacity [5]. In a narrow sense, "agency" usually refers to the performance of intentional actions; correspondingly, the term "agent" denotes entities that possess desires, beliefs, intentions, and the ability to act [32; 33; 34; 35]. Note that agents can encompass not only individual human beings but also other entities in both the physical and virtual world. Importantly, the concept of an agent involves individual autonomy, granting agents the ability to exercise volition, make choices, and take actions, rather than passively reacting to external stimuli. | 2309.07864#10 | 2309.07864#12 | 2309.07864 | [
"2305.08982"
] |
2309.07864#12 | The Rise and Potential of Large Language Model Based Agents: A Survey | From the perspective of philosophy, are artificial entities capable of agency? In a general sense, if we define agents as entities with the capacity to act, AI systems do exhibit a form of agency [5]. However, the term agent is more often used to refer to entities or subjects that possess consciousness, intentionality, and the ability to act [32; 33; 34]. Within this framework, it is not immediately clear whether artificial systems can possess agency, as it remains uncertain whether they possess internal states that form the basis for attributing desires, beliefs, and intentions. Some argue that attributing psychological states like intention to artificial agents is a form of anthropomorphism and lacks scientific rigor [5; 36]. As Barandiaran et al. [36] stated, "Being specific about the requirements for agency has told us a lot about how much is still needed for the development of artificial forms of agency." In contrast, some researchers believe that, in certain circumstances, employing the intentional stance (that is, interpreting agent behavior in terms of intentions) can provide a better description, explanation, and abstraction of the actions of artificial agents, much as it does for humans [11; 37; 38]. With the advancement of language models, the potential emergence of artificial intentional agents appears more promising [24; 25; 39; 40; 41]. In a rigorous sense, language models merely function as conditional probability models, using the input to predict the next token [42] (a standard formulation is sketched below). Humans, by contrast, incorporate social and perceptual context and speak according to their mental states [43; 44]. Consequently, some researchers argue that the current paradigm of language modeling is not compatible with the intentional actions of an agent [30; 45]. However, other researchers propose that language models can, in a narrow sense, serve as models of agents [46; 47]. They argue that, during context-based next-word prediction, current language models can sometimes infer approximate, partial representations of the beliefs, desires, and intentions held by the agent who generated the context. With these representations, the language models can then generate utterances as humans do. To support this viewpoint, they conduct experiments that provide some empirical evidence [46; 48; 49]. Introduction of agents into AI. | 2309.07864#11 | 2309.07864#13 | 2309.07864 | [
"2305.08982"
] |
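To make the "conditional probability model" characterization above concrete, the following is the standard autoregressive factorization used by most language models. It is a generic textbook formulation rather than one taken from the surveyed works; the symbols (tokens x_t, parameters theta) are illustrative.

```latex
% Standard autoregressive language-modeling formulation (illustrative notation).
% A token sequence x_1, ..., x_T is modeled as a product of next-token conditionals:
p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t})
% Training maximizes the log-likelihood of observed text:
\mathcal{L}(\theta) = \sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
% Generation samples or selects the next token from p_\theta(x_t \mid x_{<t}),
% which is the sense in which the model "uses the input to predict the next token".
```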
2309.07864#13 | The Rise and Potential of Large Language Model Based Agents: A Survey | It might come as a surprise that researchers within the mainstream AI community devoted relatively little attention to concepts related to agents until the mid-to-late 1980s. Nevertheless, there has been a significant surge of interest in this topic within the computer science and artificial intelligence communities since then [50; 51; 52; 53]. As Wooldridge et al. [4] stated, we can define AI as a subfield of computer science that aims to design and build computer-based agents that exhibit aspects of intelligent behavior. | 2309.07864#12 | 2309.07864#14 | 2309.07864 | [
"2305.08982"
] |
2309.07864#14 | The Rise and Potential of Large Language Model Based Agents: A Survey | So we can treat "agent" as a central concept in AI. When the concept of the agent is introduced into the field of AI, its meaning undergoes some changes. In the realm of philosophy, an agent can be a human, an animal, or even a concept or entity with autonomy [5]. However, in the field of artificial intelligence, an agent is a computational entity [4; 7]. Because concepts like consciousness and desires seem metaphysical for computational entities [11], and given that we can only observe the behavior of the machine, many AI researchers, including Alan Turing, suggest temporarily setting aside the question of whether an agent is "actually" thinking or literally possesses a "mind" [3]. Instead, researchers employ other attributes to describe an agent, such as the properties of autonomy, reactivity, pro-activeness, and social ability [4; 9]. Other researchers hold that intelligence is "in the eye of the beholder"; it is not an innate, isolated property [15; 16; 54; 55]. In essence, an AI agent is not equivalent to a philosophical agent; rather, it is a concretization of the philosophical concept of an agent in the context of AI. In this paper, we treat AI agents as artificial entities that are capable of perceiving their surroundings using sensors, making decisions, and then taking actions in response using actuators [1; 4] (a minimal sketch of this sense-decide-act loop is given below). # 2.2 Technological Trends in Agent Research The evolution of AI agents has undergone several stages, and here we take the lens of technological trends to review its development briefly. Symbolic Agents. In the early stages of artificial intelligence research, the predominant approach was symbolic AI, characterized by its reliance on symbolic logic [56; 57]. This approach employed logical rules and symbolic representations to encapsulate knowledge and facilitate reasoning. Early AI agents were built on this approach [58], and they primarily focused on two problems: the transduction problem and the representation/reasoning problem [59]. These agents aim to emulate human thinking patterns. They possess explicit and interpretable reasoning | 2309.07864#13 | 2309.07864#15 | 2309.07864 | [
"2305.08982"
] |
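The sense-decide-act loop referenced above can be illustrated with a toy rule-based agent in the symbolic spirit: explicit condition-action rules encode the knowledge, sensors feed percepts in, and actuators carry the chosen action out. This is a generic sketch, not an implementation from the surveyed works; the Environment class, percepts, and rules are hypothetical placeholders.

```python
# Minimal, illustrative sense-decide-act agent with explicit symbolic rules.
# The environment, percepts, and rules below are hypothetical placeholders.

class Environment:
    """Toy environment: the agent's 'sensor' reads a dict of percepts."""
    def __init__(self):
        self.state = {"obstacle_ahead": False, "goal_visible": True}

    def sense(self):
        return dict(self.state)          # sensor reading

    def act(self, action):
        print(f"actuator -> {action}")   # actuator side effect


# Knowledge as explicit condition-action rules (the symbolic-agent flavor):
RULES = [
    (lambda p: p["obstacle_ahead"], "turn_left"),
    (lambda p: p["goal_visible"],   "move_forward"),
]

def decide(percepts):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percepts):
            return action
    return "wait"

env = Environment()
for _ in range(3):                       # sense-decide-act loop
    percepts = env.sense()
    env.act(decide(percepts))
```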
2309.07864#15 | The Rise and Potential of Large Language Model Based Agents: A Survey | frameworks, and due to their symbolic nature, they exhibit a high degree of expressive capability [13; 14; 60]. A classic example of this approach is knowledge-based expert systems. However, symbolic agents faced limitations in handling uncertainty and large-scale real-world problems [19; 20]. Additionally, due to the intricacies of symbolic reasoning algorithms, it was challenging to find an efficient algorithm capable of producing meaningful results within a finite timeframe [20; 61]. Reactive agents. Unlike symbolic agents, reactive agents do not use complex symbolic reasoning. Instead, they primarily focus on the interaction between the agent and its environment, emphasizing quick, real-time responses [15; 16; 20; 62; 63]. These agents are mainly based on a sense-act loop, efficiently perceiving and reacting to the environment. The design of such agents prioritizes direct input-output mappings rather than intricate reasoning and symbolic operations [52]. However, reactive agents also have limitations: they typically require fewer computational resources, enabling quicker responses, but they may lack complex higher-level decision-making and planning capabilities. Reinforcement learning-based agents. With the improvement of computational capabilities and data availability, along with a growing interest in simulating interactions between intelligent agents and their environments, researchers have begun to utilize reinforcement learning methods to train agents to tackle more challenging and complex tasks [17; 18; 64; 65]. The primary concern in this field is how to enable agents to learn through interaction with their environments so that they achieve the maximum cumulative reward on specific tasks [21]. Initially, reinforcement learning (RL) agents were primarily based on fundamental techniques such as policy search and value function optimization, exemplified by Q-learning [66] and SARSA [67] (a Q-learning update is sketched below). With the rise of deep learning, the integration of deep neural networks and reinforcement learning, known as Deep Reinforcement Learning (DRL), emerged [68; 69]. This allows agents to learn intricate policies from high-dimensional inputs, leading to numerous significant accomplishments like AlphaGo [70] and DQN [71]. The advantage of this approach lies in its capacity to enable agents to learn autonomously in unknown environments without explicit human intervention, which allows for its wide application in an array of domains, from gaming to robot control and beyond. Nonetheless, reinforcement learning faces challenges including long training times, low sample efficiency, and stability concerns, particularly when applied in complex real-world environments [21]. | 2309.07864#14 | 2309.07864#16 | 2309.07864 | [
"2305.08982"
] |
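The value-function optimization mentioned above can be made concrete with tabular Q-learning. The sketch below is the generic textbook update under an assumed toy interface (hashable states, a finite action list, and an env with reset/step methods); it is not code from the surveyed agents.

```python
# Tabular Q-learning sketch (generic textbook form; the env interface,
# states, and actions are assumed placeholders, not from the surveyed works).
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn Q(s, a) so the greedy policy maximizes cumulative discounted reward."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy selection: explore occasionally, otherwise exploit
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # Q-learning update: move Q toward the bootstrapped target
            best_next = max(Q[(next_state, a)] for a in actions)
            target = reward + gamma * best_next * (not done)
            Q[(state, action)] += alpha * (target - Q[(state, action)])

            state = next_state
    return Q
```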
2309.07864#16 | The Rise and Potential of Large Language Model Based Agents: A Survey | Agents with transfer learning and meta-learning. Traditionally, training a reinforcement learning agent requires huge sample sizes and long training times, and the result lacks generalization capability [72; 73; 74; 75; 76]. Consequently, researchers have introduced transfer learning to expedite an agent's learning on new tasks [77; 78; 79]. Transfer learning reduces the burden of training on new tasks and facilitates the sharing and migration of knowledge across different tasks, thereby enhancing learning efficiency, performance, and generalization. Furthermore, meta-learning has also been introduced to AI agents [80; 81; 82; 83; 84]. Meta-learning focuses on learning how to learn, enabling an agent to swiftly infer optimal policies for new tasks from a small number of samples [85]. When confronted with a new task, such an agent can rapidly adjust its learning approach by leveraging acquired general knowledge and policies, consequently reducing its reliance on a large volume of samples. However, when there are significant disparities between source and target tasks, the effectiveness of transfer learning might fall short of expectations, and negative transfer may occur [86; 87]. Additionally, the substantial amount of pre-training and the large sample sizes required by meta-learning make it hard to establish a universal learning policy [81; 88]. Large language model-based agents. As large language models have demonstrated impressive emergent capabilities and have gained immense popularity [24; 25; 26; 41], researchers have started to leverage these models to construct AI agents [22; 27; 28; 89]. Specifically, they employ LLMs as the primary component of the brain or controller of these agents and expand their perceptual and action space through strategies such as multimodal perception and tool utilization [90; 91; 92; 93; 94] (a minimal agent loop of this kind is sketched below). These LLM-based agents can exhibit reasoning and planning abilities comparable to symbolic agents through techniques like Chain-of-Thought (CoT) prompting and problem decomposition [95; 96; 97; 98; 99; 100; 101]. They can also acquire interactive capabilities with the environment, akin to reactive agents, by learning from feedback and performing new actions [102; 103; 104]. Similarly, large language models undergo pre-training on large-scale corpora and demonstrate the capacity for few-shot and zero-shot generalization, allowing for seamless transfer between tasks without the need to update parameters [41; 105; 106; 107]. LLM-based agents have been applied to various real-world scenarios, | 2309.07864#15 | 2309.07864#17 | 2309.07864 | [
"2305.08982"
] |
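To illustrate the LLM-as-controller pattern referenced above, here is a minimal, hypothetical agent loop: the LLM produces a chain-of-thought plus either a tool call or a final answer, and the loop feeds tool results back as new observations. The call_llm function, the tool registry, and the JSON output format are assumptions for illustration, not an API from the surveyed systems.

```python
# Minimal LLM-as-controller agent loop (illustrative; call_llm, the tool set,
# and the output format are hypothetical placeholders, not a real API).
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned decision so the sketch runs.
    return json.dumps({"thought": "demo reasoning", "answer": "stub answer"})

TOOLS = {
    "search": lambda query: f"(stub) results for {query!r}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        prompt = (
            f"Task: {task}\n"
            f"Observations so far: {observations}\n"
            "Think step by step, then reply with JSON containing either "
            "a tool call (thought, tool, args) or a final answer (thought, answer)."
        )
        decision = json.loads(call_llm(prompt))     # chain-of-thought + chosen action
        if "answer" in decision:                    # textual output: task finished
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])               # tool use
        observations.append({"tool": decision["tool"], "result": result})  # feedback

    return "max steps reached"

print(run_agent("Find the capital of France"))
```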
2309.07864#17 | The Rise and Potential of Large Language Model Based Agents: A Survey | Similarly, large language models undergo pre-training on large-scale corpora and demonstrate the capacity for few-shot and zero-shot generalization, allowing for seamless transfer between tasks without the need to update parameters [41; 105; 106; 107]. LLM-based agents have been applied to various real-world scenarios, such as software development [108; 109] and scientific research [110]. Due to their natural language comprehension and generation capabilities, they can interact with each other seamlessly, giving rise to collaboration and competition among multiple agents [108; 109; 111; 112]. Furthermore, research suggests that allowing multiple agents to coexist can lead to the emergence of social phenomena [22]. # 2.3 Why is LLM suitable as the primary component of an Agent' | 2309.07864#16 | 2309.07864#18 | 2309.07864 | [
"2305.08982"
] |