Columns: id · title · content · prechunk_id · postchunk_id · arxiv_id · references
2308.02773#21
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Figure 4: Cases of retrieval-augmented open QA and Socratic teaching.

| Model | STEM | Social Science | Humanities | Others | Avg (hard) | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | 67.1 | 77.6 | 64.5 | 67.8 | 54.9 | 68.7 |
| ChatGPT | 52.9 | 61.8 | 50.9 | 53.6 | 41.4 | 54.4 |
| Baichuan-13B | 47.0 | 66.8 | 57.3 | 49.8 | 36.7 | 53.6 |
| InternLM-7B | 48.0 | 67.4 | 55.4 | 45.8 | 37.1 | 52.8 |
| ChatGLM2-6B | 48.6 | 60.5 | 51.3 | 49.8 | 37.1 | 51.7 |
| WestlakeLM-19B | 41.6 | 51.0 | 44.3 | 44.5 | 34.9 | 44.6 |
| Baichuan-7B | 38.2 | 52.0 | 46.2 | 39.3 | 31.5 | 42.8 |
| Chinese-Alpaca-33B | 37.0 | 51.6 | 42.3 | 40.3 | 30.3 | 41.6 |
| Chinese-Alpaca-13B | 31.6 | 37.2 | 33.6 | 32.8 | 27.3 | 33.3 |
| EduChat | 36.2 | 50.7 | 42.9 | 37.7 | 28.3 | 40.7 |
| EduChat (w Retrieval) | 43.5 | 59.3 | 53.7 | 46.6 | 33.1 | 49.3 |

For emotional support, EduChat can interact like a psychological counselor rather than giving generic advice. Due to space limitations, we provide more cases of psychology-based emotional support and fine-grained essay assessment in the Appendix (Figure 5).
2308.02773#20
2308.02773#22
2308.02773
[ "2302.13971" ]
2308.02773#22
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Table 1: Results of C-Eval.

similar parameter scales, such as Chinese-Alpaca-13B and WestlakeLM. Notably, both EduChat and Chinese-Alpaca-13B are built on the LLaMA-13B base model, yet EduChat outperforms Chinese-Alpaca-13B by over seven points. Furthermore, our integration of retrieval into LLMs proves to be highly effective, demonstrating the power of our retrieval-augmented open QA technique in enhancing model performance.

# 6.2 Case Studies

Figure 4 shows cases of EduChat on retrieval-augmented open QA and Socratic teaching. EduChat can provide precise answers with retrieved relevant information, and it learns to guide the student to solve problems step by step, like a teacher.

# 7 Conclusion

In this paper, we introduce EduChat, an LLM-based chatbot system for intelligent education. Our goal is to provide personalized, fair, and compassionate support to teachers, students, and parents. By leveraging psychology and education theories, we enhance educational functions such as open QA, essay assessment, Socratic teaching, and emotional support. Through pre-training on an educational corpus and fine-tuning with task-specific instructions, EduChat demonstrates strong performance on the C-Eval benchmark. Overall, EduChat exhibits great potential for revolutionizing intelligent education. In future work, we aim to extend EduChat with more functions, such as career planning, course guidance, and question generation.
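The retrieval-augmented open QA skill pairs a retriever with the chat model. The excerpt does not specify EduChat's actual retrieval stack, so the sketch below is only a rough illustration: it uses a Sentence-BERT-style encoder (cited in the references) to rank a small corpus by cosine similarity and prepends the top passages to the prompt. The corpus, the encoder name, and the `generate` stub are assumptions.

```python
# Minimal sketch of retrieval-augmented open QA in this spirit.
# Assumptions: a Sentence-BERT-style encoder for retrieval; the corpus and the
# `generate` stub stand in for EduChat's actual knowledge base and LLM backend.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works here

corpus = [
    "An arithmetic sequence has a constant difference d between consecutive terms.",
    "The n-th term of an arithmetic sequence is a_n = a_1 + (n - 1) * d.",
    "The sum of the first n terms is S_n = n * (a_1 + a_n) / 2.",
]
corpus_emb = encoder.encode(corpus, convert_to_tensor=True)

def retrieve(question: str, k: int = 2) -> list:
    """Return the k corpus passages most similar to the question."""
    q_emb = encoder.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, corpus_emb)[0]
    top = scores.topk(k).indices.tolist()
    return [corpus[i] for i in top]

def generate(prompt: str) -> str:
    # Placeholder for the chat model call; swap in your own LLM client here.
    raise NotImplementedError

def answer(question: str) -> str:
    """Assemble a retrieval-augmented prompt and pass it to the chat model."""
    passages = retrieve(question)
    prompt = (
        "Answer the student's question using the retrieved material.\n"
        + "\n".join(f"- {p}" for p in passages)
        + f"\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```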
2308.02773#21
2308.02773#23
2308.02773
[ "2302.13971" ]
2308.02773#23
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
# References

Alexis Baladón, Ignacio Sastre, Luis Chiruzzo, and Aiala Rosá. 2023. RETUYT-InCo at BEA 2023 shared task: Tuning open-source LLMs for generating teacher responses. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 756–765, Toronto, Canada. Association for Computational Linguistics.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.

Jiaxi Cui, Zongjia Li, Yang Yan, Bohua Chen, and Li Yuan. 2023.
2308.02773#22
2308.02773#24
2308.02773
[ "2302.13971" ]
2308.02773#24
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Chatlaw: Open-source legal large language model with integrated external knowledge bases. ArXiv, abs/2306.16092.

Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335.

Albert Ellis. 1991. The revised ABC's of rational-emotive therapy (RET). Journal of Rational-Emotive and Cognitive-Behavior Therapy, 9(3):139–172.

Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023.
2308.02773#23
2308.02773#25
2308.02773
[ "2302.13971" ]
2308.02773#25
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.

Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, and Xiangang Li. 2023. Exploring the impact of instruction data scaling on large language models: An empirical study on real-world use cases. arXiv preprint arXiv:2303.14742.

Yunxiang Li, Zihan Li, Kai Zhang, Ruilong Dan, Steve Jiang, and You Zhang. 2023.
2308.02773#24
2308.02773#26
2308.02773
[ "2302.13971" ]
2308.02773#26
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
ChatDoctor: A medical chat model fine-tuned on a large language model Meta-AI (LLaMA) using medical domain knowledge. Cureus, 15(6).

Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3469–3483, Online. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084.

John Schulman, B. Zoph, C. Kim, J. Hilton, J. Menick, J. Weng, J. F. C. Uribe, L. Fedus, L. Metz, M. Pokorny, et al. 2022.
2308.02773#25
2308.02773#27
2308.02773
[ "2302.13971" ]
2308.02773#27
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
ChatGPT: Optimizing language models for dialogue. In OpenAI blog.

Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. 2020. Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, volume 33, pages 3008–3021.

Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, and Xipeng Qiu. 2023.
2308.02773#26
2308.02773#28
2308.02773
[ "2302.13971" ]
2308.02773#28
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
MOSS: Training conversational language models from synthetic data.

Anaïs Tack, Ekaterina Kochmar, Zheng Yuan, Serge Bibauw, and Chris Piech. 2023. The BEA 2023 shared task on generating AI teacher responses in educational dialogues. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 785–795, Toronto, Canada. Association for Computational Linguistics.
2308.02773#27
2308.02773#29
2308.02773
[ "2302.13971" ]
2308.02773#29
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b.
2308.02773#28
2308.02773#30
2308.02773
[ "2302.13971" ]
2308.02773#30
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Llama 2: Open foundation and fine-tuned chat models. ArXiv, abs/2307.09288.

Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations.

Hongyang Yang, Xiao-Yang Liu, and Chris Wang. 2023.
2308.02773#29
2308.02773#31
2308.02773
[ "2302.13971" ]
2308.02773#31
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
FinGPT: Open-source financial large language models. ArXiv, abs/2306.06031.

Ge Zhang, Yemin Shi, Ruibo Liu, Ruibin Yuan, Yizhi Li, Siwei Dong, Yu Shu, Zhaoqun Li, Zekun Wang, Chenghua Lin, Wenhao Huang, and Jie Fu. 2023a. Chinese open instruction generalist: A preliminary release.

Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhihong Chen, Jianquan Li, Guimin Chen, Xiangbo Wu, Zhiyi Zhang, Qingying Xiao, Xiang Wan, Benyou Wang, and Haizhou Li. 2023b. HuatuoGPT, towards taming language model to be a doctor. ArXiv, abs/2305.15075.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023a.
2308.02773#30
2308.02773#32
2308.02773
[ "2302.13971" ]
2308.02773#32
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
LIMA: Less is more for alignment.

Jie Zhou, Pei Ke, Xipeng Qiu, Minlie Huang, and Junping Zhang. 2023b. ChatGPT: Potential, prospects, and limitations. Frontiers of Information Technology & Electronic Engineering, pages 1–6.

[Table 2: Illustration of system prompt design. Rows cover the functions Retrieval-Augmented Open QA, Fine-grained Essay Assessment, Psychology-based Emotional Support, and Socratic Teaching; columns mark which tools (Retrieval, Self-check) and skills (General, Psychology, Socrates) are enabled (True/False) for each function.]
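The system prompt design in Table 2 amounts to toggling tool and skill flags per function. The sketch below illustrates one way such a prompt could be assembled; the flag values and prompt wording are assumptions for illustration, not EduChat's released configuration.

```python
# Illustrative sketch of composing a system prompt from per-function tool/skill flags,
# in the spirit of Table 2. The dictionary values and prompt text are assumptions.
FUNCTION_CONFIG = {
    "retrieval_augmented_open_qa": {"retrieval": True, "self_check": True, "skill": "general"},
    "fine_grained_essay_assessment": {"retrieval": False, "self_check": True, "skill": "general"},
    "psychology_based_emotional_support": {"retrieval": False, "self_check": False, "skill": "psychology"},
    "socratic_teaching": {"retrieval": False, "self_check": False, "skill": "socrates"},
}

SKILL_INSTRUCTIONS = {
    "general": "Answer the student's question clearly and accurately.",
    "psychology": "Respond like a supportive counselor, acknowledging the student's feelings.",
    "socrates": "Do not give the answer directly; guide the student with step-by-step questions.",
}

def build_system_prompt(function: str) -> str:
    """Assemble a system prompt according to the tool/skill flags of one function."""
    cfg = FUNCTION_CONFIG[function]
    lines = ["You are EduChat, an educational assistant."]
    if cfg["retrieval"]:
        lines.append("Relevant retrieved passages will be provided; ground your answer in them.")
    if cfg["self_check"]:
        lines.append("Before answering, silently check your answer for factual and logical errors.")
    lines.append(SKILL_INSTRUCTIONS[cfg["skill"]])
    return "\n".join(lines)

print(build_system_prompt("socratic_teaching"))
```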
2308.02773#31
2308.02773#33
2308.02773
[ "2302.13971" ]
2308.02773#33
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
# Psychology-based Emotional Support

[The original Chinese dialogue from Figure 5 is shown here; its English translation follows.]

User: Hello, I feel quite stressed recently.

EduChat:
2308.02773#32
2308.02773#34
2308.02773
[ "2302.13971" ]
2308.02773#34
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Hello, I can understand your feelings. Stress is an emotion we often encounter in life, but if we are under high pressure for a long time, it will have negative effects on our physical and mental health. Can you tell me some of the reasons for your increased stress? User: I've been studying hard recently, but my grades aren't improving. EduChat: Academic stress is indeed a common source of stress. Improving grades requires time and effort, but if you feel a lot of pressure, it may be because you have high expectations for yourself, or because you feel that your efforts are not being rewarded. You can try to adjust your mindset, don't be too demanding of yourself, and believe that your efforts will be rewarded. In addition, you can try some relaxation methods, such as sports, listening to music, chatting with friends, etc., to relieve stress.
2308.02773#33
2308.02773#35
2308.02773
[ "2302.13971" ]
2308.02773#35
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
User: Okay, thank you.

EduChat: You're welcome, I hope my suggestions can help you. If you have any other questions or need further help, you can find me at any time.

# Multi-grained Essay Assessment

User: First Encounter with Fireflies

[The original Chinese essay from Figure 5 is shown here; its English translation follows.]
2308.02773#34
2308.02773#36
2308.02773
[ "2302.13971" ]
2308.02773#36
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Today, my mother said she would take me to see the fireflies at Liu Hui Pavilion after work. I looked forward to it all day, and finally, it was night. As we walked into Zijinshan Road, the air was unusually moist and fresh. The trees on both sides of the road were tall and dense, and their leaves were rustling pleasantly. The road was quiet, and occasionally, we could see people leisurely strolling and others sweating heavily from their night runs. Suddenly, a motorcycle whizzed past, breaking the tranquility of the night. Cicadas, frogs, and cuckoos composed the summer choir, playing the moving melody of nature. As we turned into Liu Hui Pavilion, it was pitch black in front of us - the fireflies had gathered. Not far away, we could hear the sounds of children's surprise, excitement, and laughter. We walked a short distance with our flashlight and saw something twinkling in the bushes, appearing and disappearing.
2308.02773#35
2308.02773#37
2308.02773
[ "2302.13971" ]
2308.02773#37
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
"Mom! I see the fireflies!" I was extremely excited. Under the moonlight, some fireflies slowly rose from the bushes, floating mid-air, turning into "little twinkling stars." At that moment, my mother said, "A firefly has landed on your head!" Just then, another firefly flew from mid-air and landed on my hand. [held my breath, held my hand steady, and stood still, until this little elf flew away. I watched for a long time until my mother reminded me that it was time to go home. I reluctantly said goodbye to the fireflies. Unexpectedly, the fireflies gave me a surprise. On the way back, there were often two or three fireflies, like lantern bearers leading us along the roadside. When we reached the Post Office Museum, a group of fireflies were dancing in the southern bushes, as if to say goodbye to me. This firefly-watching experience made my night both wonderfull and unforgettable.
2308.02773#36
2308.02773#38
2308.02773
[ "2302.13971" ]
2308.02773#38
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Please kindly rate the composition above.

[EduChat's original Chinese assessment from Figure 5 is shown here; its English translation follows.]

# EduChat:

+ Content Evaluation: 7/8 This article revolves around the theme of "Beautiful Fireflies". The content of the essay is specific, clear, and vividly portrays details, expressing a wealth of emotions that are sincere and touching. There are numerous wonderful sentences throughout the text that add a literary charm to the article.

+ Expression Evaluation: 8/8 The fluidity of this article is high, demonstrating the young author's logical thinking ability. It has clear and accurate sentences, showing excellent expressive ability.

+ Highlight Sentence:
2308.02773#37
2308.02773#39
2308.02773
[ "2302.13971" ]
2308.02773#39
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
Cicadas, frogs, and cuckoos formed a choir of summer, singing the touching melody of nature.

+ Paragraph Evaluation: Second Paragraph: The author unfolds the paragraph around the firefly viewing at Liu Hui Pavilion. It is vivid and smooth, with clear writing ideas. Third Paragraph: The paragraph is too long and the topics within it are not unified enough; it could be divided into multiple paragraphs. The excellent description of details within the paragraph makes the narrative vivid and real. Fourth Paragraph: The concluding paragraph revisits the beautiful fireflies, echoing the theme. It is vivid, smooth, and has clear writing ideas.

+ Overall Evaluation: This article is excellent. The essay revolves around "Beautiful Fireflies". The text closely follows the topic, with unified and complete content. In writing, more attention should be paid to staying on topic, ensuring that the text is closely related to the theme and that every part serves the central idea. In terms of subject matter, this is a descriptive article, which requires focusing on how the scenery is described and whether the description is rich and real. In terms of method, this is a realistic article that strives for rich descriptions that conform to narrative logic and are lively and vivid.

Figure 5: Cases of psychology-based emotional support and fine-grained essay assessment.
2308.02773#38
2308.02773
[ "2302.13971" ]
2308.02490#0
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
arXiv:2308.02490v3 [cs.AI] 24 Oct 2023

# MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities

# Weihao Yu1* Zhengyuan Yang2* Linjie Li2 Jianfeng Wang2 Kevin Lin2 Zicheng Liu2 Xinchao Wang1† Lijuan Wang2†

# 1National University of Singapore 2Microsoft Azure AI

[email protected] [email protected] {zhengyang,lindsey.li,jianfw,keli,zliu,lijuanw}@microsoft.com

# Abstract

We propose MM-Vet1, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.

# Introduction
2308.02490#1
2308.02490
[ "2302.13971" ]
2308.02490#1
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
The breakthroughs in large language models (LLMs) [11, 59, 20, 5, 73, 36] bring generalist AI models that can solve a wide range of complicated natural language tasks, many approaching human-expert-level performance [59, 13]. Large multimodal models (LMMs) aim to achieve even stronger general intelligence by extending LLMs with multimodal inputs. Since more than 80% of human perception, learning, cognition, and activities are mediated through vision [65], it is natural to start the exploration by equipping LLMs with "eyes." One main thread of LMM work, represented by Frozen [75], Flamingo [4], PaLM-E [25], and GPT-4 [59], extends LLMs with visual understanding capability via end-to-end tuning. There is also exploration [83, 89, 85, 70, 30] of the modular combination of LLMs and image-to-text vision-language models. Recently, thanks to the open-sourcing of powerful LLMs like LLaMA [73], more open-sourced LMMs have been built, including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], and many more [33, 52, 87]. These studies showcase the intriguing ability to solve various complicated
2308.02490#0
2308.02490#2
2308.02490
[ "2302.13971" ]
2308.02490#2
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
*Equal contribution. †Corresponding authors.

1Short for "Multimodal Veterinarian."

[Figure 1 panels (a)-(f): example image-question pairs from conventional VL benchmarks and MM-Vet, each listing the question, ground truth, and required capabilities.]
2308.02490#1
2308.02490#3
2308.02490
[ "2302.13971" ]
2308.02490#3
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Figure 1: Required capabilities of different benchmarks. Different from conventional VL benchmarks that only require one or two capabilities, MM-Vet focuses on the integration of different core VL capabilities, including recognition, OCR, knowledge, language generation, spatial awareness, and math.

multimodal tasks, such as open-world recognition, multimodal knowledge and commonsense, scene text understanding, and so on.

Despite the promising qualitative results on LMMs' capabilities, it remains unclear how to systematically evaluate the showcased complicated multimodal tasks and what the relationships among the evaluated tasks are, which is the first step in developing a quantitative evaluation benchmark. As shown in Figure 1, existing vision-language benchmarks [6, 17, 72] focus on simple Vision-Language (VL) tasks that require one or two specific capabilities, such as recognition, language generation, or OCR, but fall short in benchmarking more complicated tasks. Alternatively, we examine the arbitrary integration of core VL capabilities for complicated tasks, with the insight that the intriguing ability to solve complicated multimodal tasks can be achieved by a generalist model mastering and integrating different core capabilities. Following this insight, we propose a new benchmark for evaluating LMMs, namely MM-Vet. MM-Vet defines six core VL capabilities, including recognition, OCR, knowledge, language generation, spatial awareness, and math, which integrate to solve various complicated multimodal tasks. MM-Vet contains 16 tasks for quantitative evaluation. For example, in Figure 1(d), answering the question "What will the girl on the right write on the board?" in MM-Vet requires recognizing the genders of the three kids, locating the queried girl spatially, recognizing the scene text written by the girl, and finally calculating the result.

Other than the evaluation category definition, the evaluation metrics are another challenge in benchmark development, given the diverse answer styles and question types. Specifically: (1) The desired outputs in different multimodal tasks have diverse formats, e.g., Figure 1(d)'s math problem can be
2308.02490#2
2308.02490#4
2308.02490
[ "2302.13971" ]
2308.02490#4
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
answered by a single word, while outputs for the essay writing question are hundreds of words long; (2) The core aspect to evaluate varies across tasks, e.g., text generation focuses more on text quality, while recognition can be considered correct once the key concept is recognized. Most integrated tasks would require comprehensive evaluations from multiple dimensions. Inspired by recent NLP studies [19, 53, 28] that use LLMs for model evaluation, we propose an LLM-based evaluator as the evaluation metric for open-ended model outputs. As shown in Table 1, we prompt GPT-4 [59] with few-shot evaluation prompts to obtain an evaluation score ranging from 0 to 1. Instead of manually defining the possible answer styles and question types, we include different sample types as few-shot examples and let LLMs infer the scoring criteria automatically. Such metric design eases the future extension to more question types, such as box localization [16, 82, 77].

MM-Vet's evaluation category and metric designs allow users to obtain capability insights for different LMMs. Such model analyses are more informative than a single overall ranking, which highly depends on the dataset sample composition and might be biased. We evaluate two sets of multimodal systems, i.e., the end-to-end tuned LMMs including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], etc., and the LLM-tool-using systems [85, 70, 30, 38] such as MM-ReAct [85]. Despite not knowing model details, we also evaluate industry solutions such as Bard [34]. We first discuss the capability analyses of these two system paradigms and the representative models. We then dive deeper into the open-sourced LMMs and examine how the training data, vision encoder, and LLM selection influence the performance on different capabilities.

Our contributions are summarized as follows.

• We propose MM-Vet to evaluate LMMs' ability on complicated multimodal tasks. MM-Vet defines 16 emergent tasks of interest, integrated from the six defined core VL capabilities.

• We propose an LLM-based evaluator for open-ended outputs of LMMs, which unifies the evaluation across different answer styles and question types. The evaluation metrics ensure the thorough evaluation of both the factual correctness and text quality of the responses.
2308.02490#3
2308.02490#5
2308.02490
[ "2302.13971" ]
2308.02490#5
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
• We benchmark representative LMMs on MM-Vet, revealing the relative strengths and weaknesses of different system paradigms and models, as summarized in Section 4.5.

# 2 Related work

Multimodal models. Vision-language models [17, 35, 56, 18, 48, 40, 79, 76, 82, 29] approach multimodal intelligence by jointly understanding and generating vision and language signals. Inspired by the impressive quality and genericity of recent large language models (LLMs) [12, 59, 20, 73], researchers explore large multimodal models (LMMs) that seamlessly integrate different vision-language capabilities to solve complicated multimodal tasks. In approaching such multimodal generalist systems, one direction is to extend LLMs with multi-sensory ability, as in the pioneering works Frozen [75], Flamingo [4], PaLM-E [25], and GPT-4 [59]. Recent open-sourced LLMs [91, 73, 64] also facilitate various research studies including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], and so on [33, 52, 87]. On the other hand, multimodal agents [85, 70, 38, 30] explore chaining different vision tools with LLMs [12, 59] to achieve integrated vision-language capabilities.

VL benchmarks. Classic VL benchmarks focus on specific capabilities of interest, such as visual recognition [35] and image description [17, 3], as well as specialized capabilities such as scene text understanding [72, 71, 86], commonsense reasoning [88], and outside knowledge [58]. The recent development of generalist LMMs poses a strong need for modernized VL benchmarks, which contain complicated multimodal tasks that require integrated VL capabilities. Our MM-Vet is most related to the concurrent evaluation studies [27, 54, 45, 80, 50] such as MME and MMBench, which design comprehensive evaluation samples to facilitate LMM evaluation. One major difference is that MM-Vet defines and studies the integrated VL capabilities, allowing the evaluation to provide insights beyond the overall model ranking.

LLM-based evaluation.
2308.02490#4
2308.02490#6
2308.02490
[ "2302.13971" ]
2308.02490#6
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
MM-Vet adopts the open-ended LLM-based evaluator, allowing the evaluation across answer styles and question types without requiring binary or multiple answer choices. The technique of prompting LLMs for model evaluation is related to explorations in NLP [19, 53, 28]. We show that the technique extends well to multimodal tasks, and present a unified prompt to evaluate samples with different answer styles and question types.

Figure 2: MM-Vet proportion of capabilities. (a) The proportion of each capability (Rec, OCR, Know, Gen, Spat, Math). The sum of the proportions is larger than 100% because most samples require more than one capability. (b) The proportion of capability integrations. The sum of the proportions is equal to 100%.
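The proportions reported in Figure 2 follow directly from the per-sample capability annotations. A minimal sketch of how such statistics can be computed is shown below; the sample annotations are made up for illustration.

```python
# Sketch: compute Figure-2-style statistics from per-sample capability annotations.
# The example annotations below are illustrative, not the actual MM-Vet labels.
from collections import Counter

samples = [
    {"question": "q1", "capabilities": {"rec", "know", "gen"}},
    {"question": "q2", "capabilities": {"ocr", "spat", "math"}},
    {"question": "q3", "capabilities": {"rec"}},
]

n = len(samples)

# (a) Proportion of each capability; sums to >100% because samples need several capabilities.
per_capability = Counter(cap for s in samples for cap in s["capabilities"])
for cap, count in per_capability.most_common():
    print(f"{cap}: {100 * count / n:.1f}%")

# (b) Proportion of each exact capability integration; sums to 100%.
per_integration = Counter(frozenset(s["capabilities"]) for s in samples)
for integration, count in per_integration.most_common():
    print(f"{'+'.join(sorted(integration))}: {100 * count / n:.1f}%")
```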
2308.02490#5
2308.02490#7
2308.02490
[ "2302.13971" ]
2308.02490#7
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
# 3 MM-Vet

# 3.1 Data collection

Our aim is to develop a multimodal benchmark that requires comprehensive capabilities, corresponding to realistic scenarios an AI agent might encounter. Consider, for instance, this scenario: Awakening from slumber, you reach out for your smartphone (recognition capability) to check the current time (OCR capability). Today, your plan is to visit a new grocery store that you have not been to. Guided by the information that the grocery store is situated directly opposite the stadium and next to the cinema (spatial awareness), you manage to locate it successfully. Keeping in mind your doctor's advice to shed some weight, you consciously steer clear of high-calorie food and choose milk, vegetables, and fruits instead (knowledge capability). In the dairy aisle, you're faced with a choice between two types of pure milk. The first is 4 dollars for one liter with a 20% discount, while the second is 7 dollars for 1.5 liters with a 25% discount. After some quick arithmetic (4 × 0.8 = 3.2 dollars per liter versus 7 × 0.75 / 1.5 = 3.5 dollars per liter), you find the former is cheaper (math capability) and opt for the one-liter package.
2308.02490#6
2308.02490#8
2308.02490
[ "2302.13971" ]
2308.02490#8
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
After shopping, you walk past the cinema and find a person pointing to the poster to introduce a new movie (language generation). From the scenarios of interest, we summarize the following six core VL capabilities for evaluation, with corresponding MM-Vet examples shown in Tables 10-15.

• Recognition (Rec). Recognition refers to the general visual recognition capability, including recognizing scenes, objects, object attributes (color, material, shape, etc.), counting, and various other high-level visual recognition tasks in computer vision.

• Knowledge (Know). The knowledge category covers various knowledge-related capabilities, including social and visual commonsense knowledge, encyclopedic knowledge, and time-sensitive knowledge like news. This capability necessitates that the model not only possesses such knowledge, but also effectively utilizes it to solve complicated tasks as required.
2308.02490#7
2308.02490#9
2308.02490
[ "2302.13971" ]
2308.02490#9
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
• OCR. Optical character recognition (OCR) refers to the scene text understanding and reasoning capability. The models are tested on reading the scene text in images and reasoning over the texts to solve various tasks.

• Spatial awareness (Spat). Spatial awareness embodies a diverse spectrum of capabilities related to understanding space, including the comprehension of the spatial relationships among object and scene text regions.

• Language generation (Gen). Language generation is a vital ability that empowers models to articulate their responses in a clear, engaging, and informative manner. We use questions that demand more extended answers to evaluate language generation capacity.

• Math. Math evaluates the model's arithmetic capability in solving either written equations or problems in the wild.

In real-world scenarios, various complicated multimodal tasks require the integration of different core VL capabilities. For instance, explaining visual jokes as shown in Table 10(a) requires recognition, knowledge of humor, and language generation; reading documents and solving math problems as shown in Table 11(a) takes OCR, spatial awareness, and math; and answering exam questions given images as shown in Table 14(b) needs OCR, knowledge, and spatial awareness. To solve these complicated tasks, LMMs are expected to seamlessly integrate different VL capabilities. Therefore, it is crucial to establish a benchmark that evaluates the performance of these integrated abilities within LMMs.

To build the benchmark, we have gathered 187 images from various online sources and asked 205 questions, each of which requires one or more capabilities to answer. As shown in Tables 10-15, these questions are varied in type and entail open-ended responses of differing lengths. The ground truths for 155 questions are human-annotated, while the answers for the remaining 50 questions were gathered from the Internet. In addition to the 187 images, ten extra images with high-quality questions are collected from VCR [88], with the questions and answers modified to an open-ended answering format. Another three images are from ChestX-ray14 [78] to obtain corresponding medical expert knowledge. In total, MM-Vet contains 200 images and 218 questions (samples), all paired with their respective ground truths. For each question, we have also identified the capabilities required to answer it and displayed this information statistically in Figure 2.

# 3.2 LLM-based evaluator for open-ended model outputs
2308.02490#8
2308.02490#10
2308.02490
[ "2302.13971" ]
2308.02490#10
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Questions and expected responses in MM-Vet are designed to be open-ended to cover diverse real-world scenarios. This naturally poses a great challenge in terms of model evaluation and metric design. Drawing inspiration from recent NLP studies [19, 92] that utilize LLMs for open-ended evaluations, we leverage GPT-4 to assist evaluation. As shown in Table 1, we craft a few-shot prompt for model evaluation. The few-shot design allows us to define the scoring metrics via in-context examples and supports easy extension onto new problem sets. Specifically, our implemented prompt incorporates five in-context examples with open-ended short answers and two examples with long answers. We cover examples that are fully correct (i.e., 1.0) or incorrect (i.e., 0.0), as well as examples used to define different types of "partially correct" responses.

The LLM-based evaluator allows any style of model output to be evaluated with a unified, consistent metric. Furthermore, it also supports easy adaptation to diverse question types and answer styles by simply modifying the evaluation examples. By inputting the prompt, GPT-4 automatically generates a score for each sample, conditioned on each sample's input question, ground truth, and model output. The score for each sample ranges from 0 to 1. The total score is computed by

$$S = \frac{\sum_{i=1}^{N} s_i}{N} \times 100\%, \quad (1)$$

where $s_i$ is the score of sample $i$, and $N$ is the number of samples. The score for each capability or capability integration is obtained similarly by

$$S_c = \frac{\sum_{i \in C} s_i}{N_c} \times 100\%, \quad (2)$$

where $C$ is the set of samples requiring a specific capability or capability integration, and $N_c$ is the number of samples in the set.

Table 1: Few-shot prompt for evaluating model outputs using GPT-4, where Q is a sample's question, G is the ground truth, and P is the model output for the sample. In the prompt, there are examples with short and long open-ended answers, enabling the evaluation of diverse answer styles. Taking the prompt filled with Q, G, and P, GPT-4 will generate a soft grading score from 0 to 1.
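As a concrete illustration of equations (1) and (2), the short sketch below aggregates per-sample evaluator scores into the total score and a per-capability score. The sample data are made up for illustration.

```python
# Sketch: aggregate per-sample evaluator scores into the total score (Eq. 1)
# and per-capability scores (Eq. 2). Scores and capability tags are illustrative.
samples = [
    {"score": 1.0, "capabilities": {"rec", "know", "gen"}},
    {"score": 0.5, "capabilities": {"ocr", "math"}},
    {"score": 0.0, "capabilities": {"ocr", "spat"}},
]

def total_score(samples):
    # Eq. (1): mean of per-sample scores, expressed as a percentage.
    return 100.0 * sum(s["score"] for s in samples) / len(samples)

def capability_score(samples, capability):
    # Eq. (2): same mean, restricted to samples requiring the given capability.
    subset = [s for s in samples if capability in s["capabilities"]]
    return 100.0 * sum(s["score"] for s in subset) / len(subset)

print(f"S = {total_score(samples):.1f}%")
print(f"S_ocr = {capability_score(samples, 'ocr'):.1f}%")
```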
2308.02490#9
2308.02490#11
2308.02490
[ "2302.13971" ]
2308.02490#11
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Compare the ground truth and prediction from AI models, to give a correctness score for the prediction. <AND> in the ground truth means it is totally right only when all elements in the ground truth are present in the prediction, and <OR> means it is totally right when any one element in the ground truth is present in the prediction. The correctness score is 0.0 (totally wrong), 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, or 1.0 (totally right). Just complete the last space of the correctness score.

Question | Ground truth | Prediction | Correctness
--- | --- | --- | ---
What is x in the equation? | -1 <AND> -5 | x = 3 | 0.0
What is x in the equation? | -1 <AND> -5 | x = -1 | 0.5
What is x in the equation? | -1 <AND> -5 | x = -5 | 0.5
What is x in the equation? | -1 <AND> -5 | x = -5 or 5 | 0.5
What is x in the equation? | -1 <AND> -5 | x = -1 or x = -5 | 1.0
Can you explain this meme? | This meme is poking fun at the fact that the names of the countries Iceland and Greenland are misleading. Despite its name, Iceland is known for its beautiful green landscapes, while Greenland is mostly covered in ice and snow. The meme is saying that the person has trust issues because the names of these countries do not accurately represent their landscapes. | The meme talks about Iceland and Greenland.
2308.02490#10
2308.02490#12
2308.02490
[ "2302.13971" ]
2308.02490#12
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
It's pointing out that despite their names, Iceland is not very icy and Greenland isn't very green. | 0.4
Can you explain this meme? | This meme is poking fun at the fact that the names of the countries Iceland and Greenland are misleading. Despite its name, Iceland is known for its beautiful green landscapes, while Greenland is mostly covered in ice and snow. The meme is saying that the person has trust issues because the names of these countries do not accurately represent their landscapes. | The meme is using humor to point out the misleading nature of Iceland's and Greenland's names. Iceland, despite its name, has lush green landscapes while Greenland is mostly covered in ice and snow. The text "This is why I have trust issues" is a playful way to suggest that these contradictions can lead to distrust or confusion. The humor in this meme is derived from the unexpected contrast between the names of the countries and their actual physical characteristics. | 1.0
Q | G | P |

# 4 Evaluation results

# 4.1 Experiment settings

We utilize MM-Vet to evaluate two types of LMMs: (1) end-to-end tuned LMMs (OpenFlamingo [4, 9, 8], BLIP-2 [46], LLaVA [51], MiniGPT-4 [93], Otter [44], and InstructBLIP [23]); (2) LLM-tool-using methods (MM-ReAct [85] and Transformers Agent [38]). A summary of these methods is shown in Table 2.

As shown in Table 1, for each sample, we fill the prompt template with its question, ground truth, and the output from a specific LMM. Feeding the filled prompt to GPT-4, GPT-4 generates a score from 0 to 1 for the sample. We find that GPT-4's outputs still exhibit variance, even though the temperature is set to 0. Therefore, we use GPT-4 to evaluate the outputs of LMMs 5 times. Due to the space limit, we report average scores for capabilities/capability integrations, and the average as well as the variance for the total score.

# 4.2 Result analyses

The main results of different methods are shown in Table 3 regarding each capability, and Table 4 for each capability integration.

Table 2:
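The grading loop described here is straightforward to script. Below is a hedged sketch of one way to run it with the OpenAI Python client: the prompt template is the Table 1 prompt (abbreviated here), the model name and client usage are assumptions to adapt to your own setup, and each sample is graded 5 times at temperature 0 and averaged, as described above.

```python
# Sketch of the LLM-based grading loop: fill the Table 1 prompt with (Q, G, P),
# query the evaluator model five times at temperature 0, and average the scores.
# Assumes the `openai` Python client (>=1.0); model name and parsing are illustrative.
import re
from statistics import mean
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT_PROMPT = """Compare the ground truth and prediction from AI models, to give a correctness score for the prediction. ...
Question | Ground truth | Prediction | Correctness
--- | --- | --- | ---
What is x in the equation? | -1 <AND> -5 | x = 3 | 0.0
...
{question} | {ground_truth} | {prediction} | """

def grade(question: str, ground_truth: str, prediction: str, runs: int = 5) -> float:
    prompt = FEW_SHOT_PROMPT.format(
        question=question, ground_truth=ground_truth, prediction=prediction
    )
    scores = []
    for _ in range(runs):
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        text = resp.choices[0].message.content
        match = re.search(r"\d*\.?\d+", text)  # pull the first number, e.g. "0.4"
        scores.append(min(max(float(match.group()), 0.0), 1.0) if match else 0.0)
    return mean(scores)
```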
2308.02490#11
2308.02490#13
2308.02490
[ "2302.13971" ]
2308.02490#13
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Summary of the evaluated LMMs in this report. We consider both the end-to-end tuned models (i.e., OpenFlamingo [4, 9, 8], BLIP-2 [46], LLaVA [51], MiniGPT-4 [93], LLaMA-Adapter v2 [32], Otter [44], InstructBLIP [23]), and the LLM-tool-using systems (i.e., MM-ReAct [85] and Transformers Agent [38]). Method OpenFlamingo-9B [4, 9, 8] Vision CLIP ViT-L/14 [66] Initial models Language MPT-7B [2] Other â Tuning data Multimodal C4 [94] BLIP-2-12B [46] EVA-ViT-G [26] Flan-T5-XXL [21] â 1. COCO [49]; 2. Visual Genome [41]; 3. CC3M [69]; 4. CC12M [15]; 5. SBU [62]; 6. 115M images from the LAION-400M [68]. (CapFilt [47] is used to create synthetic captions for the web images) LLaVA-7B [51] LLaVA-13B [51] CLIP ViT-L/14 [66] Vicuna-7B [92] Vicuna-13B [92] â 1. CC3M [69] Concept-balanced 595K [51]; 2. LLaVA-Instruct-158K [51]. LLaVA-7B (LLaMA-2) [51] LLaVA-13B (LLaMA-2) [51] LLaVA-13B (V1.3, 336px) [51] CLIP ViT-L/336px [66] Vicuna-13B-v1.3 [92] MiniGPT-4-8B [93] LLaMA-2-7B-Chat [74] LLaMA-2-13B-Chat [74] CLIP ViT-L/14 [66] Vicuna-7B [92] EVA-ViT-G [26] MiniGPT-4-14B [93] Vicuna-13B [92] â BLIP-2â
2308.02490#12
2308.02490#14
2308.02490
[ "2302.13971" ]
2308.02490#14
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
s Q-Former [46] 1. LAION /CC/SBU BLIP-Caption Concept-balanced 558K [51]; 2. LLaVA-Instruct-80K [51]. 1. CC3M [69]; 2. CC12M [15]; 3. SBU [62]; 4. LAION-400M [68] 5. Proposed 3,500 aligned image-text pairs [93]. LLaMA-Adapter v2-7B [32] CLIP ViT-L/14 [66] LLaMA-7B [73] â 1. LAION-400M [68]; 2. COYO-700M [14]; 3. Multimodal C4 [94]; 4. SBU [62]; 5. CC12M [15]; 6. COCO [49]; 7. GPT-4-LLM [64]; 8. Tuning data of LLaVA [51] Otter-9B [44] CLIP ViT-L/14 [66] MPT-7B [2] OpenFlamingo-9Bâ s [4, 9, 8] 1. Perceiver Resampler; 2. GATED XATTN-DENSE MIMIC-IT [43] InstructBLIP-8B [23] InstructBLIP-14B [23] EVA-ViT-G [26] Vicuna-7B [92] Vicuna-13B [92] BLIP-2â s Q-Former [46] 1. Tuning data of BLIP-2 [46]; 2. 26 publicly available datasets (transformed into instruction tuning format). Transformers Agent (GPT-4 as agent) [38] â 1. GPT-4 [59]; 2. Flan-T5 [21]; 3. BART [42] 1. Donut [39]; 2. BLIP [47]; 3. ViLT [40]; 4. CLIPSeg [57] 5. Whisper [67]; 6. SpeechT5 [7]; 7. NLLB [22] None MM-ReAct-GPT-3.5 [85] MM-ReAct-GPT-4 [85] â
2308.02490#13
2308.02490#15
2308.02490
[ "2302.13971" ]
2308.02490#15
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
GPT-3.5 [63] GPT-4 [59] 1. Azure Cognitive Services APIs [10] for image captioning, image tagging, dense captioning, OCR and specialized recognition on celebrities, receipts, etc.; 2. Bing search; 3. PAL [31] None Total params 9B 12B 7B 13B 7B 13B 13B 8B 14B 7B 9B 8B 14B Not clear Not clear

# 4.2.1 Regarding each capability

Recognition. The "Recognition" category contains the questions that require recognition capability to answer. Examples are shown in Tables 10(a, b), 11(b), 12(a, b), 13(a, b), 14(a, c), and 15(b). The "Rec" column in Table 3 compares performance on recognition. Among the evaluated models, LLaVA-13B (LLaMA-2) is the best one, obtaining 39.2%. There may be two reasons. First, LLaVA-13B (LLaMA-2) adopts ViT-L/14 [24] from CLIP [66] as its vision model, which is trained on a large amount of data, 400 million image-text pairs. Second, it is surprising that a stronger language model can largely boost recognition performance: LLaVA-13B (LLaMA-2) obtains an improvement of 8.3% over LLaVA-13B (Vicuna-13B). Stronger LLMs may help understand questions better and identify key information from visual inputs. LLaMA-Adapter v2-7B is another strong model in recognition, achieving 38.5%. This outstanding ability may come from its varied and large amounts of tuning data, e.g., LAION-400M [68], COYO-700M [14], Multimodal C4 [94], and the tuning data of LLaVA [51], as shown in Table 2. Besides, InstructBLIP-8B [23] attains 32.4%. As shown in Table 2, the tuning data of InstructBLIP includes 26 publicly available datasets, which contain recognition-heavy datasets like VQA v2 [35] and GQA [37].
2308.02490#14
2308.02490#16
2308.02490
[ "2302.13971" ]
2308.02490#16
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
The promising capability of InstructBLIP in recognition may benefit from these datasets.

OCR. OCR assesses models' capabilities in recognizing scene texts in images and performing various types of reasoning, including math, spatial, recognition, etc. Examples are shown in Tables 10(c), 11(a, c, d), 12(b), 13(a, b), 14(a, b), and 15(a, b). As shown in Table 3's "OCR" column, MM-ReAct-GPT-4 [85] performs the best (65.7%) in OCR capability with the assistance of an external OCR model as a tool. Among end-to-end tuned models, LLaVA-13B (LLaMA-2) [51] achieves the highest performance (22.7%). This superior performance may be attributed to LLaVA's adoption of CLIP [66] ViT-L/14 [24] as its vision model, and the inclusion of a large volume of image-OCR pairings within the training data [55].

Knowledge. As depicted in Tables 10(a), 12(a, b) and 14(b, c), the "knowledge" category covers a wide range of knowledge-related questions, ranging from joke understanding to encyclopedia

Table 3: MM-Vet evaluation results on various LMMs regarding each core VL capability. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%.
2308.02490#15
2308.02490#17
2308.02490
[ "2302.13971" ]
2308.02490#17
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Model Transformers Agent (GPT-4) [38] MiniGPT-4-8B [93] BLIP-2-12B [46] LLaVA-7B [51] MiniGPT-4-14B [93] Otter-9B [44] OpenFlamingo-9B [4, 9, 8] InstructBLIP-14B [23] InstructBLIP-8B [23] LLaVA-13B [51] MM-ReAct-GPT-3.5 [85] LLaVA-7B (LLaMA-2) [51] LLaMA-Adapter v2-7B [32] LLaVA-13B (V1.3, 336px) [51] LLaVA-13B (LLaMA-2) [51] MM-ReAct-GPT-4 [85] Rec OCR Know Gen 3.2 18.2 13.9 27.4 7.0 27.5 18.9 28.0 22.1 29.9 13.8 27.3 13.1 28.7 9.0 30.8 18.2 32.4 26.4 30.9 24.2 20.7 20.1 32.9 33.4 38.5 25.8 38.1 29.3 39.2 35.0 33.1 3.9 15.0 11.1 17.1 16.1 17.8 16.7 16.0 14.6 20.1 31.5 20.1 20.3 22.3 22.7 65.7 2.2 12.8 11.8 16.3 20.4 14.2 16.4 9.8 16.5 23.5 21.5 19.0 31.4 25.2 26.5 29.0 Spat Math 4.0 12.4 7.7 20.3 5.8 16.2 11.5 21.2 3.8 22.2 3.8 24.4 7.7 21.0 10.5 21.1 7.7 18.6 7.7 24.3 26.2 32.3 5.2 25.7 3.8 22.9 11.2 31.3 7.7 29.6 69.2 56.8 Total 13.4±0.5 22.1±0.1 22.4±0.2 23.8±0.6 24.4±0.4 24.7±0.3 24.8±0.2 25.6±0.3 26.2±0.2 26.4±0.1 27.9±0.1 28.1±0.4 31.4±0.1 32.5±0.1 32.9±0.1 44.6±0.2
2308.02490#16
2308.02490#18
2308.02490
[ "2302.13971" ]
2308.02490#18
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
knowledge. LLaVA-Adapter v2-7B is the best model in this capability with a score of 31.4%, as shown in Table 3. It may be beneficial from its large-scale tuning data including GPT-4-LLM [64]. MMReAct-GPT-4 [85] also achieves a remarkable score (29.0%) in this capability, because of its strong LLM backbone [59], coupled with external tools like Bing search for knowledge acquisition. Language generation. â Language generationâ denotes the proficiency to produce fluent and infor- mative text outputs, as illustrated in Table 10(a), 12(b), 13(a), and 15(a). The performance within this category is highly correlated with the efficacy of language modeling. As a result, MMReAct-GPT4 [85] and LLaVA-13B (LlaMA-2) stand out as the top two models. Their success can be attributed to the GPT-4 and LlaMA-2 language models on which these systems are built.
2308.02490#17
2308.02490#19
2308.02490
[ "2302.13971" ]
2308.02490#19
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Spatial awareness. â Spatial awarenessâ involves the understanding of the spatial relationship among visual object regions (e.g., Table 10(c)) and scene text regions (e.g., Table 13(a, b)). MMReAct-GPT4 [85] has a significant lead in this capability (56.8%), because the adopted tools, such as dense captioning and OCR, provide detailed object and scene text location information in the form of coordinates, which can be understood and processed by GPT-4. When it comes to end-to-end tuned models, LLaVA-13B (V1.3, 336px) exhibits the best performance of 31.3%. The tuning data for LLaVA is partly derived from capturing object names and their corresponding coordinates as input. This procedure ensures the generation of data imbued with spatial information, potentially aiding the models in developing and enhancing their spatial awareness capabilities. Math. â Mathâ measures the arithmetic capability on either written equations (e.g., Table 15(b)) or problems in the wild (e.g., Table 11(d)). Notably, MMReAct-GPT4 [85] consistently outperforms other models. This superior performance may be attributed to the adopted PAL math tool (Program- aided Language Models) [31]. # 4.2.2 Regarding each capability integration Recognition, knowledge, and language generation.. As shown in Table 10(a), this capability integration can enable models to explain visual jokes. LLaMA-Adapter-v2-7B [32] is the best model in this capability integration. This may be attributed to its large scale of tuning data as shown in Table 2. LLaVA-13B (LLaMA-2) and LLaVA-13B (V1.3, 336px) [51] are the other two outstanding models. Stronger language models may be the reason. The tuning data of LLaVA shown in Table 2 can also not be ignored. 8 Table 4: MM-Vet evaluation results on various LMMs regarding each capability integration. Examples of each capability integration are shown in supplementary materials Tables 10-15. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%.
2308.02490#18
2308.02490#20
2308.02490
[ "2302.13971" ]
2308.02490#20
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Model Transformers Agent (GPT-4) [38] MiniGPT-4-8B [93] BLIP-2-12B [46] LLaVA-7B [51] MiniGPT-4-14B [93] Otter-9B [44] OpenFlamingo-9B [4, 9, 8] InstructBLIP-14B [23] InstructBLIP-8B [23] LLaVA-13B [51] MM-ReAct-GPT-3.5 [85] LLaVA-7B (LLaMA-2) [51] LLaMA-Adapter v2-7B [32] LLaVA-13B (V1.3, 336px) [51] LLaVA-13B (LLaMA-2) [51] MM-ReAct-GPT-4 [85] OCR Rec Rec Rec Spat OCR Know OCR Know Spat OCR Math Spat Gen Rec Math 0.0 7.4 45.8 0.0 1.3 49.1 0.0 0.0 14.4 14.3 50.0 20.8 14.2 47.9 9.6 0.0 38.9 7.3 41.7 21.2 7.1 65.1 11.5 4.5 28.9 17.1 46.6 13.3 21.4 41.7 24.8 0.0 11.1 50.0 16.7 21.1 47.5 14.6 0.0 11.1 15.6 54.1 29.2 50.0 22.5 0.0 38.9 15.5 48.6 15.4 14.3 58.3 40.5 0.0 11.1 74.3 14.6 14.3 50.0 19.2 8.1 6.5 23.3 69.9 15.4 14.3 33.3 20.8 18.0 0.0 18.0 25.2 41.1 17.3 47.5 23.3 7.1 9.1 33.3 19.1 33.1 28.8 35.7 28.3 60.0 9.1 18.8 57.0 26.9 9.7 34.7 0.0 50.0 26.7 12.2 50.0 38.5 0.0 35.3 54.1 13.5 7.1 27.8 8.2 25.5 59.7 25.0 14.3 66.7 25.8 29.8 59.5 21.2 14.3 58.3 36.2 27.8 0.0 22.5 33.0 69.2 78.6 25.0 83.0 63.6 44.4 7.1 7.1 Rec OCR OCR Know Know Spat Gen 0.0 0.0 8.0 0.7 14.3 25.0 16.7 5.2 0.0 6.2 50.0 6.6 50.0 18.7 38.5 18.3 32.5 23.1 46.5 33.3 3.2 6.0 6.0 4.5 28.6 50.0 10.0 15.2 14.3 70.0 16.7 8.8 0.0 35.2 15.7 25.0 7.8 53.8 14.3 50.0 12.5 50.0 2.5 25.0 100.0 0.0 47.8 10.2 44.8 14.3 50.0 11.3 22.5 38.0 28.6 48.0 53.3 11.2 49.3 14.3 50.0 33.3 56.8 28.6 50.0 33.3 3.5 0.0 68.2 88.0 14.3 50.0 Rec Rec OCR Rec OCR Gen OCR Spat Spat 9.5 25.0 0.0 21.2 42.9 50.0 8.5 45.2 Rec OCR OCR Rec Spat Gen Know Math Spat Spat 13.4±0.5 50.0 49.0 0.0 22.1±0.1 0.0 0.0 0.0 22.4±0.2 0.0 50.0 0.0 23.8±0.6 0.0 19.0 0.0 24.4±0.4 0.0 0.0 0.0 24.7±0.3 0.0 30.0 0.0 24.8±0.2 0.0 0.0 0.0 25.6±0.3 0.0 50.0 15.0 0.0 26.2±0.2 0.0 0.0 26.4±0.1 12.0 0.0 0.0 35.0 80.0 27.9±0.1 0.0 28.1±0.4 0.0 0.0 0.0 31.4±0.1 0.0 0.0 0.0 32.5±0.1 0.0 2.0 50.0 32.9±0.1 0.0 0.0 8.0 44.6±0.2 50.0 80.0 0.0 Total
2308.02490#19
2308.02490#21
2308.02490
[ "2302.13971" ]
2308.02490#21
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Recognition (sole). This category contains samples only requiring recognition, as shown in Table 10(b). InstructBLIP-14B and InstructBLIP-8B [23] achieve the best performance, which may result from the tuning data including recognition datasets, like VQA v2 [35] and GQA [37]. OCR and spatial awareness. For this integration, an example is shown in Table 10(c). MM-ReAct- GPT-4 [85] is the best method for this integration. Notably, compared with MM-ReAct-GPT-3.5, MM-ReAct-GPT-4 has a significant improvement, over 40%, indicating the importance of LLMs to integrate information of OCR and location. OCR, spatial awareness, and math. An example of this integration is shown in Table 11(a), which requires reading the floor plan and conducting arithmetic. Compared with the above integration, this combination involves one more capability of math. The observation is similar to the integration of OCR and spatial awareness. MM-ReAct-GPT-4 [85] still achieves the best performance. Recognition and spatial awareness. Table 11(b) shows an example for this integration. LLaVA- 13B (V1.3, 336px) [51] performs best for this category. Compared with LLaVA-13B (LLaMA-2), LLaVA-13B (V1.3, 336px) obtains an improvement of 8.4%, indicating the significant contribution of larger resolution of images. OCR (sole). This task requires OCR only, as shown in Table 11(c). MM-ReAct-GPT-4 [85] has the best results for sole OCR due to an OCR tool from Azure API. Notable, MM-ReAct-GPT-4 is much better than MM-ReAct-GPT-3.5 with an improvement of 23.0%, demonstrating the importance of language models in OCR. OCR and Math. This integration enables reading text from real-world scenarios and solving math problems, as shown in Table 11(d). MM-ReAct-GPT-4 [85] obtains the best performance in this capability integration, far ahead of other models. We highly recommend using MM-ReAct-GPT-4 to complete tasks related to this capability integration.
2308.02490#20
2308.02490#22
2308.02490
[ "2302.13971" ]
2308.02490#22
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Other capability integrations. 9 other capability integrations are in long-tailed distribution, where MMReAct-GPT-4 achieves the best scores in 5 integrations out of 9. Their examples are shown in Tables 12-15. # 4.3 Result discussion # 4.3.1 Foundation models and tuning data In this subsection, we discuss the modules in LMMs and speculate how each component may affect the LMMsâ capabilities in different aspects, evaluated by MM-Vet. We mainly consider the models based on open-sourced LLMs, i.e., Flan-T5 [21], LLaMA [73], Vicuna [92], and LLaMA-2 [74]. 9 Table 5: MM-Vet (Bard set) evaluation results on various LMMs regarding each core VL capability. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%. Total 30.3±0.1 31.5±0.1 27.6±0.2 48.1±0.2 53.5±0.2 Model LLaVA-13B (LLaMA-2) [51] LLaVA-13B (V1.3, 336px) [51] MM-ReAct-GPT-3.5 [85] MM-ReAct-GPT-4 [85] Bard [34] Rec OCR Know Gen 27.6 37.8 24.6 39.4 16.6 22.3 36.6 34.3 61.0 56.2 Spat Math 8.0 27.2 11.6 30.6 24.0 32.9 72.0 60.6 39.6 52.0 22.9 22.3 31.4 66.3 52.5 22.4 22.7 15.6 25.6 50.9 Table 6: MM-Vet (Bard set) evaluation results on various LMMs regarding each capability integration. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds.
2308.02490#21
2308.02490#23
2308.02490
[ "2302.13971" ]
2308.02490#23
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
All the numbers are presented in % and the full score is 100%. OCR Rec OCR Spat Know Gen Rec Math Math 26.6 55.2 18.8 14.3 57.1 39.5 0.0 8.2 9.1 Model Rec Know 20.0 Vicuna-13B (LLaMA-2) [51] 20.0 Vicuna-13B (V1.3, 336px) [51] 21.9 59.0 22.9 14.3 85.7 25.5 20.0 11.3 38.8 31.2 35.7 28.6 56.4 MM-ReAct-GPT-3.5 [85] 17.0 35.2 70.8 78.6 28.6 81.5 63.6 40.0 MM-ReAct-GPT-4 [85] 0.0 52.3 70.3 45.2 56.4 42.9 70.2 18.2 Bard [34] OCR Spat Rec Spat OCR Rec Rec OCR OCR OCR Know Gen Know Spat Spat Gen 1.3 56.8 28.6 50.0 33.3 15.0 49.3 14.3 50.0 33.3 25.0 100.0 0.0 0.0 68.3 88.0 14.3 50.0 0.0 77.7 81.5 28.6 50.0 66.7 Rec OCR Spat Rec OCR 47.8 OCR Rec Gen Know Spat Spat 8.0 0.0 2.0 50.0 35.0 0.0 50.0 80.0 50.0 80.0 Rec OCR Spat Math â â â â â Total 30.3±0.1 31.5±0.1 27.6±0.2 48.1±0.2 53.5±0.2 Vision.
2308.02490#22
2308.02490#24
2308.02490
[ "2302.13971" ]
2308.02490#24
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
For the Vision component, two models have been employed in the end-to-end LMMs we evaluated, namely CLIP-ViT/L14 [66] (428M) and EVA-ViT-G (1.13B). Determining which is superior is currently not possible due to the absence of a comprehensive ablation study [90]. However, it is noteworthy that, when paired with the same language model, Vicuna-7B, InstructBLIP-8B excels in recognition tasks, while LLaVA-7B works particularly well for OCR. Language. There is a notable trend that stronger language models (LLMs) typically yield better performance, as seen when comparing the 7B and 13B variants of different models; the exception is InstructBLIP, whose 8B version performs better than the 14B one. Tuning data. Increasing the volume of tuning data can enhance performance. An example is InstructBLIP-8B [23], which uses more data from 26 publicly available datasets to tune the model and achieves higher scores than BLIP-2-12B. # 4.3.2 Comparison with Bard Bard [34] is a popular closed-source commercial LMM system. One problem in evaluation is that Bard rejects images containing people and instead outputs "Sorry, I can't help with images of people yet."
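As a minimal illustration of how such refusals can be handled when constructing an evaluation subset (the Bard set described below), the sketch assumes a mapping from sample ids to Bard's raw replies and simply drops samples that trigger the people-image refusal; the field names and refusal-string handling are assumptions, not the authors' actual procedure.

```python
def build_bard_set(samples, bard_responses):
    """Keep only the samples that Bard actually answers.

    `samples`: list of dicts with an "id" field (assumed structure).
    `bard_responses`: dict mapping sample id -> Bard's raw reply string.
    Samples whose reply contains the people-image refusal are dropped,
    mirroring the idea behind the 168-sample Bard set.
    """
    refusal = "Sorry, I can't help with images of people yet."
    return [
        s for s in samples
        if refusal not in bard_responses.get(s["id"], "")
    ]
```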
2308.02490#23
2308.02490#25
2308.02490
[ "2302.13971" ]
2308.02490#25
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
To conduct a fair comparison with other models, we constructed a subset of MM-Vet with 168 samples that Bard could process, henceforth referred to as the Bard set. The results on the Bard set are shown in Tables 5 and 6. Bard achieves the highest scores in three out of six capabilities, seven out of fifteen capability integrations, and holds the highest overall score (53.5%). MM-ReAct-GPT-4 [85] outperforms in the remaining three out of six capabilities and tops the chart in nine out of the fifteen capability integrations. In particular, MM-ReAct performs better in the OCR, spatial awareness, and math capabilities, indicating the potential benefit of having specialized external tools, even when working with state-of-the-art LMMs. Among end-to-end models, there is still a large gap to Bard. For instance, Vicuna-13B (V1.3, 336px) [51] obtains 31.5%, a substantial 22.0% lower than Bard. Future stronger open-sourced LLMs and advancements in multimodal training hold potential to further narrow this gap. # 4.3.3 Comparison with GPT-4V(ision) We evaluate and benchmark the state-of-the-art LMM, GPT-4V(ision) [59-61, 1, 84], on MM-Vet. In our queries to GPT-4V, we prepend the prompt with "Generate a short and concise response to the following image text pair." The quantitative results are shown in Tables 7 and 8, and the qualitative Table 7: MM-Vet evaluation results on LLaVA, MM-ReAct and GPT-4V regarding each core VL capability. For each column, the highest and the second highest figures are highlighted by green and orange backgrounds. All the numbers are presented in % and the full score is 100%.
2308.02490#24
2308.02490#26
2308.02490
[ "2302.13971" ]
2308.02490#26
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Model LLaVA-13B (LLaMA-2) [51] MM-ReAct-GPT-4 [85] GPT-4V [60] Rec OCR Know Gen 29.3 39.2 35.0 33.1 60.7 67.5 22.7 65.7 68.3 26.5 29.0 56.2 Spat Math 7.7 29.6 69.2 56.8 58.6 69.4 Total 32.9±0.1 44.6±0.2 67.7±0.3 Table 8: MM-Vet evaluation results on LLaVA, MM-ReAct and GPT-4V regarding each capability integration. For each column, the highest and the second highest figures are highlighted by green and orange backgrounds. All the numbers are presented in % and the full score is 100%.
2308.02490#25
2308.02490#27
2308.02490
[ "2302.13971" ]
2308.02490#27
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Rec Know Gen Rec OCR Spat Math Model Rec Spat OCR LLaVA-13B (LLaMA-2) [51] 29.8 59.5 21.2 14.3 58.3 36.2 MM-ReAct-GPT-4 [85] GPT-4V [60] Rec Know 27.8 22.5 33.0 69.2 78.6 25.0 83.0 63.6 44.4 55.5 89.2 68.6 73.9 83.3 77.5 44.5 38.9 OCR Spat OCR Math 0.0 Rec OCR OCR Know Know Spat Gen 33.3 3.5 68.2 88.0 14.3 0.0 78.2 76.5 42.9 100.0 66.7 Rec Rec OCR Gen OCR Rec OCR Spat Spat 56.8 28.6 50.0 50.0 Rec OCR OCR Rec Spat Gen Know Math Spat Spat 0.0 0.0 8.0 50.0 80.0 0.0 50.0 89.0 0.0 Total 32.9±0.1 44.6±0.2 67.7±0.3 results are expressed in Figures 3-6. Remarkably, GPT-4V achieves a score of 67.7%, surpassing both open-sourced LMMs [51] and LLM-based multimodal agents [85] by substantial margins. We hope that the detailed per-category performance breakdown sheds light on potential avenues for enhancing model capabilities, thereby bridging the existing performance gap. To illustrate, integrating specialized tools within agent systems proves advantageous for specific functionalities like OCR and math, while other categories, such as recognition and language generation, would require enhancements in the core vision and language modules, respectively. Figures 3-6 offer an exhaustive analysis, highlighting exemplary success and failure instances of GPT-4V's performance. This MM-Vet analysis is intended as a source of inspiration for future research, specifically in the realms of advanced multimodal prompting techniques [84, 81] and model refinements to further improve the LMM performance.
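To make the querying setup of Section 4.3.3 concrete, below is a hedged sketch of sending one image-question pair to a GPT-4V-style endpoint with the instruction prepended as described above. The model name and request details are assumptions based on the public OpenAI Python client (v1+), not the authors' evaluation harness.

```python
import base64
from openai import OpenAI  # assumes the OpenAI Python SDK (v1+) is installed

PREFIX = "Generate a short and concise response to the following image text pair."

def query_gpt4v(image_path: str, question: str,
                model: str = "gpt-4-vision-preview") -> str:
    """Send one image-question pair to a GPT-4V-style chat endpoint.

    The prepended instruction mirrors the prompt described in Section 4.3.3;
    the default model name is an assumption and may need updating.
    """
    client = OpenAI()
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": f"{PREFIX}\n{question}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=256,
    )
    return resp.choices[0].message.content
```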
2308.02490#26
2308.02490#28
2308.02490
[ "2302.13971" ]
2308.02490#28
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
# 4.4 Effectiveness analysis of LLM-based evaluation To verify the effectiveness of LLM-based evaluation for LMM predictions, we select the outputs from MM-ReAct-GPT-4 on 138 objective questions, which can be objectively annotated by humans. We compute the absolute value of the difference between the evaluator's output score and the human-annotated score on each sample. By default, we use GPT-4 (0613) as the evaluator. Here we also replace it with other LLMs, e.g., LLaMA-2 and GPT-3.5. The average difference to the human scoring is reported in Table 9, represented as Δ. The maximum potential discrepancy is 1.0. The baseline evaluation method, keyword matching, results in a high difference of 0.273. This illustrates the unsuitability of keyword matching for MM-Vet when dealing with open-ended answers. It is surprising that the Δ of LLaMA-2-7B [74] is even higher than that of keyword matching, while the Δ of LLaMA-2-13B is only marginally lower. This suggests that assessing open-ended outputs from models is far from straightforward. For OpenAI's models, GPT-3.5 (turbo-0613) obtains a Δ of 0.178, and GPT-4 (0613) achieves the lowest difference of 0.042. In this paper, we utilize GPT-4 (0613) to evaluate the outputs of LMMs. # 4.5 Takeaway notes We summarize the above analyses and discussions as follows: • In the evaluation of integrated capabilities on MM-Vet (Sections 4.2, 4.3.2, 4.3.3), GPT-4V [60] and Bard [34] outperform existing open-sourced methods. The tool-using approach, MM-ReAct-GPT-4 [85], achieves performance comparable to Bard with effective external tools. The pros and cons in different categories motivate future studies on tool-enhanced LMMs. Among end-to-end LMMs, LLaVA-13B (LLaMA-2)/LLaVA-13B (V1.3, 336px) [51] demonstrates the best performance on MM-Vet. Table 9: Averaged absolute differences (Δ
2308.02490#27
2308.02490#29
2308.02490
[ "2302.13971" ]
2308.02490#29
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
) between the evaluation scores of various LLM evaluators and those of human-annotated scores, on MM-ReAct-GPT-4's results. A smaller discrepancy indicates better agreement with the gold standard of human evaluation, i.e., a better evaluator. Model, Δ (lower is better): keyword matching 0.273; LLM-based evaluation with LLaMA-2-7B 0.307, LLaMA-2-13B 0.254, GPT-3.5 (turbo-0613) 0.178, GPT-4 (0613) 0.042. • Analysis of open-source LMMs (Section 4.3.1) leaves room for ambiguity regarding the superior vision encoder for LMMs, based on current model comparisons. However, it is evident that stronger LLMs can boost the performance of LMMs.
2308.02490#28
2308.02490#30
2308.02490
[ "2302.13971" ]
2308.02490#30
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
• For open-ended evaluation (Section 4.4), it is effective to use GPT-4 for evaluating the open-ended outputs of LMMs. The use of less powerful LLMs could result in more significant deviations from the gold standard of human evaluation results. • Current top-performing methods, such as GPT-4V [60] and MM-ReAct-GPT-4 [85], only achieve scores of around 68% and 45%, respectively, on MM-Vet (where the full score is 100%). The gap signifies that further effort is necessary to enhance the performance of LMMs in terms of integrated capabilities, e.g., by developing stronger LMMs or extending LMMs with external tools. # 5 Conclusion In this paper, we introduce the MM-Vet benchmark to evaluate LMMs in terms of their integrated vision-language capabilities. We have assembled a new multimodal dataset that requires the integration of multiple vision-language capabilities. To facilitate open-ended evaluation, we adopt an LLM-based evaluator to grade open-ended outputs from LMMs. We then evaluate various LMMs on MM-Vet, analyzing their results to provide insights into different LMM system paradigms and module selections. We observe that the current best LMM, GPT-4V [59-61, 1, 84], achieves around a 68% score on MM-Vet (full score 100%), indicating the need for further efforts to improve the integrated capabilities of LMMs. # A Examples of capability integrations Table 10: Three samples requiring different capability integrations. (a) Q:
2308.02490#29
2308.02490#31
2308.02490
[ "2302.13971" ]
2308.02490#31
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
What occasions would someone use this meme? GT: This meme, commonly known as "Screaming Panda," is typically used to express shock, surprise, or fear. It could be used in response to a startling or unexpected event, or to convey a sense of panic or alarm. Some possible occasions where someone might use this meme include: - Reacting to a jump scare in a horror movie - Responding to a surprising plot twist in a TV show or book - Expressing shock at a news headline or current event - Conveying fear or anxiety about an upcoming deadline or exam - Showing surprise at an unexpected outcome in a sports game or other competition. Required capabilities: Recognition, knowledge, language generation (b) Q: How many tomatoes are there? GT: 5 Required capabilities: Recognition (c) Q: What is located to the right of the shampoo? GT: conditioner Required capabilities: OCR, spatial awareness
2308.02490#30
2308.02490#32
2308.02490
[ "2302.13971" ]
2308.02490#32
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Table 11: Four samples requiring different capability integrations. (a) Q: Which room is bigger, the double garage or the living room? GT: double garage Required capabilities: OCR, spatial awareness, math (b) Q: On the right desk, what is to the left of the laptop? GT: table lamp <OR> desk lamp Required capabilities: Recognition, spatial awareness (c) Q: What are all the scene text in the image? GT: 5:30PM <AND> 88% <AND> Mario Kart 8 Deluxe <AND> MARIO KART 8 DELUXE <AND> SUPER MARIO ODYSSEY <AND> THE LEGEND OF ZELDA <AND> BREATH OF WILD <AND> Options <AND> Start Required capabilities: OCR (d) Q: How many gallons of supreme gasoline can I get with $50? GT: 13.6 <OR> 13.7 Required capabilities: OCR, math Table 12: Two samples requiring different capability integrations. (a)
2308.02490#31
2308.02490#33
2308.02490
[ "2302.13971" ]
2308.02490#33
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
In which country was this photo taken? GT: Australia Required capabilities: Recognition, knowledge (b) Me: I'll do it at 8 Time: 8.05 Me: looks like I gotta wait till 9 now Q: Can you explain this meme? GT: This meme is a humorous take on procrastination and the tendency to delay tasks until a specific time. The person in the meme plans to do something at 8 o'clock, but when they miss that deadline by a few minutes, they decide to wait until 9 o'clock instead.
2308.02490#32
2308.02490#34
2308.02490
[ "2302.13971" ]
2308.02490#34
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
The image of Kermit the Frog lying in bed represents the person's laziness and lack of motivation to complete the task. Required capabilities: Recognition, OCR, knowledge, language generation Table 13: Two samples requiring different capability integrations. (a) [Line chart: Long-Term International Migration, UK, 1999-2008; series: Immigration, Emigration, Net; values in thousands] Q: The graph below shows the long-term international migration, UK, 1999-2008. Summarize the information by selecting and reporting the main features, and make comparisons where relevant. You should write at least 150 words. GT:
2308.02490#33
2308.02490#35
2308.02490
[ "2302.13971" ]
2308.02490#35
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
The chart gives information about UK immigration, emigration and net migration between 1999 and 2008. Both immigration and emigration rates rose over the period shown, but the figures for immigration were significantly higher. Net migration peaked in 2004 and 2007. In 1999, over 450,000 people came to live in the UK, while the number of people who emigrated stood at just under 300,000. The figure for net migration was around 160,000, and it remained at a similar level until 2003. From 1999 to 2004, the immigration rate rose by nearly 150,000 people, but there was a much smaller rise in emigration. Net migration peaked at almost 250,000 people in 2004. After 2004, the rate of immigration remained high, but the number of people emigrating fluctuated. Emigration fell suddenly in 2007, before peaking at about 420,000 people in 2008. As a result, the net migration figure rose to around 240,000 in 2007, but fell back to around 160,000 in 2008.
2308.02490#34
2308.02490#36
2308.02490
[ "2302.13971" ]
2308.02490#36
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Required capabilities: Recognition, OCR, language generation, spatial awareness (b) Q: Which car is on the parking spot 33? GT: no <OR> empty Required capabilities: Recognition, OCR, spatial awareness Table 14: Three samples requiring different capability integrations. (a) Q: Is this apple organic? GT: yes Required capabilities: Recognition, OCR (b) Q: Which are producers in this food web? GT: Phytoplankton <AND> Seaweed Required capabilities: OCR, knowledge, spatial awareness (c) Q: Does the person bigger than the car? GT: no Required capabilities: Recognition, knowledge, spatial awareness Table 15: Two samples requiring different capability integrations. (a) Underground Railway Systems. Q: The table below gives information about the underground railway systems in six cities. Summarise the information by selecting and reporting the main features, and make comparisons where relevant. You should write at least 150 words. GT:
2308.02490#35
2308.02490#37
2308.02490
[ "2302.13971" ]
2308.02490#37
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
The table shows data about the underground rail networks in six major cities. The table compares the six networks in terms of their age, size and the number of people who use them each year. It is clear that the three oldest underground systems are larger and serve significantly more passengers than the newer systems. The London underground is the oldest system, having opened in 1863. It is also the largest system, with 394 kilometres of route. The second largest system, in Paris, is only about half the size of the London underground, with 199 kilometres of route. However, it serves more people per year. While only third in terms of size, the Tokyo system is easily the most used, with 1927 million passengers per year. Of the three newer networks, the Washington DC underground is the most extensive, with 126 kilometres of route, compared to only 11 kilometres and 28 kilometres for the Kyoto and Los Angeles systems. The Los Angeles network is the newest, having opened in 2001, while the Kyoto network is the smallest and serves only 45 million passengers per year.
2308.02490#36
2308.02490#38
2308.02490
[ "2302.13971" ]
2308.02490#38
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Required capabilities: OCR, language generation, spatial awareness (b) Q: What will the girl on the right write on the board? GT: 14 Required capabilities: Recognition, OCR, spatial awareness, math # B GPT-4V Prediction Examples (a) Q: How many gallons of supreme gasoline can I get with $50? GT: 13.6 <OR> 13.7 Required capabilities: OCR, math GPT-4V: You can get approximately 13.7 gallons of supreme gasoline with $50 at the price of $3.659 per gallon. GPT4_score: 1.0 (b) Q: What is the price difference between diesel and regular gasoline? GT: 0.11 Required capabilities: OCR, math GPT-4V: The price difference between diesel and regular gasoline is $0.11, with diesel being more expensive. GPT4_score: 1.0 (c) Q: Based on the image, what is the total
2308.02490#37
2308.02490#39
2308.02490
[ "2302.13971" ]
2308.02490#39
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
international awareness of Tina Fey? Required capabilities: OCR, spatial awareness GPT-4V: The total international awareness of Tina Fey is 35%. GPT4_score: 1.0 (d) Q: Based on the image, in which country does Tina Fey have the highest level of total awareness? GT: Australia Required capabilities: OCR, spatial awareness GPT-4V: Tina Fey has the highest level of total awareness in Australia at 62%. GPT4_score: 1.0 (e) Q: What is the price for tomatoes? GT: eight <OR> 8.0 Required capabilities: Recognition, spatial awareness GPT-4V: The price for tomatoes is 8.00 as shown on the label in the image. (f) Q: What fruit is to the right of plums? GT: orange Required capabilities: Recognition, spatial awareness GPT-4V: The fruit to the right of the plums are oranges. GPT4_score: 1.0 (g) Q: Which car is on the parking spot 33? GT: no <OR> empty Required capabilities: OCR, recognition, spatial awareness GPT-4V: The blue car is on parking spot 33. GPT4_score: 0.0 (h) Q: Which spot is the red car parking on? GT: 31 Required capabilities: OCR, recognition, spatial awareness GPT-4V: The red car is parking on spot number 87. GPT4_score: 0.0
2308.02490#38
2308.02490#40
2308.02490
[ "2302.13971" ]
2308.02490#40
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
What is the price for tomatoes? GPT-4V: The price for GT: eight <OR> 8.0 tomatoes is 8.00 as shown Required capabilities: Recognition, spatial awareness on the label in the image. (f) Q: What fruit is the the right of plums? â GPT-4\V: The fruit to the right = GT: orange of the plums are oranges. u Required capabilities: Recognition, spatial GPT4_score: 1.0 awareness R (g) Q: Which car is on the parking spot 33? GPT-4V: The blue car is on GT: no <OR> empty parking spot 33. Required capabilities: OCR, recognition, GPT4_score: 0.0 spatial awareness (h) Q: Which spot is the red car parking GPT-4V: The red car is on? parking on spot number 87. GT: 31 GPT4_score: 0.0 Required capabilities: OCR, recognition, spatial awareness (c) Q: Based on the image, what is the total GPT-4V:
2308.02490#39
2308.02490#41
2308.02490
[ "2302.13971" ]
2308.02490#41
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
The total international awareness of Tina Fey? international awareness of Tina Fey is 35%. Required capabilities: OCR, spatial GPT4_score: 1.0 (d) Q: Based on the image, in which GPT-4V: Tina Fey has the | osm 8 ox a» | country does Tina Fey have the highest highest level of total level of total awareness? awareness in Australia at GT: Australia 62%. Required capabilities: OCR, spatial GPT4_score: 1.0 awareness (e) Q:
2308.02490#40
2308.02490#42
2308.02490
[ "2302.13971" ]
2308.02490#42
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
What is the price for tomatoes? GPT-4V: The price for GT: eight <OR> 8.0 tomatoes is 8.00 as shown Required capabilities: Recognition, spatial awareness on the label in the image. (f) Q: What fruit is the the right of plums? â GPT-4\V: The fruit to the right = GT: orange of the plums are oranges. u Required capabilities: Recognition, spatial GPT4_score: 1.0 awareness R (g) Q: Which car is on the parking spot 33? GPT-4V: The blue car is on GT: no <OR> empty parking spot 33. Required capabilities: OCR, recognition, GPT4_score: 0.0 spatial awareness (h) Q: Which spot is the red car parking GPT-4V: The red car is on? parking on spot number 87. GT: 31 GPT4_score: 0.0 Required capabilities: OCR, recognition, Figure 3: GPT-4V prediction examples 19
2308.02490#41
2308.02490#43
2308.02490
[ "2302.13971" ]
2308.02490#43
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Plastic Waste is Out of Control Energy price cap update ICAEW chart of the week (a) Q: Which category did the worst job in plastic recycling, considering the amount of plastic made and thrown out? GT: packaging Required capabilities: OCR, spatial awareness (b) Q: What is the difference in metric tonnes between the amount of plastic produced and the amount discarded? GT: 105 Required capabilities: OCR, spatial awareness (c) Q: Who is the person in the bottom right? 7) GT: Jamie Dornan Required capabilities: Recognition, spatial awareness (d) Q: What is the name of this landmark? GT: baochu pagoda Required capabilities: Recognition (e) Q:
2308.02490#42
2308.02490#44
2308.02490
[ "2302.13971" ]
2308.02490#44
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
How many birds are there? GT: 6 Required capabilities: Recognition (f) Q: What is the estimated average standing charge for the year 2023? GT: 271 Required capabilities: OCR, spatial awareness, math (g) Q: What is the estimated total cost for the year 2023? GT: 23800 Required capabilities: OCR, spatial awareness, math GPT-4V: The packaging category did the worst job in plastic recycling in 2015, with 141M metric tonnes of plastic thrown out of the 146M metric tonnes made. GPT4_score: 1.0 GPT-4V: The difference in metric tonnes between the amount of plastic produced and discarded is 105 million metric tonnes (407M - 302M). GPT4_score: 1.0 GPT-4V:
2308.02490#43
2308.02490#45
2308.02490
[ "2302.13971" ]
2308.02490#45
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
The person in the bottom right is Jamie Dornan. GPT4_score: 1.0 GPT-4V: This is the Baochu Pagoda located in Hangzhou, China. GPT4_score: 1.0 GPT-4V: There are six birds in the image. GPT4_score: 1.0 GPT-4V: The estimated average standing charge for the year 2023 is £280. GPT4_score: 0.0 GPT-4V: The estimated total cost for the year 2023 is £5,890. GPT4_score: 0.0 Plastic Waste is Out of Control Energy price cap update ICAEW chart of the week (a) Q: Which category did the worst job in plastic recycling, considering the amount of plastic made and thrown out? GT: packaging Required capabilities: OCR, spatial awareness (b) Q:
2308.02490#44
2308.02490#46
2308.02490
[ "2302.13971" ]
2308.02490#46
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
What is the difference in metric tonnes between the amount of plastic produced and the amount discarded? GT: 105 Required capabilities: OCR, spatial awareness (c) Q: Who is the person in the bottom right? 7) GT: Jamie Dornan Required capabilities: Recognition, spatial awareness (d) Q: What is the name of this landmark? GT: baochu pagoda Required capabilities: Recognition (e) Q: How many birds are there? GT: 6 Required capabilities: Recognition (f) Q:
2308.02490#45
2308.02490#47
2308.02490
[ "2302.13971" ]
2308.02490#47
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
What is the estimated average standing charge for the year 2023? GT: 271 Required capabilities: OCR, spatial awareness, math (g) Q: What is the estimated total cost for GPT-4V: The packaging category did the worst job in plastic recycling in 2015, with 141M metric tonnes of plastic thrown out of the 146M metric tonnes made. GPT4_score: 1.0 GPT-4V: The difference in metric tonnes between the amount of plastic produced and discarded is 105 million metric tonnes (407M - 302M). GPT4_score: 1.0 GPT-4V:
2308.02490#46
2308.02490#48
2308.02490
[ "2302.13971" ]
2308.02490#48
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
The person in the bottom right is Jamie Dornan. GPT4_score: 1.0 GPT-4V: This is the Baochu Pagoda located in Hangzhou, China. GPT4_score: 1.0 GPT-4V: There are six birds in the image. GPT4_score: 1.0 GPT-4V: The estimated average standing charge for the year 2023 is £280. GPT4_score: 0.0 GPT-4V: The estimated total Figure 4: GPT-4V prediction examples 20 (a) Q: What are all the scene text in the : image? Connecticut lew of 162 GT: Connecticut law of 1642 If any man or Sfany manor woman bo a witch â woman be a witch -- that is, hath or thatis, hathor consultethwitha â consulteth with a familiar spirit -- they shall Semiliar spiritâ they skalbeputte be put to death death | Required capabilities:
2308.02490#47
2308.02490#49
2308.02490
[ "2302.13971" ]
2308.02490#49
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
OCR GPT-4V: The scene text in the image is: "Connecticut law of 1642 If any man or woman be a witch -- that is, hath or consulteth with a familiar spirit -- they shall be put to death" GPT4_score: 1.0 (b) Q: What are all the scene text in the image? GT: MR. WILLIAM SHAKESPEARES COMEDIES, HISTORIES, & TRAGEDIES. publifhed according to the True Originall Copies. Required capabilities: OCR (c) Q: What are all the scene text in the image? GT: 5:30PM 88% Mario Kart 8 Deluxe MARIO KART 8 DELUXE SUPER MARIO ODYSSEY THE LEGEND OF ZELDA BREATH OF WILD Options Start Required capabilities: OCR (d) Q:
2308.02490#48
2308.02490#50
2308.02490
[ "2302.13971" ]
2308.02490#50
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
In which country was this photo taken? GT: australia Required capabilities: Recognition, knowledge (e) Q: Which are producers in this food Ry / \ web? = GT: Phytoplankton Seaweed spatial awareness oe on ao Required capabilities: OCR, knowledge, aotiien (f) Q: Who is the person in the image? GT: Michelle Yeoh Required capabilities: Recognition, knowledge (g) Q: Where is this photo taken? GT: oscar 2023 Required capabilities: Recognition, knowledge GPT-4V: The scene texts in the image are: - "MR. WILLIAM SHAKESPEARES" - â COMEDIES, HISTORIES, & TRAGEDIES." - "Published according to the True Originall Copies." GPT4_score: 0.9 GPT-4V: The scene texts in the image are "Mario Kart 8 Deluxe", "5:30 PM", "88%", "Options", "Start", "MARIOKART 8 DELUXE", "THE LEGEND OF ZELDA", and "FORTNITE". GPT4_score: 0.7 GPT-4V:
2308.02490#49
2308.02490#51
2308.02490
[ "2302.13971" ]
2308.02490#51
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
This photo was taken in Australia, during the Australian Open tennis tournament. GPT4_score: 1.0 GPT-4V: The producers in this food web are phytoplankton and seaweed. GPT4_score: 1.0 GPT-4V: The person in the image is Michelle Yeoh, a renowned actress and producer. GPT4_score: 1.0 GPT-4V: This photo was taken at the Oscars, where the woman is receiving an award. GPT4_score: 0.5 Figure 5: GPT-4V prediction examples 21 (a) Q: What occasions would someone use this meme? GT: This meme, commonly known as "Screaming Panda," is typically used to express shock, surprise, or fear. It could be used in response to a startling or unexpected event, or to convey a sense of panic or alarm. .
2308.02490#50
2308.02490#52
2308.02490
[ "2302.13971" ]
2308.02490#52
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Required capabilities: Recognition, knowledge, language generation Me: I'll do it at 8 Time: 8.05 Me: looks like | gotta wait till 9 now (b) Q: Can you explain this meme? GT: This meme is a humorous take on procrastination and the tendency to delay | tasks until a specific time. The person in | the meme plans to do something at 8 i o'clock, but when they miss that deadline by a few minutes, they decide to wait until 9 o'clock instead. The image of Kermit the Frog lying in bed represents the person's laziness and lack of motivation to complete the task.
2308.02490#51
2308.02490#53
2308.02490
[ "2302.13971" ]
2308.02490#53
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Required capabilities: Recognition, knowledge, language generation, OCR = (c) Q: The table below gives information â â 7: = about the underground railway systems in ase 1900, a9 iat six cities. Summarise the information by I = = selecting and reporting the main features, wee eer = = and make comparisons where relevant. cal EEL = = You should write at least 150 words. GT: The table shows data about the underground rail networks in six major cities. The table compares the six networks in terms of their age, size and the number of people who use them each year. It is clear that the three oldest underground systems are larger and serve significantly more passengers than the newer systems. The London underground is the oldest system, having opened in 1863. It is also the largest system, with 394 kilometres of route. The second largest system, in Paris, is only about half the size of the London underground, with 199 kilometres of route. However, it serves more people per year. While only third in terms of size, the Tokyo system is easily the most used, with 1927 million passengers per year. Of the three newer networks, the Washington DC underground is the most extensive, with 126 kilometres of route, compared to only 11 kilometres and 28 kilometres for the Kyoto and Los Angeles systems. The Los Angeles network is the newest, having opened in 2001, while the Kyoto network is the smallest and serves only 45 million passengers per year.
2308.02490#52
2308.02490#54
2308.02490
[ "2302.13971" ]
2308.02490#54
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Required capabilities: OCR, language generation, spatial awareness GPT-4V: This meme is often used to express shock, surprise, or disbelief in a humorous or exaggerated way. It can be used in response to unexpected news, events, or situations. GPT4_score: 0.6 GPT-4V: This meme humorously illustrates the tendency to procrastinate. The person plans to do something at 8 but delays it by 5 minutes and then decides to postpone the task until 9, exemplifying how easily tasks can be deferred. The image of Kermit the Frog relaxed and unbothered underscores the nonchalant attitude towards procrastination.
2308.02490#53
2308.02490#55
2308.02490
[ "2302.13971" ]
2308.02490#55
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
GPT4_score: 1.0 GPT-4V: The table illustrates data on six underground railway systems worldwide, focusing on their opening dates, route lengths, and annual passenger numbers. Londonâ s underground, inaugurated in 1863, is the oldest and has the most extensive network, boasting 394 kilometres of route. Despite its age and extensive network, it only caters to 775 million passengers annually, significantly less than Paris and Tokyo. Parisâ s metro, opened in 1900, has a 199-kilometre route and serves a staggering 1191 million passengers yearly, the highest among the listed cities. Tokyoâ s system, though younger (opened in 1927) and having a shorter 155-kilometre route, still manages to serve 1927 million passengers, showcasing the cityâ s dense population and reliance on public transport. In contrast, Washington DCâ s metro, opened in 1976 with 126 kilometres of route, only caters to 144 million passengers annually. Kyoto and Los Angeles have the youngest and least extensive networks, opened in 1981 and 2001 with 11 and 28 kilometres of routes, respectively. Correspondingly, they have the lowest annual passenger numbers, with Kyoto at 45 million and Los Angeles at 50 million. In summary, there is a noticeable disparity in the age, route length, and passenger numbers among these cities. While older and more extensive networks like Londonâ s do not necessarily translate to higher passenger numbers, cities like Paris and Tokyo demonstrate a significant demand for underground rail systems, likely attributed to their dense populations and urban structures.
2308.02490#54
2308.02490#56
2308.02490
[ "2302.13971" ]
2308.02490#56
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
GPT4_score: 0.9 = â â 7: = ase 1900, a9 iat I = = wee eer = = cal EEL = = Figure 6: GPT-4V prediction examples 22 # References [1] Chatgpt can now see, hear, and speak. https://openai.com/blog/ chatgpt-can-now-see-hear-and-speak, 2023. [2] Mpt. https://github.com/mosaicml/llm-foundry#mpt, 2023. [3] Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. Nocaps: Novel object captioning at scale. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8948â 8957, 2019. [4] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716â 23736, 2022. [5] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. [6] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In ICCV, 2015. [7] Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, et al. Speecht5:
2308.02490#55
2308.02490#57
2308.02490
[ "2302.13971" ]
2308.02490#57
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Unified-modal encoder-decoder pre-training for spoken language processing. arXiv preprint arXiv:2110.07205, 2021. [8] Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023. [9] Anas Awadalla, Irena Gao, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo, March 2023. [10] Microsoft Azure. Azure cognitive services apis. https://azure.microsoft.com/en-us/products/ ai-services/ai-vision, 2023. Accessed: 2023-06-20. [11] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
2308.02490#56
2308.02490#58
2308.02490
[ "2302.13971" ]
2308.02490#58
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Language models are few-shot learners. Advances in neural information processing systems, 33:1877â 1901, 2020. [12] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020. [13] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
2308.02490#57
2308.02490#59
2308.02490
[ "2302.13971" ]
2308.02490#59
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
[14] Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022. [15] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558â 3568, 2021. [16] Ting Chen, Saurabh Saxena, Lala Li, David J Fleet, and Geoffrey Hinton. Pix2seq: A language modeling framework for object detection. In ICLR, 2022. [17] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015. [18] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. In ECCV, 2020. [19] Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023. 23 [20] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. [21] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2308.02490#58
2308.02490#60
2308.02490
[ "2302.13971" ]
2308.02490#60
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. [22] Marta R Costa-jussà, James Cross, Onur à elebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672, 2022. [23] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023. [24] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. [25] Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. In arXiv preprint arXiv:2303.03378, 2023. [26] Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao.
2308.02490#59
2308.02490#61
2308.02490
[ "2302.13971" ]
2308.02490#61
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Eva: Exploring the limits of masked visual representation learning at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19358â 19369, 2023. [27] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023. [28] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, 2023. [29] Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu, and Jianfeng Gao. Vision-language pre-training: Basics, recent advances, and future trends. arXiv preprint arXiv:2210.09263, 2022. [30] Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou. Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn. arXiv preprint arXiv:2306.08640, 2023. [31] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022. [32] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023.
2308.02490#60
2308.02490#62
2308.02490
[ "2302.13971" ]
2308.02490#62
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
[33] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans, 2023. [34] Google. Bard. https://bard.google.com, 2023. Accessed: 2023-07-17. [35] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904â 6913, 2017. [36] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022. [37] Drew A Hudson and Christopher D Manning.
2308.02490#61
2308.02490#63
2308.02490
[ "2302.13971" ]
2308.02490#63
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019. 24 [38] Huggingface. Transformers agent. https://huggingface.co/docs/transformers/transformers_ agents, 2023. Accessed: 2023-07-20. [39] Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. Ocr-free document understanding transformer. In European Conference on Computer Vision, pages 498â 517. Springer, 2022. [40] Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In ICML, 2021. [41] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017. [42] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart:
2308.02490#62
2308.02490#64
2308.02490
[ "2302.13971" ]
2308.02490#64
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019. [43] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Mimic-it: Multi-modal in-context instruction tuning. arXiv preprint arXiv:2306.05425, 2023. [44] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023. [45] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension, 2023. [46] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023. [47] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training In International Conference on Machine for unified vision-language understanding and generation. Learning, pages 12888â 12900. PMLR, 2022. [48] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV, 2020. [49] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
2308.02490#63
2308.02490#65
2308.02490
[ "2302.13971" ]
2308.02490#65
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
[50] Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023. [51] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023. [52] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023. [53] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023. [54] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023. [55] Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, et al. On the hidden mystery of ocr in large multimodal models. arXiv preprint arXiv:2305.07895, 2023. [56] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, 2019. [57] Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7086â 7096, 2022.
2308.02490#64
2308.02490#66
2308.02490
[ "2302.13971" ]
2308.02490#66
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
[58] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In CVPR, pages 3195â 3204, 2019. 25 [59] OpenAI. Gpt-4 technical report, 2023. [60] OpenAI. Gpt-4v(ision) system card. 2023. [61] OpenAI. Gpt-4v(ision) technical work and authors. 2023. [62] Vicente Ordonez, Girish Kulkarni, and Tamara L Berg. Im2text: Describing images using 1 million captioned photographs. In NeurIPS, 2011. [63] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730â 27744, 2022. [64] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023. [65] Thomas Politzer.
2308.02490#65
2308.02490#67
2308.02490
[ "2302.13971" ]
2308.02490#67
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Vision is our dominant sense. https://www.brainline.org/article/ vision-our-dominant-sense. Accessed: 2023-05-20. [66] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021. [67] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492â 28518. PMLR, 2023. [68] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. [69] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, 2018. [70] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023. [71] Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In ECCV, pages 742â 758, 2020. [72] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read.
2308.02490#66
2308.02490#68
2308.02490
[ "2302.13971" ]
2308.02490#68
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317â 8326, 2019. [73] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [74] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. [75] Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. arXiv preprint arXiv:2106.13884, 2021. [76] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022. [77] Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. arXiv preprint arXiv:2305.11175, 2023. [78] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases.
2308.02490#67
2308.02490#69
2308.02490
[ "2302.13971" ]
2308.02490#69
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2097â 2106, 2017. [79] Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. Simvlm: Simple visual language model pretraining with weak supervision. In ICLR, 2022. 26 [80] Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265, 2023. [81] Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v. arXiv preprint arXiv:2310.11441, 2023. [82] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. Unitab: Unifying text and box outputs for grounded vision-language modeling. In European Conference on Computer Vision, pages 521â 539. Springer, 2022. [83] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. An empirical study of gpt-3 for few-shot knowledge-based vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3081â 3089, 2022. [84] Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421, 2023.
2308.02490#68
2308.02490#70
2308.02490
[ "2302.13971" ]
2308.02490#70
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
[85] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023. [86] Zhengyuan Yang, Yijuan Lu, Jianfeng Wang, Xi Yin, Dinei Florencio, Lijuan Wang, Cha Zhang, Lei Zhang, and Jiebo Luo. Tap: Text-aware pre-training for text-vqa and text-caption. In CVPR, pages 8751â 8761, 2021. [87] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. [88] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In CVPR, pages 6720â 6731, 2019. [89] Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022. [90] Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. What matters in training a gpt4-style language model with multimodal inputs? arXiv preprint arXiv:2307.02469, 2023.
2308.02490#69
2308.02490#71
2308.02490
[ "2302.13971" ]
2308.02490#71
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
[91] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022. [92] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Judging Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. llm-as-a-judge with mt-bench and chatbot arena, 2023. [93] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. [94] Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal c4: An open, billion-scale corpus of images interleaved with text. arXiv preprint arXiv:2304.06939, 2023.
2308.02490#70
2308.02490#72
2308.02490
[ "2302.13971" ]
2308.02490#72
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
2308.02490#71
2308.02490
[ "2302.13971" ]
2308.02151#0
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
# RETROFORMER: RETROSPECTIVE LARGE LANGUAGE AGENTS WITH POLICY GRADIENT OPTIMIZATION # Weiran Yao† Shelby Heinecke† Le Xue† Rithesh Murthy† Zeyuan Chen† Juan Carlos Niebles† Zhiwei Liu† Yihao Feng† Jianguo Zhang† Devansh Arpit† Ran Xu† Phil Mui‡ Huan Wang†,* Caiming Xiong†,* Silvio Savarese†,* # † Salesforce Research ‡ CTO Office, Salesforce * Corresponding Authors: {huan.wang, cxiong, ssavarese}@salesforce.com # ABSTRACT Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective-oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
2308.02151#1
2308.02151
[ "2303.17580" ]
2308.02151#1
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
# INTRODUCTION

Recently, we have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing tasks on their own, ultimately in the service of a goal, rather than simply responding to queries from human users. Prominent studies, including ReAct (Yao et al., 2023), Toolformer (Schick et al., 2023), HuggingGPT (Shen et al., 2023), generative agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023), have successfully showcased the viability of creating autonomous decision-making agents by leveraging the capabilities of LLMs. These approaches use LLMs to generate text-based outputs and actions that can be further employed for making API calls and executing operations within a given environment.

Given the immense scale of LLMs with an extensive parameter count, the behaviors of most existing language agents, however, are not optimized or aligned with environment reward functions. An exception is a very recent language agent architecture, namely Reflexion (Shinn et al., 2023), and several other related works, e.g., Self-Refine (Madaan et al., 2023b) and Generative Agent (Park et al., 2023), which use verbal feedback, namely self-reflection, to help agents learn from prior failures. These reflective agents convert a binary or scalar reward from the environment into verbal feedback in the form of a textual summary, which is then added as additional context to the prompt for the language agent. The self-reflection feedback acts as a semantic signal by providing the agent
2308.02151#0
2308.02151#2
2308.02151
[ "2303.17580" ]
2308.02151#2
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
with a concrete direction to improve upon, helping it learn from prior mistakes and prevent repetitive errors to perform better in the next attempt.

Although the self-reflection operation enables iterative refinement, generating useful reflective feedback from a pre-trained, frozen LLM is challenging, as showcased in Fig. 1, since it requires the LLM to have a good understanding of where the agent made mistakes in a specific environment, i.e., the credit assignment problem (Sutton & Barto, 2018), as well as the ability to generate a summary containing actionable insights for improvement. The verbal reinforcement cannot be optimal if the frozen language model has not been properly fine-tuned to specialize in credit assignment problems for the tasks in given environments. Furthermore, the existing language agents do not reason and plan in ways that are compatible with differentiable, gradient-based learning from rewards by exploiting the existing abundant reinforcement learning techniques. To address these limitations, this paper introduces Retroformer, a principled framework for reinforcing language agents by learning a plug-in retrospective model, which automatically refines the language agent prompts from environment feedback through policy optimization. Specifically, our proposed agent architecture can learn from arbitrary reward information across multiple environments and tasks, iteratively fine-tuning a pre-trained language model that refines the language agent prompts by reflecting on failed attempts and assigning credit to the agent's actions based on future rewards.
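To make the verbal-reinforcement idea above concrete, here is a minimal sketch of a Reflexion-style loop in which a binary or scalar environment reward is converted into a textual self-reflection that is appended to the agent prompt for the next attempt. The names `actor_llm`, `reflection_llm`, and `env` are hypothetical stand-ins for an LLM policy, a reflection model, and a task environment; this is an illustrative sketch under those assumptions, not the paper's released implementation.

```python
# Minimal sketch of a verbal-reinforcement (self-reflection) loop.
# actor_llm(text) -> str, reflection_llm(text) -> str, and env with
# reset()/step(action) are hypothetical interfaces assumed for illustration.

def run_episode(actor_llm, env, prompt: str):
    """Roll out one attempt; return the action/observation trace and a scalar reward."""
    trace, done = [], False
    observation = env.reset()
    while not done:
        action = actor_llm(
            prompt + "\n".join(trace) + f"\nObservation: {observation}\nAction:"
        )
        observation, reward, done = env.step(action)
        trace.append(f"Action: {action}\nObservation: {observation}")
    return trace, reward

def reflect(reflection_llm, task: str, trace, reward) -> str:
    """Turn a scalar reward plus a failed trajectory into verbal feedback."""
    return reflection_llm(
        f"Task: {task}\nTrajectory:\n" + "\n".join(trace) +
        f"\nReward: {reward}\nExplain the root cause of failure and propose a plan:"
    )

def reflexion_loop(actor_llm, reflection_llm, env, task: str, max_trials: int = 4):
    prompt = f"Task: {task}\n"
    for _ in range(max_trials):
        trace, reward = run_episode(actor_llm, env, prompt)
        if reward > 0:  # success: stop retrying
            return trace
        # Failure: append the self-reflection as extra context for the next attempt.
        prompt += "Reflection: " + reflect(reflection_llm, task, trace, reward) + "\n"
    return trace
```

The quality of the whole loop hinges on `reflection_llm`: if it is a frozen model, its feedback can be uninformative, which is exactly the failure mode Fig. 1 illustrates and the motivation for fine-tuning the retrospective model.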
2308.02151#1
2308.02151#3
2308.02151
[ "2303.17580" ]
2308.02151#3
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Figure 1 (panel contents): 1. Task instruction: "Lollipop Chainsaw featured Juliet Starling, who was voiced by a Canadian-American actress who has done voice roles for what Teen Titans spinoff series?" 2. Action sequences in prior trial: Action 1: Search[Juliet Starling]; Action 2: Search[Lollipop Chainsaw]; Action 3: Search[Tara Strong]; Action 4: Finish[Teen Titans and Teen Titans Go!]. 3. Verbal feedback (self-reflection), which is added to the agent prompt. 4. Action sequences in next trial: Action 1: Search[Lollipop Chainsaw]; Action 2: Search[Tara Strong]; Action 3: Finish[Teen Titans, Teen Titans Go!].
2308.02151#2
2308.02151#4
2308.02151
[ "2303.17580" ]
2308.02151#4
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Self-reflection generated in the prior trial (Figure 1): "I should have searched for Lollipop Chainsaw first and looked up the Canadian-American actress who voiced Juliet Starling afterwards. I also should have looked up Tara Strong's filmography and searched for any voice roles she did specifically for Teen Titans or Teen Titans Go!"

Figure 1: An example of uninformative self-reflections from a frozen LLM. The root cause of failure in the prior trial is that the agent should have responded with only the spinoff series "Teen Titans Go", not "Teen Titans", as the answer.
2308.02151#3
2308.02151#5
2308.02151
[ "2303.17580" ]
2308.02151#5
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
The agent forgot its goal during a chain of thought, actions, and lengthy observations. The verbal feedback from the frozen LLM, however, only rephrases the prior action sequences as the proposed new plan, resulting in the same, incorrect actions in the next trial.

We conduct experiments on open-source simulation and real-world environments including HotPotQA (Yang et al., 2018), which consists of search-based question-answering tasks that test the tool-use abilities of a web agent that must call Wikipedia APIs in multiple steps to answer a question. We observe that Retroformer agents are faster learners compared with Reflexion, which does not use gradients for reasoning and planning, and are better decision-makers and reasoners. More concretely, Retroformer agents improve the success rate of search-based question-answering tasks in HotPotQA (Yang et al., 2018) by 18% in 4 attempts, which demonstrates the effectiveness of gradient-based reasoning and planning for tool use under a large state-action space.

To summarize, our contributions are the following:

• The paper introduces Retroformer, which iteratively refines the prompts given to large language agents based on environmental feedback to improve learning speed and task completion. We take a policy gradient approach with the Actor LLM being part of the environment, allowing learning from a wide range of reward signals for diverse tasks and environments (a minimal sketch of such an update follows below).
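As a rough illustration of the policy-gradient component mentioned in this first contribution, the sketch below applies a REINFORCE-style update to a retrospective model, using the change in episode return before and after a reflection was added to the prompt as the reward for that reflection, while the Actor LLM stays frozen. `retro_model` is assumed to be a causal language model mapping token ids to per-position logits; the function names and the exact reward shaping are illustrative assumptions, not the paper's released code.

```python
# Hedged sketch of a REINFORCE update on the tokens of one generated reflection.
import torch
import torch.nn.functional as F

def reinforce_step(retro_model, optimizer, prompt_ids, reflection_ids,
                   return_before, return_after, baseline=0.0):
    """One policy-gradient step on the retrospective model (Actor LLM stays frozen)."""
    # Reward for the reflection: improvement in episode return after the
    # reflection was appended to the agent prompt, minus a baseline.
    advantage = (return_after - return_before) - baseline

    input_ids = torch.cat([prompt_ids, reflection_ids])        # shape (T,)
    logits = retro_model(input_ids.unsqueeze(0)).squeeze(0)    # shape (T, vocab)

    # logits[i] predicts token i+1, so positions start-1 .. T-2 predict the reflection tokens.
    start = prompt_ids.size(0)
    log_probs = F.log_softmax(logits[start - 1:-1], dim=-1)
    token_logp = log_probs.gather(1, reflection_ids.unsqueeze(1)).squeeze(1)

    # Policy gradient: raise the likelihood of reflections that increased the return.
    loss = -(advantage * token_logp.sum())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice such updates would be averaged over batches of (prompt, reflection, return-difference) triples collected across environments and tasks, rather than applied one episode at a time.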
2308.02151#4
2308.02151#6
2308.02151
[ "2303.17580" ]
2308.02151#6
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
• The proposed method focuses on enhancing the retrospective model in the language agent architecture, without accessing the Actor LLM parameters or needing to propagate gradients through it. The agnostic nature of Retroformer makes it a flexible plug-in module for various types of cloud-based LLMs, such as GPT or Bard.

# 2 RELATED WORK

Autonomous Language Agents We summarize in Table 1 the recent language agent literature related to our work from five perspectives and differentiate our method from them. The completion of a complex task typically involves numerous stages. An agent must possess knowledge of these stages and plan accordingly. Chain-of-Thoughts or CoT (Wei et al., 2022) is the pioneering work that prompts the agent to decompose challenging reasoning tasks into smaller, more manageable steps. ReAct (Yao et al., 2023), on the other hand, proposes exploiting this reasoning and acting proficiency within the LLM to encourage interaction with the environment (e.g., using the Wikipedia search API) by mapping observations to the generation of reasoning and action traces or API calls in natural language. This agent architecture has spawned various applications, such as HuggingGPT (Shen et al., 2023), generative agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023).
2308.02151#5
2308.02151#7
2308.02151
[ "2303.17580" ]
2308.02151#7
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Table 1: Related work on large language agents. The table compares CoT (Wei et al., 2022), ReAct (Yao et al., 2023), Self-refine (Madaan et al., 2023b), RAP (Hao et al., 2023), Reflexion (Shinn et al., 2023), and Retroformer (our method) along six dimensions: gradient learning, arbitrary reward, iterative refinement, hidden constraints, decision making, and memory.

However, these approaches fail to learn from valuable feedback, such as environment rewards, to enhance the agent's behaviors, resulting in performance that is solely dependent on the quality of the pre-trained LLM. Self-refine (Madaan et al., 2023a) addresses this limitation by employing a single LLM as a generator, refiner, and provider of feedback, allowing for iterative refinement of outputs. However, it is not specifically tailored for real-world task-based interaction with the environment. On the other hand, RAP (Hao et al., 2023) repurposes the LLM to function as both a world model and a reasoning agent. It incorporates Monte Carlo Tree Search for strategic exploration within the extensive realm of reasoning with environment rewards. This approach enables effective navigation and decision-making in complex domains. Recently, Shinn et al. (2023) presented Reflexion, a framework that equips agents with dynamic memory and self-reflection capabilities, enhancing their reasoning skills. Self-reflection plays a pivotal role, allowing autonomous agents to iteratively refine past actions, make improvements, and prevent repetitive errors.

Transformer Reinforcement Learning Reinforcement learning with a provided reward function or a reward-labeled dataset, commonly referred to as RLHF, has become a standard practice within the LLM fine-tuning pipeline. These endeavors have convincingly demonstrated the efficacy of RL as a means to guide language models towards desired behaviors that align with predefined reward functions encompassing various domains, including machine translation, summarization, and generating favorable reviews.
2308.02151#6
2308.02151#8
2308.02151
[ "2303.17580" ]