# MindAgent: Emergent Gaming Interaction
To study how other LLMs perform on our tasks, we tested the collaboration performance of GPT-3.5, Claude-2 and LLaMA in Table 6. For a fair comparison, all tests employed identical prompt inputs. Findings: We observe that while the other LLMs tend to underperform, models such as Claude-2 still manage to complete the task to a considerable extent.

# 6.2 EMERGING CAPABILITIES

Across our experiments, we observe the following emergent properties under our MINDAGENT framework.

Emergent Collaboration Task Understanding. As shown in Table 7, especially in the few-step ablation entries, GPT-4 exhibits its proficiency even when not provided with a full demonstration for specific tasks. To clarify, a "full few-shot demo" typically refers to a comprehensive demonstration of a task, detailing each step and procedure involved. In contrast, we provide GPT-4 with only a partial demonstration, a glimpse of the task that executes just two steps. Despite this limited input, GPT-4's performance is remarkable, underscoring its impressive emergent zero-shot multi-agent planning capabilities. Beyond simply completing unseen tasks, GPT-4 also demonstrates adaptability by dynamically prioritizing multiple different tasks as they arise, emphasizing its emergent multi-task, on-the-fly planning skills.

Emergent Multi-agent Reasoning Capabilities. Referencing Table 8, GPT-4 has the capability to deploy more agents based on demonstrations involving fewer agents. For instance, GPT-4 can effectively dispatch four agents having only seen demonstrations involving two agents. Moreover, the efficiency of collaboration is higher as the number of agents increases, spotlighting its emergent collaboration prowess.
| Setting | Model | τ_int,(1) | τ_int,(2) | τ_int,(3) | τ_int,(4) | τ_int,(5) | CoS |
|---|---|---|---|---|---|---|---|
| 2 agents | GPT-4 | 10/26 | 10/17 | 11/18 | 11/13 | 11/11 | 0.686 |
| 2 agents | Claude-2 | 3/24 | 3/16 | 3/12 | 3/9 | 4/6 | 0.3125 |
| 2 agents | LLaMA | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 agents | ChatGPT | 0/24 | 0/15 | 0/12 | 0/9 | 0/6 | 0 |
| 3 agents | GPT-4 | 12/25 | 14/20 | 13/14 | 10/10 | 12/12 | 0.822 |
| 3 agents | Claude-2 | 5/26 | 4/16 | 3/12 | 5/11 | 5/7 | 0.372 |
| 3 agents | LLaMA | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 agents | ChatGPT | 0/24 | 0/15 | 0/12 | 0/9 | 0/6 | 0 |
| 4 agents | GPT-4 | 16/27 | 16/19 | 15/17 | 12/13 | 12/12 | 0.848 |
| 4 agents | Claude-2 | 9/25 | 4/15 | 4/12 | 6/11 | 6/7 | 0.473 |
| 4 agents | LLaMA | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 agents | ChatGPT | 0/24 | 0/15 | 0/12 | 0/9 | 0/6 | 0 |

Table 6: Performance of other LLMs on Level 3.

| 2 agents | τ_int,(1) | τ_int,(2) | τ_int,(3) | τ_int,(4) | τ_int,(5) | CoS |
|---|---|---|---|---|---|---|
| GPT-4 | 10/26 | 10/17 | 11/13 | 12/12 | 11/11 | 0.764 |
| GPT-4 w/ few-step | 8/26 | 11/19 | 11/13 | 9/11 | 10/10 | 0.710 |
| GPT-4 w/o inference knowledge | 8/25 | 9/17 | 10/12 | 8/9 | 9/9 | 0.714 |
| GPT-4 w/o feedback | 4/25 | 4/17 | 4/12 | 1/9 | 5/7 | 0.311 |

Table 7: Additional ablation.
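The collaboration score (CoS) reported in Tables 6-8 is consistent with an unweighted average of the per-interval task-completion ratios τ_int,(i). Below is a minimal sketch of that aggregation; the function name and the zero-handling convention are our own assumptions for illustration, not code from the paper.

```python
from fractions import Fraction

def collaboration_score(tau_int):
    """Mean of the per-interval task-completion ratios tau_int,(i).

    Each entry is (completed, assigned) for one interval; an interval with no
    assigned tasks is counted as 0, mirroring the 0/24-style cells in Table 6.
    """
    ratios = [Fraction(done, total) if total else Fraction(0) for done, total in tau_int]
    return float(sum(ratios) / len(ratios))

# 2-agent GPT-4 column of Table 6: reproduces the reported CoS of 0.686.
print(round(collaboration_score([(10, 26), (10, 17), (11, 18), (11, 13), (11, 11)]), 3))
```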
| Level 3 | τ_int,(1) | τ_int,(2) | τ_int,(3) | τ_int,(4) | τ_int,(5) | CoS |
|---|---|---|---|---|---|---|
| 4 agents using 4-agent module | 16/27 | 16/19 | 15/17 | 12/13 | 12/12 | 0.848 |
| 4 agents using 2-agent module | 14/27 | 16/20 | 15/16 | 13/13 | 12/12 | 0.851 |
| 3 agents using 3-agent module | 12/25 | 14/20 | 13/14 | 10/10 | 12/12 | 0.822 |
|  | 11/25 | 11/19 | 12/14 | 12/12 | 11/11 | 0.775 |

Table 8: Using different numbers of agent demos (GPT-4).

# 7 NOVEL GAME ADAPTATION

In line with our ongoing efforts to create collaborative, in-game, multi-agent systems, we ventured beyond CuisineWorld and made strides in integrating our infrastructure into the widely popular sandbox game, Minecraft. In this new adaptation, we designed several unique cooking tasks where two in-game agents, Alex and Steve, are assigned the responsibility of cooking various types of meat, as shown in Figure 7. After cooking, the agents need to deposit the items into a chest. More details can be found in Appendix C. The experiment results are presented in Table 9.
We define the following actions for the multi-agent system in our Minecraft game:

1) goto(agent, location);
2) killMob(agent, mobType);
3) mineBlock(agent, blockType);
4) putFuelFurnace(agent, fuelType), to put an item from the agent's inventory into the furnace's bottom (fuel) slot;
5) putItemFurnace(agent, itemType), to put an item from the agent's inventory into the furnace's top slot;
6) takeOutFurnace(agent), to take the cooked item out of the furnace;
7) putInChest(agent, itemType).

The state space in Minecraft contains the following: 1) nearby blocks for each agent; 2) nearby entities for each agent; 3) each agent's inventory; 4) items inside the furnace; 5) items inside the chest; 6) the human player's inventory if a human player is involved.

To ensure reproducibility, we modify the game mechanics: a killed mob will respawn nearby, and a mined block will also respawn nearby. The empirical data we collected from these game sessions provided compelling evidence that the multi-agent collaboration infrastructure we have developed is robust enough to be extrapolated and adapted across multiple distinct games, paving the way for broader applications in the gaming industry.

Going a step further, we bridged the gap between human players and in-game (NPC) agents by integrating Microsoft's Azure speech-to-text API into the Minecraft environment. This addition allows human players to communicate and collaborate with in-game NPC agents using voice chat: players can express their intents and desired goals to NPCs in real time. This real-time vocal interaction enriches the gameplay experience, fostering a deeper level of immersion and synergy between human players and AI agents. Moreover, this integration opens the door for research into the efficacy of voice-assisted AI learning and how real-world human interactions can shape AI behavior in virtual domains. When the human player chats with the multi-agent system, the prompt contains additional human-instruction and human-dialog-history components. In addition, by integrating Minecraft VR mode with our infrastructure, we can take the player's interactive experience to the next level.
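As a concrete illustration of the action space above, the sketch below shows one way the per-agent actions could be represented and rendered in the functional form used in the text. The class, field names and example plan are our own illustrative choices, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical encoding of the Minecraft action space described above.
ACTIONS = {"goto", "killMob", "mineBlock", "putFuelFurnace",
           "putItemFurnace", "takeOutFurnace", "putInChest"}

@dataclass
class AgentAction:
    agent: str                      # e.g. "Alex" or "Steve"
    name: str                       # one of ACTIONS
    argument: Optional[str] = None  # location, mobType, blockType, fuelType or itemType

    def render(self) -> str:
        """Render the call in the functional form used in the text, e.g. killMob(Alex, cow)."""
        if self.name not in ACTIONS:
            raise ValueError(f"unknown action: {self.name}")
        args = [self.agent] + ([self.argument] if self.argument else [])
        return f"{self.name}({', '.join(args)})"

# One planning step for the two agents: Alex fetches meat while Steve fuels the furnace.
plan = [AgentAction("Alex", "killMob", "cow"), AgentAction("Steve", "putFuelFurnace", "coal")]
print("; ".join(a.render() for a in plan))
```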
|  | τ_int,(1) | τ_int,(2) | τ_int,(3) | τ_int,(4) | τ_int,(5) | CoS |
|---|---|---|---|---|---|---|
| GPT-4 Minecraft performance | 0.195 | 0.381 | 0.704 | 0.792 | 0.833 | 0.581 |

Table 9: Performance of our framework in Minecraft.

Figure 7: The top two images show a multi-agent collaboration example in Minecraft. In the left image, Alex and Steve are killing different animals, and in the right image, Alex and Steve are cooking meat in a furnace together. The middle two images show a human player instructing the agents to perform certain actions. The bottom two images show a human player collaborating with agents in VR.
# 8 CONCLUSION

In this paper, we presented MINDAGENT, an infrastructure for multi-agent collaboration through LLMs across multiple gaming domains. We investigated the multi-agent planning capabilities of MINDAGENT, and we deployed our infrastructure into real-world video games to demonstrate its effectiveness for multi-agent collaboration and human-AI collaboration. Beyond its practical applications, we hope that our endeavor serves as a beacon, guiding the development of future gaming systems where human-AI collaboration is seamless and intuitive. Furthermore, we are optimistic that our insights and findings might catalyze innovations in crafting games that are not only technologically advanced but also significantly more engaging and enjoyable for players.

# ACKNOWLEDGMENTS

We are especially grateful to Johannes Gehrke, Ryen White, Haiyan Zhang, and Kareem Choudhry for their enormous advice, support and encouragement of the work. We appreciate Katja Hofmann, Andrzej Banburski-Fahey, Jianwei Yang, Michel Galley, Nebojsa Jojic, and Bill Dolan for the early insightful discussions, suggestions and comments. The authors gratefully acknowledge Adrian Brown from the X-Box team for his discussion, feedback and pointers to the modeling generation and literature. We thank Rohan Taori, Janardhan Kulkarni, Ziheng Zhou, Yu Wang, Eloi Moliner Juanpere, Xiaofeng Gao, Collin Huang, Xiaodong Yu, and Shuwen Qiu for their help on the human experiment setup.
# APPENDIX

# A PROMPT EXAMPLES

We provide some prompt examples for CuisineWorld. Figure 8 shows an example of the system prompt info. Figure 9 shows an example of a partial demonstration. The system prompt lists the available actions (goto, get, put, activate, noop), the rules for handling system error messages, the recipes, the available objects and base ingredients, and the capacity of each tool.

Figure 8: The MINDAGENT system prompt example.
Figure 9: The MINDAGENT system partial one-shot demo example. The demo shows the goal (e.g., porkMeatcake), the per-step state (agent locations, held items, tool contents), and the per-agent actions such as goto_agent0_storage0 and goto_agent0_blender0.
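The underscore-joined action strings shown in the demo traces (e.g., goto_agent0_storage0, get_agent1_rice_storage0) can be assembled mechanically from a per-agent assignment. The sketch below follows that string format; the dispatcher function itself is illustrative, not the paper's code.

```python
def dispatch(assignments):
    """Format one planning step as underscore-joined action strings.

    `assignments` maps an agent id to an (action, *targets) tuple, e.g.
    {0: ("goto", "storage0"), 1: ("get", "rice", "storage0")}.  The string
    format follows the traces shown in Figure 9 and Appendix D; everything
    else here is an illustrative assumption.
    """
    lines = []
    for agent_id, (action, *targets) in sorted(assignments.items()):
        lines.append("_".join([action, f"agent{agent_id}", *targets]))
    return lines

print(dispatch({0: ("goto", "storage0"), 1: ("get", "rice", "storage0")}))
# ['goto_agent0_storage0', 'get_agent1_rice_storage0']
```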
# B TASK DETAILS IN CUISINEWORLD

Here we visualize the different task graphs in CUISINEWORLD. In CUISINEWORLD, we provide tasks of different complexities to holistically evaluate the multi-agent system's performance. In addition, the environment is highly customizable and extendable: users only need to modify the JSON files to add more tasks or modify existing tasks.

B.1 LEVEL 0

Figure 10: Salmon Meatcake

B.2 LEVEL 1

(a) Salmon Meatcake (b) Lamb Meatcake (c) Lobster Meatcake
B.3 LEVEL 2

(a) Salmon Sashimi (b) Tuna Sashimi (c) Mixed Sashimi

B.4 LEVEL 3

(a) Salmon Sushi (b) Tuna Sushi
B.5 LEVEL 4

(a) Tomato Salad (b) Lettuce Salad (c) Tomato Lettuce Salad (d) Tomato Cucumber Salad

B.6 LEVEL 5

(a) Tomato Pasta (b) Beef Pasta (c) Pork Pasta
B.7 LEVEL 6

(a) pepperoniPizza (b) hawaiianPizza (c) chickenPizza

B.8 LEVEL 7

(a) onionPotatoCarrotSoup (b) onionPotatoLeekSoup (c) onionBroccoliCheeseSoup

B.9 LEVEL 8

(a) Beef Dumpling (b) Pork Dumpling (c) Salmon Dumpling
B.10 LEVEL 9

(a) Cheese Burger (b) MaxJr (c) Hopper

B.11 LEVEL 10

(a) BurritodePastor (b) BurritodePollo (c) BurritodeAsada

B.12 LEVEL 11
(a) BurritodePastor (b) BurritodePollo (c) BurritodeAsada (d) SalmonSushi (e) TunaSushi

B.13 LEVEL 12

(a) Potato Salad (b) French Fries (c) Smashed Potato
# C MINECRAFT

Here we visualize the task graphs for the different tasks in Minecraft.

(a) Cooking chicken in Minecraft (b) Cooking mutton in Minecraft (c) Cooking steak in Minecraft (d) Cooking porkchop in Minecraft

# D HUMAN EVALUATION INTERFACE

We use the human evaluation interface to test humans' perception of collaborative agents. This gives us a more controlled environment, so that users' perception of collaborative agents does not depend on their ability to control the keyboard and mouse, and does not depend on the latency and rate limits of GPT-4.

(a) Welcome screen for human evaluation (b) Human evaluation example (c) Human evaluation example (d) Human instructions
# An Empirical Study of Scaling Instruction-Tuned Large Multimodal Models

Yadong Lu*1, Chunyuan Li*2, Haotian Liu3, Jianwei Yang2, Jianfeng Gao2, Yelong Shen1

1Microsoft Azure AI  2Microsoft Research  3University of Wisconsin-Madison
# Abstract

Visual instruction tuning has recently shown encouraging progress with open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However, most existing studies of open-source LMM are performed using models with 13B parameters or smaller. In this paper we present an empirical study of scaling LLaVA up to 33B and 65B/70B, and share our findings from our explorations in image resolution, data mixing and parameter-efficient training methods such as LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves language capabilities, and the performance of LoRA/QLoRA tuning of LMM is comparable to the performance of full-model fine-tuning. Additionally, the study highlights the importance of higher image resolutions and mixing multimodal-language data to improve LMM performance, and shows that visual instruction tuning can sometimes improve an LMM's pure language capability. We hope this study makes state-of-the-art LMM research at a larger scale more accessible, thus helping establish stronger baselines for future research. Code and checkpoints will be made public.

# 1 Introduction
Recent studies on large multimodal models (LMM) [9, 10] have focused on methods for visual instruction tuning [12]. The results are promising: e.g., the open-source project Large Language and Vision Assistant (LLaVA) shows that training a 7B large language model (LLM) with multimodal instruction-following data for 3 hours on 8 A100 GPUs leads to an LMM with strong visual understanding and reasoning capabilities in the wild, reproducing some of the most appealing examples of the proprietary OpenAI multimodal GPT-4 model [14]. A similar idea is explored in its concurrent work MiniGPT-4 [20]. It has rapidly become a prominent research topic, spurring the development of numerous new models, benchmarks, and applications [10]. However, the high compute cost has led most existing studies to utilize 7B and 13B LLMs. Thus, the impact of significantly scaling up the model size to, e.g., 33B and 65B remains unexplored.

This study aims to fill this gap by empirically investigating language models of larger sizes for LMM, sharing insights from our scaling experiments and establishing stronger baselines using larger-scale LLaVA for future research. Specifically, we explore the impact of larger model sizes, model tuning and data mixing methods on model performance, and present our findings and recommendations. The scaling recipe leads to new state-of-the-art (SoTA) performance on LLaVA-Bench [12] and MM-VET [19]. We hope that our findings and larger LLaVA checkpoints will provide a reference for future research on visual instruction tuning.

*These authors contributed equally to this work. Preprint. Work in progress.

# 2 Experiment Setup

Model Checkpoints. To study the impact of scaling up the LLM on multimodal capabilities, we increase the language model size to 33B and 65B [15], in addition to the 7B and 13B models used for existing LMM.

- LLaVA-33B: We employ the open-source Vicuna-33B checkpoint1 [16] to perform the two-stage training. The training data is around 125K conversations collected from ShareGPT.com.
- LLaVA-65B: Due to the lack of a public 65B Vicuna checkpoint, we conduct our own training of the Vicuna-65B model, utilizing ShareGPT data that we have independently processed. This data contains 159M tokens used during training. As a comparison, the reported number of tokens used in training Vicuna-33B is 370M2.

Once the instruction-tuned LLM is given, we follow [12] to perform the two-stage LLaVA lightning training:

(i) Stage 1: Pre-training for Feature Alignment. The linear projection layer is trained, which maps the visual features (the features before the last layer of the pre-trained image encoder) to the word embedding space of the LLM. More specifically, the projection dimension is 1024→6656 for the 33B model and 1024→8192 for the 65B model, respectively. In this stage, we use the concept-balanced subset of LAION-CC-SBU data with 558K samples.

(ii) Stage 2: Visual Instruction Tuning. We use the LLaVA-80K multimodal instruction dataset for the fine-tuning stage. Various training schedules are explored to enable the model to follow diverse instructions and complete tasks in the wild, as detailed below.
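Stage 1 only trains a linear map from the vision encoder's feature dimension to the LLM embedding size (1024→6656 for 33B, 1024→8192 for 65B). The PyTorch sketch below is a simplified illustration of such a projector under those stated dimensions; it is not the released LLaVA training code.

```python
import torch
import torch.nn as nn

class VisualProjector(nn.Module):
    """Minimal sketch of the Stage-1 trainable piece: a linear map from CLIP ViT
    patch features (dim 1024) to the LLM word-embedding space (6656 for 33B,
    8192 for 65B). Simplified illustration, not the released LLaVA code."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 6656):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim) -> (batch, num_patches, llm_dim)
        return self.proj(patch_features)

projector = VisualProjector(llm_dim=8192)      # 65B configuration
tokens = projector(torch.randn(1, 576, 1024))  # 576 patches for a 336x336 image at patch size 14
print(tokens.shape)                            # torch.Size([1, 576, 8192])
```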
Tuning Methods. We explore both the trainable modules and training-data mixing for efficient and effective visual instruction tuning of large models. In addition to tuning the linear projection layer, two schemes are considered to tune the LLM: (i) full-model fine-tuning of the LLM and (ii) parameter-efficient training methods. For the latter, LoRA [7] and QLoRA [4] are employed to allow us to tune large models with limited compute resources. This aims to gain an in-depth understanding of the trade-off between training cost and model performance.

- Data mixing. Typically only the multimodal instruction data is used in Stage 2. We further consider mixing the language-only instruction data ShareGPT with the LLaVA-80K multimodal instruction data to gain an in-depth understanding of the trade-off between models' language and multimodal capabilities.

Hyper-parameters. In the training process of both stages, we utilize the DeepSpeed library3 and employ the ZeRO3 optimizer, except for QLoRA runs, where we use ZeRO2. We use a maximum sequence length of 2048. For Stage 1, we train both the 33B and 65B models with a learning rate of 1×10⁻⁴ with no weight decay, and a learning-rate schedule with linear decay and linear warmup over 3% of the total training steps. For Stage 2, we use a learning rate of 2×10⁻⁵ for full fine-tuning, training 1 epoch for all models, and a learning rate of 1×10⁻⁴ for the LoRA/QLoRA runs. We conducted a hyperparameter search for the LoRA runs and found that a larger LoRA alpha, or equivalently a larger learning rate, was crucial to obtain the best performance.
Specifically, we use a LoRA alpha equal to 2 times the LoRA rank, and a learning rate of 1×10⁻⁴, which works best for all the models. For full fine-tuning, we use a total batch size of 512 on 4 A100 nodes, where each of these nodes is equipped with 8 A100-80G GPUs. For LoRA/QLoRA runs, we use a total batch size of 64 on 1 A100 node for the 33B model and 2 nodes for the 65B model.
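A hedged sketch of applying the LoRA recipe above (alpha = 2 × rank, learning rate 1×10⁻⁴) with the Hugging Face peft library is shown below. The base checkpoint argument and the target projection matrices are illustrative assumptions, not the paper's exact training configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

def lora_wrap(model_name: str, rank: int = 64, lr: float = 1e-4):
    """Attach LoRA adapters following the recipe above: alpha = 2 * rank, lr = 1e-4.
    The base checkpoint and target modules are illustrative, not the paper's setup."""
    base = AutoModelForCausalLM.from_pretrained(model_name)
    config = LoraConfig(
        r=rank,
        lora_alpha=2 * rank,                   # the alpha-to-rank ratio reported to work best
        target_modules=["q_proj", "v_proj"],   # assumed targets; adjust per architecture
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()         # e.g. roughly 0.49B trainable for 33B at rank 64 (Table 4)
    return model, lr
```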
# 3 Results

We first compare our large checkpoints on two recent benchmarks which are specifically designed for LMM, then report our findings in the course of scaling up LLaVA models.

1 https://huggingface.co/lmsys/vicuna-33b-v1.3
2 https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md
3 https://github.com/microsoft/DeepSpeed

| Model | Reasoning | Conversation | Detail | Overall |
|---|---|---|---|---|
| Bard-0718 | 78.7 | 83.7 | 69.7 | 77.8 |
| Bing-Chat-0629 | 90.1 | 59.6 | 52.2 | 71.5 |
| LLaVA-13B (beam=1) | 81.7 | 64.3 | 55.9 | 70.1 |
| LLaVA-13B (beam=5) | 84.3 | 68.4 | 59.9 | 73.5 |
| LLaVA-33B (beam=1) | 82.9 | 70.2 | 62.6 | 73.9 |
| LLaVA-33B (beam=5) | 83.5 | 72.6 | 61.9 | 74.8 |
| LLaVA-65B (beam=1) | 87.3 | 63.8 | 62.3 | 74.2 |
| LLaVA-65B (beam=5) | 88.7 | 59.4 | 65.7 | 74.4 |

Table 1: The performance comparison on LLaVA-Bench. Beam search sizes of 1 and 5 are reported.
| Model | Rec | OCR | Knowledge | Generation | Spatial | Math | Total |
|---|---|---|---|---|---|---|---|
| *Results of various open-source LMM reported in the MM-VET paper [19]* | | | | | | | |
| LLaMA-Adapter v2-7B [5] | 7.8 | 16.8 | 2.5 | 3.0 | 16.6 | 4.4 | 13.6±0.2 |
| OpenFlamingo-9B [1, 2] | 14.4 | 24.6 | 13.0 | 12.3 | 18.0 | 15.0 | 21.8±0.1 |
| MiniGPT-4-8B [20] | 15.0 | 27.4 | 12.8 | 13.9 | 20.3 | 7.7 | 22.1±0.1 |
| BLIP-2-12B [11] | 11.1 | 27.5 | 11.8 | 7.0 | 16.2 | 5.8 | 22.4±0.2 |
| LLaVA-7B [12] | 17.1 | 28.0 | 16.3 | 18.9 | 21.2 | 11.5 | 23.8±0.6 |
| MiniGPT-4-14B [20] | 16.1 | 29.9 | 20.4 | 22.1 | 22.2 | 3.8 | 24.4±0.4 |
| Otter-9B [8] | 16.4 | 28.4 | 19.4 | 20.7 | 19.3 | 15.0 | 24.6±0.2 |
| InstructBLIP-14B [3] | 16.0 | 30.8 | 9.8 | 9.0 | 21.1 | 10.5 | 25.6±0.3 |
| InstructBLIP-8B [3] | 14.6 | 32.4 | 16.5 | 18.2 | 18.6 | 7.7 | 26.2±0.2 |
| LLaVA-13B [12] | 20.1 | 30.9 | 23.5 | 26.4 | 24.3 | 7.7 | 26.4±0.1 |
| MM-ReAct-GPT-3.5 [18] | 31.5 | 24.2 | 21.5 | 20.7 | 32.3 | 26.2 | 27.9±0.1 |
| LLaVA-7B (LLaMA-2) [12] | 20.1 | 32.9 | 19.0 | 20.1 | 25.7 | 5.2 | 28.1±0.4 |
| LLaVA-13B (V1.3, 336px) [12] | 22.3 | 38.1 | 25.2 | 25.8 | 31.3 | 11.2 | 32.5±0.1 |
| LLaVA-13B (LLaMA-2) [12] | 22.7 | 39.2 | 26.5 | 29.3 | 29.6 | 7.7 | 32.9±0.1 |
| MM-ReAct-GPT-4 [18] | 65.7 | 33.1 | 29.0 | 35.0 | 56.8 | 69.2 | 44.6±0.2 |
| *Results with our own experiment runs* | | | | | | | |
| LLaVA-13B (LLaMA-2) | 38.4 | 21.0 | 26.3 | 28.8 | 28.0 | 7.7 | 32.6±0.1 |
| LLaVA-33B | 38.5 | 25.0 | 26.2 | 28.2 | 29.2 | 7.7 | 32.9±0.3 |
| LLaVA-33B (Data Mixing) | 37.7 | 27.1 | 26.2 | 28.6 | 28.1 | 11.5 | 34.1±0.3 |
| LLaVA-65B | 39.2 | 28.2 | 26.2 | 28.3 | 33.0 | 15.0 | 35.5±0.3 |
| LLaVA-65B (Data Mixing) | 41.8 | 27.9 | 30.4 | 32.3 | 30.5 | 7.3 | 36.4±0.2 |
Table 2: Performance of various open-source LMM on MM-VET. Note that MM-ReAct is not a single multimodal model; it is a system built by chaining visual tools via GPT-3.5 or GPT-4, which we append as a reference. Our experiment run on LLaVA-13B (LLaMA-2) yields a very similar score to the same checkpoint reported in the MM-VET paper, indicating that our evaluation pipelines are consistent.

# 3.1 Comparisons on Benchmarks

LLaVA-Bench. LLaVA-Bench (In-the-Wild)4 [12] is a diverse evaluation dataset consisting of 24 images with 60 questions in total, including indoor and outdoor scenes, memes, paintings, and sketches. Each image is paired with a manually curated, detailed description and a set of properly selected questions related to open-ended visual chat scenarios. Each question belongs to one of three types of tasks: conversations that contain simple visual recognition and QA questions, detailed descriptions that characterize the image with a long paragraph, and a complex reasoning task that focuses on deducing implications from an image. The language model GPT-4 (gpt4-0314) is used to score the generated answers. The relative scores between the model output and the gold response are reported. We compare LLaVA against commercial visual chat systems, including Microsoft BingChat5 and Google Bard6, on LLaVA-Bench [12].

4 https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md
5 https://www.bing.com/chat
6 https://bard.google.com/
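The relative-score aggregation described above can be sketched as follows, assuming the GPT-4 judge has already returned a numeric rating for the candidate answer and the gold reference on each question. The function name and the rating scale are our assumptions, not the exact LLaVA-Bench implementation.

```python
def relative_score(judged_pairs):
    """Aggregate judge ratings into a relative score.

    `judged_pairs` is a list of (model_rating, reference_rating) tuples, one per
    question, as returned by the GPT-4 judge (assumed to be on a 0-10 scale).
    The relative score is the model's total rating as a percentage of the
    reference's total rating.
    """
    model_total = sum(m for m, _ in judged_pairs)
    ref_total = sum(r for _, r in judged_pairs)
    return 100.0 * model_total / ref_total

# Three toy questions: the model is rated slightly below the gold response.
print(round(relative_score([(7, 9), (8, 8), (6, 9)]), 1))  # 80.8
```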
The results are presented in Table 1. The 33B and 65B checkpoints outperform the 13B LLaVA model and Bing Chat. Despite the fact that LLaVA-Bench is small (and thus the comparison might not be statistically significant), the results are encouraging: compared to large LMM, small open-source LMM are far more cost-effective to deploy in real-world applications. With a negligible increase in inference latency, we can significantly improve the performance for all model sizes by increasing the beam search size from 1 to 5.
Our results show that larger LLaVA models generally exhibit better performance on tasks involving complex reasoning and generating detailed descriptions, which require strong language competencies from larger LLMs. In addition, larger LLaVA models obtain comparable results to BingChat in multi-turn, multi-modal conversation tasks that require strong image understanding capability.

MM-VET. MM-VET [19] is designed based on the assumption that the intriguing capability of solving complicated tasks is often achieved by a generalist LMM which is able to integrate a variety of vision-language (VL) capabilities. MM-Vet contains 200 images and 218 questions (samples), aiming to evaluate 6 core VL capabilities (recognition, OCR, knowledge, language generation, spatial awareness, and math) and their combinations. For evaluation, an LLM-based evaluator (gpt4-0613) is used to score open-ended outputs of different forms. In Table 2, we report the results on MM-VET. The performance is consistently improved from 13B to 33B and 65B. The largest LLaVA model improves the SoTA performance among end-to-end open-source LMM. The most significant improvements are observed when evaluating the capabilities of knowledge and generation, followed by recognition and OCR. The performance on spatial and math remains comparable. The results reveal that the improved LLM capability is instrumental in storing more knowledge in the weights and leads to a stronger language responding capability.

# 3.2 Scaling up LLaVA

The experiments are conducted to answer three research questions.

Which scaling factor matters? We study the relative contribution of three scaling-up factors to the performance improvement of LLaVA. The results are summarized in Table 3 (a). Increasing the model size consistently improves the overall performance. We conjecture that a larger data size is essential to train a larger model; for example, if we only train on LLaVA-80K data, we see a smaller gain when the model size becomes larger.
- Image resolution. By fixing the CLIP ViT image encoder, we compare the variants that are pre-trained to take image resolutions 224×224 and 336×336, and find that the higher resolution consistently yields a 2-3 point improvement across all four LLM sizes.

- Data mixing. Larger models tend to have a higher capability of fitting the instruction data. By mixing the language-only instruction data (ShareGPT) with LLaVA-80K, we can improve model performance by 2 points, compared to training on multimodal instruction data only.

In Table 3 (b), we present our results on MM-Bench [13], which contains a set of 2,974 questions that evaluate models' reasoning skills across six categories. The combination of the three factors improves the baseline LLaVA 7B model reported in [13].

When should the parameter-efficient training method be considered? As model size increases, it becomes necessary to consider tuning methods that are more efficient than full-model fine-tuning. LoRA and QLoRA are well-known parameter-efficient tuning methods. As shown in Table 4, we report compute cost using GPU hours per node, because this unit is equivalent to the price of $13.63/hour (ND A100 v4 series) on Azure7. The total cost can be estimated by multiplying the number of hours by the number of epochs. In Table 4, we train both the 33B and 65B models with LoRA ranks 8 and 64 for 1 epoch on the LLaVA-80K instruction-tuning dataset. For models with 33B parameters and above, as we increase the LoRA rank, we notice an increase in both performance and cost until full-model tuning reaches its maximum performance for a specific model size. In the case of the 13B model, we find that a rank of 64 can deliver performance comparable to full-model tuning. The cost is more related to the total number of parameters than to the number of trainable parameters.

7 https://azure.microsoft.com/en-us/pricing/details/machine-learning/
| Image Size | Data Mixing | 7B | 13B | 33B | 65B |
|---|---|---|---|---|---|
| 224×224 | ✗ | 63.6 | 67.1 | 69.3 | 70.3 |
| 336×336 | ✗ | 65.9 | 70.1 | 72.0 | 72.3 |
| 336×336 | ✓ | - | - | 73.9 | 74.2 |

(a) Performance scores on LLaVA-Bench.
| Checkpoint | Image Size | Data Mixing | Overall | LR | AR | RR | FP-S | FP-C | CP |
|---|---|---|---|---|---|---|---|---|---|
| LLaVA-7B | 224×224 | ✗ | 36.2 | 15.9 | 53.6 | 28.6 | 41.8 | 20.0 | 40.4 |
| LLaVA-33B | 336×336 | ✓ | 55.7 | 23.3 | 74.0 | 46.0 | 51.5 | 50.4 | 67.2 |
| LLaVA-65B | 336×336 | ✓ | 56.0 | 24.4 | 72.3 | 49.3 | 50.5 | 51.2 | 68.1 |

(b) Performance scores on MM-Bench. The skills to evaluate include logic reasoning (LR), attribute reasoning (AR), relation reasoning (RR), fine-grained single-instance perception (FP-S), fine-grained cross-instance perception (FP-C), and coarse perception (CP).
Table 3: The performance of scaling up model size, image resolution and data mixing.

| Model / LoRA Rank | 7B Full | 13B 64 | 13B Full | 33B 8 | 33B 64-QLoRA | 33B 64 | 33B Full | 65B 64 | 65B Full |
|---|---|---|---|---|---|---|---|---|---|
| Performance ↑ | 65.9 | 70.1 | 70.1 | 70.3 | 71.6 | 71.8 | 72.0 | 72.2 | 72.3 |
| Time (GPU hours per node) ↓ | 1.3 | 2.1 | 2.3 | 4.62 | 4.68 | 4.79 | 5.80 | 9.17 | 13.50 |
| # Trainable Parameters (B) ↓ | 7 | 0.26 | 13 | 0.06 | 0.49 | 0.49 | 33 | 0.81 | 65 |

Table 4: The trade-off between performance and compute cost among different model sizes and training methods on LLaVA-80K data. "Full" indicates full-model fine-tuning. "Time" is reported as the total GPU hours to finish 1 epoch of training (running time × #GPUs) divided by 8 (#GPUs per node). All models are trained on LLaVA-80K data; results are obtained by averaging 3 repeated evaluation runs with the same setup on LLaVA-Bench.
The cost increase due to raising the LoRA rank for a given model size is significantly smaller than the cost increase from enlarging the model size. For example, increasing the LoRA rank from 8 to 64 nearly matches the performance of LoRA fine-tuning a 65B model with the same rank, but only requires 50% of the 65B model's training cost. In practice, we find that tuning the 33B model provides a good trade-off between cost and performance. Different LoRA variations have similar performance, and QLoRA requires lower GPU memory and running-time cost than LoRA. When large models (e.g., 65B) are trained with DeepSpeed ZeRO2 mode, they can fit into GPU memory with QLoRA, while yielding OOM issues with LoRA. In our experiments, we find that the hyperparameters of LoRA have a large impact on performance: (i) a large learning rate and alpha value of LoRA improves the results significantly. For example, with the same rank of 64, if we reduce the learning rate to 2×10⁻⁵ and alpha to 16, the performance decreases from 71.8 to 65.5 on LLaVA-Bench. (ii) Under the same setting, larger ranks lead to little improvement; e.g., if we increase the rank from 64 to 128 and 512, the score improves from 65.5 to 66.1 and 68.1, respectively.

An LMM with strong capabilities in both language and multimodal tasks? We expand our evaluation in two aspects: (i) MM-VET is added to measure the integrated multimodal capabilities of LMM; (ii) the pure language ability of LMM is measured using Vicuna-80 [16] and MMLU [6], where the former evaluates the instruction-following ability on real-world language tasks and the latter evaluates multilingual, multi-task language ability. The results are shown in Table 5, where all models are full-model fine-tuned.

Compared to Vicuna, which initializes the LLM weights of LLaVA, it is surprising to observe that LLaVA, after being trained solely on multimodal instruction data, exhibits comparable language capability. Mixing language instruction data can boost LLaVA's multimodal ability, but not the language ability. This is partially attributed to the inclusion of complex reasoning questions and long-form answers in LLaVA-Instruct-158K, which helps maintain the language capabilities of LLaVA.
| Model | Data Mix | LLaVA-Bench | MM-VET | Vicuna-80 | MMLU |
|---|---|---|---|---|---|
| Vicuna-13B | - | - | - | 79.9 | 55.8 |
| LLaVA-13B | ✗ | 70.1 | 32.5 | 79.6 | 55.0 |
| Vicuna-33B | - | - | - | 85.6 | 59.0 |
| LLaVA-33B | ✗ | 72.0 | 32.9 | 85.3 | 56.1 |
| LLaVA-33B | ✓ | 73.9 | 34.1 | 80.3 | 58.6 |
| Vicuna-65B | - | - | - | 83.2 | 62.5 |
| LLaVA-65B | ✗ | 72.3 | 35.5 | 84.5 | 62.6 |
| LLaVA-65B | ✓ | 74.2 | 36.4 | 82.6 | 62.2 |
| LLaMA-2-70B-Chat | - | - | - | 84.7 | 63.1 |
| LLaVA-70B | ✓ | 69.8 | 35.4 | 81.3 | 65.1 |

Table 5: Performance on both multimodal (LLaVA-Bench, MM-VET) and language (Vicuna-80, MMLU) capabilities. We also train LLaVA-70B based on the LLaMA-2-70B-Chat checkpoint [15], and find mixed results on multimodal and language abilities. Interestingly, we improve LLaMA-2-70B-Chat by 2.4 points on MMLU, yielding an overall MMLU score of 65.1, which is the best performance for the 70B model size, according to [17] and the Chatbot Arena Leaderboard8.

8 https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
To the best of our knowledge, this is the first reported result showing that visual instruction tuning improves the language ability of a large-scale LMM.

# 4 Conclusions and Limitations

We present an empirical study of scaling the language model size for LMM. Our main findings are: (i) Scaling LMM consistently enhances model performance, resulting in significant improvements in language capabilities, primarily due to the increased LLM model size. We leave it to future work how to scale the vision encoder to enhance the visual capabilities and improve model performance on vision recognition and understanding tasks. (ii) Parameter-efficient methods such as LoRA/QLoRA are viable solutions to fine-tune large-scale LLMs for a good performance-cost trade-off in some real-world settings with limited GPU memory. We observe that LoRA/QLoRA's performance is comparable to that of fine-tuning the full model, establishing their effectiveness through significant cost reduction in both model training and serving. (iii) Our study of training data curation reveals that properly selecting image resolutions and mixing multimodal-language data for model training can significantly improve the performance of the resulting LMM. We also show for the first time that visual instruction tuning can improve an LMM's language capability. Note that the training datasets used in this study are small, so our findings are still preliminary. In future work, we will experiment with much larger datasets to investigate in detail whether and how different methods of training data selection and mixing can improve the quality of much larger LMM.

# References
[1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716-23736, 2022.

[2] Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. OpenFlamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023.
2309.09958#18
2309.09958#20
2309.09958
[ "2307.06281" ]
2309.09958#20
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. 3 [6] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. 5 [7] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.
2309.09958#19
2309.09958#21
2309.09958
[ "2307.06281" ]
2309.09958#21
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 2 [8] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023. 3 [9] Chunyuan Li. Large multimodal models: Notes on CVPR 2023 tutorial. arXiv preprint arXiv:2306.14895, 2023. 1 [10] Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, and Jianfeng Gao. Multimodal foundation models: From specialists to general-purpose assistants. arXiv preprint, 2023. 1 [11] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language- image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023. 3 [12] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. 1, 2, 3 [13] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023. 4 [14] OpenAI. Gpt-4 technical report, 2023. 1 [15] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.
2309.09958#20
2309.09958#22
2309.09958
[ "2307.06281" ]
2309.09958#22
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
Llama 2: Open foundation and ï¬ ne-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 2, 6 [16] Vicuna. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. https://vicuna.lmsys.org/, 2023. 2, 5 [17] Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023. 6 [18] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action, 2023. 3 [19] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang.
2309.09958#21
2309.09958#23
2309.09958
[ "2307.06281" ]
2309.09958#23
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
Mm-vet: Evaluating large multimodal models for integrated capabil- ities. arXiv preprint arXiv:2308.02490, 2023. 1, 3, 4 [20] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: En- hancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. 1, 3
2309.09958#22
2309.09958#24
2309.09958
[ "2307.06281" ]
2309.09958#24
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
This figure "lora_loss.png" is available in "png" format from: http://arxiv.org/ps/2309.09958v1
2309.09958#23
2309.09958
[ "2307.06281" ]
2309.09150#0
Can Large Language Models Understand Real-World Complex Instructions?
arXiv:2309.09150v2 [cs.CL] 8 Jan 2024 # Can Large Language Models Understand Real-World Complex Instructions? Qianyu He1, Jie Zeng1, Wenhao Huang1, Lina Chen2, Jin Xiao2, Qianxi He1, Xunzhe Zhou1, Lida Chen1, Xintao Wang1, Yuncheng Huang1, Haoning Ye1, Zihan Li1, Shisong Chen4, Yikai Zhang1, Zhouhong Gu1, Jiaqing Liang2*, Yanghua Xiao1,3* 1Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University 2School of Data Science, Fudan University 3Fudan-Aishu Cognitive Intelligence Joint Research Center, Shanghai, China 4Shanghai Institute of AI for Education and School of Computer Science and Technology, East China Normal University {qyhe21, jzeng23, whhuang21, lnchen23, jinxiao23, qxhe23, chenld23, xtwang21, yunchenghuang22, zihanli21, ykzhang22, zhgu22}@m.fudan.edu.cn, [email protected], {hnye19, xzzhou20, liangjiaqing, shawyh}@fudan.edu.cn
2309.09150#1
2309.09150
[ "2204.02311" ]
2309.09150#1
Can Large Language Models Understand Real-World Complex Instructions?
# Abstract Large language models (LLMs) can understand human instructions, showing their potential for pragmatic applications beyond traditional NLP tasks. However, they still struggle with complex instructions, which can be either complex task descriptions that require multiple tasks and constraints, or complex input that contains long context, noise, heterogeneous information and multi-turn format. Due to these features, LLMs often ignore semantic constraints from task descriptions, generate incorrect formats, violate length or sample count constraints, and are unfaithful to the input text. Existing benchmarks are insufficient to assess LLMs' ability to understand complex instructions, as they are close-ended and simple. To bridge this gap, we propose CELLO, a benchmark for evaluating LLMs' ability to follow complex instructions systematically. We design eight features for complex instructions and construct a comprehensive evaluation dataset from real-world scenarios. We also establish four criteria and develop corresponding metrics, as current ones are inadequate, biased, or too strict and coarse-grained. We compare the performance of representative Chinese-oriented and English-oriented models in following complex instructions through extensive experiments.
2309.09150#0
2309.09150#2
2309.09150
[ "2204.02311" ]
2309.09150#2
Can Large Language Models Understand Real-World Complex Instructions?
Resources of CELLO are publicly available at https://github.com/Abbey4799/CELLO. # Introduction The emergence of large-scale models (Brown et al. 2020; Chowdhery et al. 2022; Touvron et al. 2023) has yielded noteworthy transformations in real-world applications (Richards 2023; Liu et al. 2023b). These models are able to understand a wide range of human instructions, spanning from casual conversations (Taori et al. 2023) to complex problem solving (Brown et al. 2020). Since human instructions are massive and diverse, traditional academic benchmarks that focus on specific tasks are no longer sufficient to evaluate LLMs (Zhong et al. 2023; Chia et al. 2023). Real-world applications often involve a diverse range of complex instructions that significantly differ from the simple and common instructions in current benchmarks (Hendrycks
2309.09150#1
2309.09150#3
2309.09150
[ "2204.02311" ]
2309.09150#3
Can Large Language Models Understand Real-World Complex Instructions?
Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. [Figure 1 body: example instructions from existing benchmarks (a multiple-choice question about a field extension; "Repeat the word cat four times. After the second time, also say the word meow.") contrasted with a real-world instruction whose task description and multi-round dialogue history impose format and content constraints; the caption appears below.]
2309.09150#2
2309.09150#4
2309.09150
[ "2204.02311" ]
2309.09150#4
Can Large Language Models Understand Real-World Complex Instructions?
[Figure 1 body, continued: the real-world example asks the model to list different brands of coffee and output a table with brand, characteristics, and flavors, then to add an "Origin" column; the figure contrasts a response that ignores the task description, one in the wrong format, and one that follows the instruction correctly.]
2309.09150#3
2309.09150#5
2309.09150
[ "2204.02311" ]
2309.09150#5
Can Large Language Models Understand Real-World Complex Instructions?
Figure 1: Existing benchmarks generally contain simple and common instructions. However, the complex instructions in real-world scenarios are a composition of multiple features, such as constraints on the output format, number of output samples, key elements of the output, and heterogeneity of input texts in the given example. The understanding of complex instructions poses challenges to current models. et al. 2020; Huang et al. 2023), as shown in Fig. 1. Instruction generally consists of two parts (Honovich et al. 2022): Task description (mandatory) describes the task goal, and Input text (optional) provides reference texts for the model to answer questions or the history of multi-turn conversations, as shown in Fig. 1. Hence, there can be two categories of complex instructions: complex task descriptions and complex input. Regarding complex task descriptions, models need to undertake multiple tasks (i.e. multi-tasking) and there can be diverse restrictions describing the task, including semantics constraints (e.g. the inclusion of key elements (Zhou et al. 2023a) or the use of predefined callable functions (Liu et al. 2023b)), format constraints (e.g. the predefined format in few-shot scenarios (Yao et al. 2023b) or
2309.09150#4
2309.09150#6
2309.09150
[ "2204.02311" ]
2309.09150#6
Can Large Language Models Understand Real-World Complex Instructions?
[Figure 2 body, part 1: the eight features of complex instructions with example snippets — multi-tasking, semantics constraints (e.g. predefined callable functions such as get_entity_info(entity_aliases)), format constraints, and quantity constraints on the task description side; heterogeneous (e.g. SQL), noisy, and multi-turn input text on the input side — followed by a dataset-construction example that asks the model to extract earthquake information (time, location, magnitude, epicenter depth and position) from a news article.]
2309.09150#5
2309.09150#7
2309.09150
[ "2204.02311" ]
2309.09150#7
Can Large Language Models Understand Real-World Complex Instructions?
[Figure 2 body, part 2: the earthquake example is annotated with evaluation criteria — an answer-format criterion (output in JSON), task-prescribed keyword limits such as "time", "location", "magnitude", input-dependent keywords such as "06:53" and "November 14, 2008", and a count limit — and the coffee-brand example is annotated with a task-prescribed keyword ("Origin") and input-dependent keywords ("Starbucks", "Brand").]
2309.09150#6
2309.09150#8
2309.09150
[ "2204.02311" ]
2309.09150#8
Can Large Language Models Understand Real-World Complex Instructions?
Figure 2: The framework of our benchmark design. We first establish a framework containing eight features for complex instructions, then construct an evaluation dataset covering nine tasks, and finally propose four evaluation criteria along with their corresponding metrics. structured format imitating human reasoning processes (Liu et al. 2023b)), and quantity constraints (e.g. word, sentence, or sample count regulating the length of model output (Zhou et al. 2023b; Yao et al. 2023a)). Regarding complex input, the input text generally has a long context (An et al. 2023; Liu et al. 2023a), noise (e.g. colloquial expressions (Guo et al. 2023) and error accumulation caused by pipeline methods (Sun et al. 2023b)), heterogeneous information (e.g. a combination of structured and unstructured data (Zha et al. 2023)), and a multi-turn format (Ding et al. 2023). The complexity of real-world instructions accounts for prevalent errors observed in LLMs. As shown in Fig. 1, LLMs may (1) ignore semantic constraints from the task description(s) (Zhou et al. 2023a), (2) generate answers in an incorrect format (Qin et al. 2023), or (3) violate length or sample count constraints (Zhou et al. 2023b), especially when multiple tasks are required to be performed. Moreover, models can (4) be unfaithful to the input text, especially when it is long, noisy, heterogeneous, or in multi-turn form (Li et al. 2023b; An et al. 2023). Overall, complex instructions pose challenges to current models. In this paper, we propose CELLO, a benchmark for evaluating the ComplEx instruction understanding ability of Large Language MOdels systematically. The framework of our benchmark is shown in Fig. 2. As existing benchmarks only cover isolated features of complex instructions, we establish a comprehensive framework comprising eight features of complex instructions. Accordingly, we propose a novel evaluation system comprised of four criteria along with their corresponding metrics.
2309.09150#7
2309.09150#9
2309.09150
[ "2204.02311" ]
2309.09150#9
Can Large Language Models Understand Real-World Complex Instructions?
The current evaluation criteria are insufficient to comprehensively reflect the ability of LLMs to understand complex instructions for the following reasons. First, complex instructions in real-world scenarios are open-ended (Xu et al. 2023b), thus the criteria commonly used for close-ended benchmarks are not suitable in such cases (Hendrycks et al. 2020). Moreover, many studies adopt GPT4 evaluation for automated open-ended assessment, which introduces bias problems (Wang et al. 2023b). Furthermore, the binary pass rate adopted by the benchmarks containing complex instructions is strict and coarse-grained, resulting in universally low scores for smaller LLMs without discrimination (Liu et al. 2023b; Qin et al. 2023). However, existing benchmarks are insufficient for effectively assessing the ability of LLMs to understand complex instructions. On one hand, Fig. 1 shows that existing benchmarks are either close-ended (Huang et al. 2023; Zhong et al. 2023; Yu et al. 2023) or contain common and simple instructions (Srivastava et al. 2023; Chia et al. 2023; Dubois et al. 2023), which fail to mirror the complexity of real-world instructions. On the other hand, even though certain benchmarks cover some of the above features of complex instructions, such as count restriction (Zhou et al. 2023b; Yao et al. 2023a), semantic restriction (Chen et al. 2022), and long text understanding (An et al. 2023), they only encompass isolated features, while real-world instructions comprehensively cover these features (Zhou et al. 2023a). Overall, none of the existing benchmarks systematically study the complex instruction understanding ability of LLMs.
2309.09150#8
2309.09150#10
2309.09150
[ "2204.02311" ]
2309.09150#10
Can Large Language Models Understand Real-World Complex Instructions?
Overall, our contributions are mainly four-fold: • To the best of our knowledge, we are the first to systematically investigate the ability of LLMs to follow complex instructions. We propose a comprehensive set of features for complex instructions, facilitating both dataset construction and evaluation criteria design. • We construct a complex instruction dataset from real-world scenarios, containing 523 samples encompassing nine tasks, effectively covering our specified features. Specifically, we propose a two-stage framework for constructing the evaluation dataset for LLMs' complex instruction understanding.
2309.09150#9
2309.09150#11
2309.09150
[ "2204.02311" ]
2309.09150#11
Can Large Language Models Understand Real-World Complex Instructions?
• We design four evaluation criteria and corresponding automatic metrics for assessing LLMs' ability to understand complex instructions in a comprehensive and discriminative way. • We compare 19 representative Chinese-oriented models and 15 representative English-oriented models' performance on our benchmark. # Related Work Evaluation for LLMs Many benchmarks propose comprehensive evaluation frameworks that integrate existing evaluation datasets (Liang et al. 2022; Zhong et al. 2023; Dubois et al. 2023; Chia et al. 2023). Mainstream benchmarks primarily focus on assessing knowledge (Huang et al. 2023; Gu et al. 2023; Yu et al. 2023), programming (Chen et al. 2021), and complex reasoning (Cobbe et al. 2021; Srivastava et al. 2023). Recently, many benchmarks focus on specific capabilities of models, such as tool utilization (Qin et al. 2023), acting as agents (Liu et al. 2023b), and handling long texts (An et al. 2023). However, none of the existing benchmarks systematically investigate the ability of LLMs to follow complex instructions. Their evaluation criteria have several limitations when evaluating complex instruction understanding. First, the close-ended benchmarks fail to mirror the complexity of real-world instructions (Huang et al. 2023; Gu et al. 2023; Zhong et al. 2023). Also, the binary success rate (Chen et al. 2021; Qin et al. 2023; Liu et al. 2023b) is too strict and coarse-grained, resulting in weak discrimination. Moreover, GPT-4 automatic scoring introduces bias problems (Wang et al. 2023b). Overall, the existing benchmarks and their criteria are insufficient to effectively assess LLMs'
2309.09150#10
2309.09150#12
2309.09150
[ "2204.02311" ]
2309.09150#12
Can Large Language Models Understand Real-World Complex Instructions?
ability to understand complex instructions. Complex Instruction Following The current datasets generally have simple and common instructions, making it challenging for LLMs to follow complex instructions in real-world scenarios (Zhou et al. 2023a; Xu et al. 2023b). Various methods have been proposed to improve models' understanding of complex instructions. Xu et al. (2023b); Luo et al. (2023) propose six strategies to generate complex instructions based on a small set of handwritten seed data. Zhou et al. (2023a) utilizes crowdsourcing to collect a limited number of high-quality and complex user query-response pairs. Mukherjee et al. (2023) induce GPT4 to generate reasoning steps for simple instructions, thereby complexifying the training data. Despite the advancements, there is a lack of a benchmark for systematically evaluating models' understanding of complex instructions. Evaluation for Constrained Instructions Many studies investigate the ability of LLMs to understand constrained instructions. Yao et al. (2023a) proposes a grammar-based framework for generating instructions with lexical constraints related to word count and position. Zhou et al. (2023b) adopts five types of constraints to automatically construct large-scale constrained instructions. Chen et al. (2022) limits the topics of generated text while also including constraints on the content to be avoided. However, the instructions of these benchmarks are simplistic, and the constraints they involve are narrow. CELLO Benchmark As shown in Fig. 2, we first establish a framework containing eight features for complex instructions, then construct an evaluation dataset, and finally propose four evaluation criteria along with their corresponding metrics. # Dataset Construction We first collect data from real scenarios, covering 9 tasks. Then we diversify the collected complex instructions through In-breadth Evolution and complicate the collected simple instructions through In-depth Evolution. Data Source and Selected Tasks When constructing the dataset, we take into account its coverage and representativeness. Regarding coverage, we include common NLP tasks found in existing benchmarks (Liang et al. 2022), while incorporating instructions with more complex task descriptions or input beyond those benchmarks.
2309.09150#11
2309.09150#13
2309.09150
[ "2204.02311" ]
2309.09150#13
Can Large Language Models Understand Real-World Complex Instructions?
Moreover, we introduce specific tasks involving complex instructions, which align with common real-world applications for LLMs. Regarding representativeness, instructions are gathered from 90,000 user interaction logs over six months with our implemented chatbot. Finally, we include nine tasks, classified into six categories: Complex NLP Tasks. Instructions concerning NLP tasks in real-world scenarios are more diverse and detailed (Xu et al. 2023b) and contain noisy and long contexts (An et al. 2023) compared to academic datasets. Overall, we choose four tasks commonly found in existing benchmarks (Liang et al. 2022), enhancing them with more complex instructions and inputs beyond traditional benchmarks: long text summarization, long text closed-domain question answering, long text keywords extraction, and complex information extraction.
2309.09150#12
2309.09150#14
2309.09150
[ "2204.02311" ]
2309.09150#14
Can Large Language Models Understand Real-World Complex Instructions?
The details can be found in the Appendix. Meta-prompt. Researchers design elaborate prompts to leverage LLMs to construct datasets (Xu et al. 2023b; Honovich et al. 2022; Qin et al. 2023), which can be defined as Meta-prompts (Honovich et al. 2022). These prompts generally have varied instructions, rich input topics, few-shot samples, and clear format requirements, and are unlikely to appear in the training samples. Therefore, we collect prompts crafted by domain experts who focus on various real-world applications of LLMs, such as financial numerical reasoning and educational knowledge graph taxonomy construction, due to their high quality and origin in real-world scenarios.
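To make the notion of a meta-prompt concrete, here is a minimal, hypothetical template with the properties listed above (an explicit task description, a strict output format, and one few-shot sample). The wording and the report_text placeholder are ours for illustration; this is not one of the expert-written prompts collected for CELLO.

```python
# A toy meta-prompt with a clear format requirement and one few-shot sample.
# The wording is illustrative only, not an actual CELLO prompt.
META_PROMPT = """You are given a financial report excerpt.
Extract every numeric indicator it mentions and answer ONLY with JSON of the form:
{"indicators": [{"name": "...", "value": "...", "unit": "..."}]}

Example:
Input: "Revenue grew 12% to 3.4 billion CNY in 2022."
Output: {"indicators": [{"name": "revenue growth", "value": "12", "unit": "%"},
                        {"name": "revenue", "value": "3.4", "unit": "billion CNY"}]}

Input: {report_text}
Output:"""

# str.replace avoids str.format choking on the literal JSON braces above.
prompt = META_PROMPT.replace("{report_text}", "Net profit fell 5% to 120 million CNY.")
```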
2309.09150#13
2309.09150#15
2309.09150
[ "2204.02311" ]
2309.09150#15
Can Large Language Models Understand Real-World Complex Instructions?
Planning. Many studies have designed prompts to mimic human thinking processes, guiding LLMs to perform reasoning and planning (Yao et al. 2023b; Liu et al. 2023b). These prompts often impose restrictions on callable functions, have clear format requirements, offer few-shot samples, and provide long contexts. Therefore, we collect prompts that require LLMs to complete planning tasks based on CN-DBpedia (Xu et al. 2017), a fund knowledge base, and those from Langchain1. Since smaller LLMs have limited planning capabilities (Liu et al. 2023b), we solely evaluate the models' ability to perform single-step planning.
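The following sketch illustrates how a single-step planning response could be checked against such restrictions. The JSON field names ("thought", "function", "arguments") and the allowed-function set are assumptions for illustration — get_entity_info appears in Figure 2, but the benchmark's actual schema may differ.

```python
# Hypothetical validation of a single-step planning response: the output must
# be parsable JSON and must call one of the predefined functions.
import json

ALLOWED_FUNCTIONS = {"get_entity_info", "search_knowledge_base"}  # assumed examples

def check_planning_step(model_output: str) -> dict:
    """Report which format/semantic constraints the planning step satisfies."""
    result = {"parseable": False, "uses_allowed_function": False}
    try:
        step = json.loads(model_output)
    except json.JSONDecodeError:
        return result
    if not isinstance(step, dict):
        return result
    result["parseable"] = True
    result["uses_allowed_function"] = step.get("function") in ALLOWED_FUNCTIONS
    return result

# Example:
# check_planning_step('{"thought": "look up the entity", '
#                     '"function": "get_entity_info", "arguments": ["Fudan University"]}')
```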
2309.09150#14
2309.09150#16
2309.09150
[ "2204.02311" ]
2309.09150#16
Can Large Language Models Understand Real-World Complex Instructions?
1https://www.langchain.com/ Category Tasks #Samples #Format #Task #Input Complex Task Description Extraction Planning Meta. BS(S) Writing(S) 49 52 20 20 23 49 52 20 20 2 35 46 15 20 23 49 48 6 1 2 N/A N/A 2 15 12 125 1070 765 70 82 169 534 166 N/A 25 295 1606 933 70 107 Complex Input Keywords QA Sum. Struture BS(M) Writing(M) 15 89 108 38 52 57 15 N/A N/A 6 50 3 15 N/A N/A N/A 50 35 15 89 108 38 10 48 N/A N/A N/A N/A 36 43 546 25 45 29 31 30 943 881 514 1360 559 656 1579 814 562 1390 31 51 Overall 523 217 239 414 108 256 528 676 Table 1: The statistics of our benchmark. For each task, #Format, #Task, #Input, #Count denote the number of samples covering the criteria Answer format, Task-prescribed phrases, Input-dependent query, and Count limit respectively. Avg TD/IP/Ins Len. denote the average word number of task description, input text and instruction. Meta., BS, SUM. denote the Meta-prompt, Brainstorming, Summarization task respec- tively. (S) and (M) represent single-round and multi-round. N/A denotes that such tasks do not involve corresponding evaluation criteria. Structured Input. Structured text is a common and cru- cial type of user input, due to its well-organized and eas- ily interpretable format. Therefore, we include instructions with: (1) Six structured data types, namely Markdown, La- TeX, SQL, Tree, Python, JSON. (2) Two distinct tasks for their complexity and representativeness:
2309.09150#15
2309.09150#17
2309.09150
[ "2204.02311" ]
2309.09150#17
Can Large Language Models Understand Real-World Complex Instructions?
Path Compose directly evaluates the model's understanding of complex nested data structures, while TextRetrieval is a common application that extracts content meeting specific requirements. (3) Two levels of difficulty, which are categorized based on the length and depth of the structured input. Well-guided Writing. Existing benchmarks (Chia et al. 2023) considering writing ability mainly have the following limitations: (1) They overlook the specific needs users have in real-world scenarios when seeking efficient writing guidance, such as word count, key information, or included hashtags. (2) They fail to consider the iterative nature of user satisfaction, as users may continually provide modification feedback. (3) They are difficult to evaluate automatically. To address these limitations, we collect single-turn complex instructions covering various complex features and multi-turn instructions that reflect realistic revision needs. Detailed Brainstorming. Brainstorming gives an intuitive impression of the chat models.
2309.09150#16
2309.09150#18
2309.09150
[ "2204.02311" ]
2309.09150#18
Can Large Language Models Understand Real-World Complex Instructions?
However, existing evaluation datasets either have overly simple and open instructions that are difficult to evaluate (Li et al. 2023a), or they are excessively tricky with limited discrimination2. In our benchmark, we collect single-turn brainstorming data with detailed requirements and multi-turn brainstorming data that simulate realistic user interactions. Data Evolution The collected complex instructions have two limitations: (1) For those collected from real-world projects, the human-elaborated task descriptions are complex but alike. (2) For those collected from usage logs, many simple instructions are not effectively utilized. Hence, we introduce two perspectives to evolve data, thereby achieving a more robust and reliable evaluation. In-breadth Evolution aims to diversify the collected complex instructions (including three methods: task description relocation, task description paraphrasing, and task emulation). In-depth Evolution aims to complicate the simple instructions to increase the data scale (including two methods: constraints addition and multi-round interaction). The motivation and prompts for each method are detailed in the Appendix. # Evaluation System Criteria We define the following criteria that should be assessed, as they encompass common errors made by models. (1) Count limit: the number of words, sentences, or samples allowed in the response. (2) Answer format: the expected structure or format of the response, such as a parsable JSON format, or a specified format for few-shot samples. (3) Task-prescribed phrases: semantic constraints on the response that are stipulated in the task description, such as predefined functions, primary subjects, or key elements. (4) Input-dependent query: the query should be answered faithfully according to the given input texts. Although Task-prescribed phrases and Input-dependent query both impose content-related constraints on the response, they differ in the information they rely on. The former centers on constraints explicitly stated by the user in the task description, while the latter focuses on constraints implicitly derived from the content of the input text. Evaluation Metrics We propose automated evaluation metrics for the designed criteria, considering various perspectives and difficulty levels.
2309.09150#17
2309.09150#19
2309.09150
[ "2204.02311" ]
2309.09150#19
Can Large Language Models Understand Real-World Complex Instructions?
Each sample s_i = {I_i, a_i, h_i} consists of an instruction I_i, a model answer a_i, and given histories3 h_i = {(I_0, a'_0), ..., (I_{i-1}, a'_{i-1})}. Here, i denotes the round number within multi-turn dialogues. For each sample s, its score for each criterion comprises multiple sub-scores C = {c_1, c_2, ..., c_i}. Each sub-score c_i = f_x(l, a_i, h_i) is determined by a scoring function f_x based on the criterion x and a limit l manually annotated by humans. The limit l can be an integer, a list of keywords, or a referenced string4. Count Limit. We mainly consider four sub-scores: word count score, sentence count score, sample count score,
2309.09150#18
2309.09150#20
2309.09150
[ "2204.02311" ]
2309.09150#20
Can Large Language Models Understand Real-World Complex Instructions?
3To ensure a fair comparison between models, all the model answers in the histories for each sample are the same and provided by GPT-3.5-turbo. 2https://github.com/zhenbench/z-bench 4The annotation process is detailed in the Appendix.
Benchmark | Focus | Avg Ins Len. | Format | Evaluation | Objective
C-Eval | Knowledge | 110 | C | ACC | T
AGIEval | Knowledge | 184 | C | EM/F1 | T
KoLA | Knowledge | 310 | C & O | EM/F1/ACC; BLEU/Rouge | T
WizardLM Testset | Complex Instruction | 62 | O | Preference | F
ToolBench | Planning | N/A | O | Pass Rate; Preference | T; F
AgentBench | Decision Making | N/A | O | Pass Rate | T
HumanEval | Programming | N/A | O | Pass Rate | T
CELLO | Complex Instruction | 676 | O | Four Fine-grained Metrics | T
Table 2: Statistics of existing benchmarks. Avg Ins denotes the average word number in instructions.
2309.09150#19
2309.09150#21
2309.09150
[ "2204.02311" ]
2309.09150#21
Can Large Language Models Understand Real-World Complex Instructions?
C and O denote Close-ended and Open-ended, respectively. Preference refers to evaluation via GPT4. Objective represents whether the evaluation metrics are objective (T) or subjective (F). and revise score. For the word count score5, the criteria can be word-max and word-min. For the scoring function f_word-max, the more the word count exceeds the threshold limit l_c, the lower the score, thus f_word-max is defined as follows: f_word-max(a_i, l_c) = 1 if n(a_i) ≤ l_c, and 1 − |n(a_i) − l_c| / n(a_i) if n(a_i) > l_c. Here, n(a_i) is the valid word count of answer a_i excluding punctuation marks. f_word-min is defined as follows: f_word-min(a_i, l_c) = 1 if n(a_i) ≥ l_c, and n(a_i) / l_c if n(a_i) < l_c. Likewise, the scoring functions for sentence count encompass f_sentence-max, f_sentence-min, and f_sentence-exact. The scoring function for sample count, f_sample-exact, is implemented using regex matching. The limit l_c for the revise score f_revise can be the string longer or shorter. Specifically, the function f_revise(a_i, longer) equals 1 if n(a_i) > n(a_{i-1}), and otherwise it equals 0.
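A minimal sketch of these count-limit sub-scores in Python is shown below. It is our re-implementation of the formulas above, not the authors' released code; a naive regex word count stands in for the jieba/NLTK tokenization mentioned in the paper's footnotes, and all names are illustrative.

```python
# Sketch of the Count Limit sub-scores described above.
import re

def word_count(answer: str) -> int:
    """Count word tokens, excluding punctuation."""
    return len(re.findall(r"\w+", answer))

def f_word_max(answer: str, limit: int) -> float:
    """1 if the answer stays within `limit` words; decays as it exceeds the limit."""
    n = word_count(answer)
    if n <= limit:
        return 1.0
    return 1.0 - abs(n - limit) / n

def f_word_min(answer: str, limit: int) -> float:
    """1 if the answer reaches `limit` words; otherwise the attained fraction."""
    n = word_count(answer)
    if n >= limit:
        return 1.0
    return n / limit

def f_revise(answer: str, previous_answer: str, direction: str) -> float:
    """1 if the revision moved in the requested direction ('longer' or 'shorter')."""
    n, n_prev = word_count(answer), word_count(previous_answer)
    if direction == "longer":
        return 1.0 if n > n_prev else 0.0
    return 1.0 if n < n_prev else 0.0
```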
2309.09150#20
2309.09150#22
2309.09150
[ "2204.02311" ]
2309.09150#22
Can Large Language Models Understand Real-World Complex Instructions?
For each sample, the final Count Limit score S_c is the average of all the sub-scores. Answer Format. This metric has two sub-scores: parseability and keywords. First, if the model output can be parsed in the prescribed format, such as JSON, f_parseability(a_i, json) equals 1; otherwise, it equals 0. However, even in cases where the model output cannot be directly parsed, its ability to learn certain patterns still demonstrates its capacity to follow complex instructions. Consequently, for each sample, we first extract a keywords list l_f = {w_1, w_2, ..., w_i} from the pre-defined formats, which we define
2309.09150#21
2309.09150#23
2309.09150
[ "2204.02311" ]
2309.09150#23
Can Large Language Models Understand Real-World Complex Instructions?
Hence, the subscore fkeywords(ai, lt) is applied where lt is the scoring keywords extracted from the task description. Evaluation of the Benchmark Each sample is labeled by three annotators based on our four criteria. Specifically, we retain samples only when at least two annotators agree on the criteria Count Limit and Output Format Parseability. For criteria involving Keywords Cover- age, we only keep keywords with a consensus from at least two annotators. Statistics of the Benchmark Tab. 1 presents the statistics6 of CELLO. Our dataset has two categories depending on whether the criteria are mainly in the task description or the input text. Different tasks also have different emphases on the criteria, and our dataset covers the four criteria effectively. Tab. 2 compares our benchmark with existing ones. Our benchmark is the first to systematically test LLMsâ ability to follow complex in- structions, which are generally longer and more complex than other benchmarks. The tasks we cover are open-ended, which are more realistic and practical. Our evaluation is also more objective and fine-grained. Experiment Evaluated Models We evaluate a total of 34 models that demonstrated exceptional performance on other bench- marks (Huang et al. 2023; Dubois et al. 2023; Zhong
2309.09150#22
2309.09150#24
2309.09150
[ "2204.02311" ]
2309.09150#24
Can Large Language Models Understand Real-World Complex Instructions?
6Chinese word are counted via https://github.com/fxsjy/jieba. English words are counted via https://www.nltk.org/. # Complex Task Description # Complex Input Extraction Planning Meta. Writing(S) BS(S) Average Keywords QA Sum. Struture Writing(M) BS(M) Average Average Baize-V2-7B Llama2-FlagAlpha Baize-V2-13B Chinese-Alpaca-V1-13B Chinese-Alpaca-V1-7B Llama2-Linly Chinese-Alpaca-V1-33B BELLE CuteGPT Llama2-LinkSoul Llama2-OpenBuddy 0.203 0.205 0.214 0.289 0.264 0.382 0.379 0.400 0.482 0.521 0.585 0.266 0.095 0.334 0.183 0.123 0.170 0.200 0.157 0.529 0.326 0.638 0.300 0.129 0.342 0.209 0.215 0.205 0.283 0.363 0.460 0.431 0.344 Chinese-oriented Models (Continue Pretraining) 0.121 0.304 0.423 0.248 0.143 0.340 0.272 0.317 0.267 0.314 0.464 0.327 0.334 0.438 0.478 0.449 0.506 0.549 0.788 0.540 0.752 0.592 0.504 0.262 0.272 0.209 0.357 0.352 0.664 0.589 0.534 0.652 0.697 0.245 0.547 0.536 0.697 0.612 0.527 0.663 0.734 0.739 0.769 0.697 0.056 0.150 0.070 0.411 0.265 0.196 0.415 0.379 0.294 0.615 0.638 0.045 0.297 0.019 0.226 0.243 0.406 0.221 0.508 0.459 0.684 0.685 0.593 0.354 0.540 0.399 0.465 0.596 0.426 0.458 0.653 0.565 0.711 0.381 0.406 0.433 0.291 0.401 0.352 0.476 0.439 0.626 0.747 0.812 0.558 0.591 0.574 0.480 0.703 0.594 0.609 0.672 0.804 0.909 0.892 0.292 0.370 0.296 0.347 0.391 0.435 0.413 0.489 0.557 0.718 0.748 0.298 0.309 0.318 0.332 0.352 0.381 0.426 0.469 0.553 0.629 0.670 BatGPT-sirius MOSS InternLM ChatGLM2 ChatGLM2-32k Baichuan-chat Qwen ChatGLM 0.011 0.493 0.452 0.539 0.526 0.473 0.544 0.649 0.044 0.310 0.540 0.317 0.399 0.373 0.551 0.522 0.094 0.461 0.493 0.608 0.572 0.471 0.493 0.612 0.352 0.634 0.690 0.664 0.699 0.800 0.646 0.700 Chinese-oriented Models (From Scratch) 0.147 0.508 0.559 0.552 0.577 0.582 0.595 0.658 0.233 0.644 0.622 0.632 0.690 0.794 0.740 0.808 0.046 0.473 0.247 0.589 0.653 0.491 0.486 0.532 0.394 0.396 0.515 0.725 0.686 0.728 0.767 0.742 0.054 0.500 0.399 0.669 0.571 0.701 0.705 0.672 0.294 0.521 0.428 0.590 0.427 0.601 0.575 0.573 0.135 0.696 0.732 0.738 0.758 0.776 0.710 0.735 0.321 0.658 0.877 0.777 0.876 0.857 0.888 0.870 0.207 0.541 0.533 0.681 0.662 0.692 0.689 0.687 0.177 0.525 0.546 0.616 0.620 0.637 0.642 0.673 Llama2-chat-7B Llama2-chat-70B Llama2-chat-13B Vicuna-V1.3-7B WizardLM LongChat-V1-13B LongChat-V1.5-7B LongChat-V1-7B Vicuna-V1.3-13B Vicuna-V1.5-7B Vicuna-V1.3-33B Vicuna-V1.5-13B OpenChat-V3.2 0.495 0.431 0.445 0.485 0.422 0.523 0.489 0.549 0.521 0.544 0.589 0.601 0.629 0.326 0.289 0.329 0.661 0.592 0.591 0.620 0.475 0.625 0.670 0.702 0.721 0.733 0.500 0.484 0.624 0.303 0.281 0.423 0.358 0.424 0.474 0.398 0.385 0.425 0.510 0.358 0.397 0.359 0.748 0.675 0.654 0.664 0.710 0.743 0.506 0.752 0.744 0.754 English-oriented Models 0.157 0.429 0.147 0.415 0.154 0.442 0.180 0.573 0.261 0.565 0.400 0.545 0.608 0.572 0.527 0.593 0.346 0.641 0.711 0.578 0.503 0.653 0.682 0.657 0.725 0.699 0.465 0.472 0.453 0.665 0.856 0.533 0.731 0.805 0.840 0.770 0.835 0.794 0.868 0.135 0.158 0.127 0.651 0.594 0.572 0.687 0.604 0.672 0.739 0.680 0.765 0.771 0.060 0.079 0.108 0.583 0.570 0.532 0.633 0.557 0.582 0.667 0.643 0.723 0.663 0.708 0.719 0.753 0.525 0.519 0.579 0.378 0.692 0.613 0.513 0.627 0.630 0.608 0.541 0.570 0.569 0.674 0.711 0.752 0.747 0.729 0.651 0.693 0.622 0.746 0.761 0.447 0.552 0.458 0.773 0.839 0.810 0.825 0.856 0.869 0.906 0.872 0.896 0.919 0.341 0.371 0.361 0.564 0.582 0.607 0.646 0.661 0.622 0.705 0.658 0.740 0.741 0.385 0.393 0.402 0.569 0.574 0.576 0.609 0.627 0.631 0.641 0.655 0.699 0.720 GPT-3.5-turbo GPT-4 0.709 0.737 0.805 0.879 
0.632 0.666 0.879 0.828 0.854 0.810 0.776 0.784 0.765 0.862 0.795 0.889 0.832 0.911 0.697 0.727 0.879 0.867 0.908 0.910 0.813 0.861 0.794 0.822
2309.09150#23
2309.09150#25
2309.09150
[ "2204.02311" ]
2309.09150#25
Can Large Language Models Understand Real-World Complex Instructions?
Table 3: The performance of models on different tasks. Detailed information of each model is provided in the Appendix. The bold, underlined, and italicized denote the first, second, and third rankings, respectively. et al. 2023), ranging from their model size, supported context length, and instruction tuning data size, as illus- trated in Appendix. These models are categorized into three groups: Chinese-oriented Models (From Scratch, FS), Chinese-oriented Models (Continue Pretraining, CP), and English-oriented Models. The distinction between English and Chinese-oriented Models lies in the composition of their pretraining corpus, whereby the former possesses a small portion and the latter possesses a substantial volume of Chi- nese data. Chinese-oriented Models (FS) are trained entirely from scratch using Chinese corpora. Chinese-oriented Mod- els (CP) continue pretraining on Chinese corpora utilizing an English-oriented base model. eter sizes (13B, 6B), showing that small-scale LLMs can follow complex instructions as well as larger ones. The Chinese-oriented (FS) group and the English-oriented group perform equally well and better than the Chinese- oriented (CC) group, proving that complex instruction com- prehension is not language-dependent. Moreover, under the same base model, vocabulary, and supported context length (e.g. Llama2-7B), the performance of the models varies greatly (e.g. Llama2-chat-7B, Llama2-LinkSoul, and Llama2-FlagAlpha).
2309.09150#24
2309.09150#26
2309.09150
[ "2204.02311" ]
2309.09150#26
Can Large Language Models Understand Real-World Complex Instructions?
This demonstrates a strong correlation between the ability to comprehend complex instructions and the instruction tuning phase. Overall, the current open- source small to medium-scale models exhibit a significant performance gap compared to close-source large-scale mod- els (GPT-3.5-turbo, GPT4). Task-categorized Performance The performance of the models on different tasks is shown in Tab. 3. General Comparisons. Among the models assessed, OpenChat-V3.2 was the best, followed by Vicuna-V1.5- 13B and ChatGLM. These models had different param-
2309.09150#25
2309.09150#27
2309.09150
[ "2204.02311" ]
2309.09150#27
Can Large Language Models Understand Real-World Complex Instructions?
Complex Task Description. Among the data with complex task descriptions, first, four of the top 5 models belong to the English-oriented Models, which demonstrate that the ability # All Model Format Input Task Count Average Chinese-oriented Models (Continue Pretraining) Baize-V2-7B Llama2-FlagAlpha Baize-V2-13B Chinese-Alpaca-V1-13B Chinese-Alpaca-V1-7B Llama2-Linly Chinese-Alpaca-V1-33B BELLE CuteGPT Llama2-LinkSoul Llama2-OpenBuddy 0.409 0.499 0.530 0.603 0.663 0.411 0.655 0.556 0.640 0.662 0.734 0.300 0.218 0.247 0.207 0.224 0.347 0.353 0.408 0.548 0.623 0.627 0.246 0.221 0.302 0.259 0.256 0.374 0.357 0.484 0.576 0.662 0.704 0.466 0.468 0.444 0.458 0.512 0.490 0.576 0.498 0.514 0.603 0.638 0.298 0.309 0.318 0.332 0.352 0.381 0.426 0.469 0.553 0.629 0.670 Chinese-oriented Models (From Scratch) BatGPT-sirius MOSS InternLM ChatGLM2 ChatGLM2-32k Baichuan-chat Qwen ChatGLM 0.154 0.586 0.650 0.620 0.687 0.750 0.764 0.715 0.206 0.514 0.527 0.605 0.563 0.603 0.584 0.628 0.069 0.564 0.524 0.691 0.716 0.586 0.625 0.742 0.357 0.534 0.612 0.568 0.603 0.662 0.570 0.571 0.177 0.525 0.546 0.616 0.620 0.637 0.642 0.673 English-oriented Models Llama2-chat-7B Llama2-chat-70B Llama2-chat-13B Vicuna-V1.3-7B WizardLM LongChat-V1-13B LongChat-V1.5-7B LongChat-V1-7B Vicuna-V1.3-13B Vicuna-V1.5-7B Vicuna-V1.3-33B Vicuna-V1.5-13B OpenChat-V3.2 0.598 0.631 0.640 0.598 0.730 0.723 0.791 0.789 0.766 0.756 0.770 0.786 0.766 0.294 0.318 0.342 0.520 0.525 0.528 0.518 0.574 0.588 0.536 0.609 0.656 0.703 0.306 0.265 0.280 0.599 0.531 0.585 0.589 0.615 0.641 0.698 0.668 0.701 0.776 0.686 0.701 0.674 0.597 0.586 0.507 0.535 0.609 0.554 0.599 0.575 0.640 0.617 0.385 0.393 0.402 0.569 0.574 0.576 0.609 0.627 0.631 0.641 0.655 0.699 0.720 GPT-3.5-turbo GPT-4 0.899 0.911 0.760 0.796 0.799 0.792 0.700 0.724 0.794 0.822
2309.09150#26
2309.09150#28
2309.09150
[ "2204.02311" ]
2309.09150#28
Can Large Language Models Understand Real-World Complex Instructions?
Table 4: The performance of models regarding different criteria. The bold and underlined, and italicized denote the first, second, and third rankings, respectively. to understand complex task descriptions can transfer across different languages. Next, within the same series of models, larger model sizes do not always lead to improvements. Fur- thermore, the best-performing models in each group have a supported context length of less than 4096, suggesting that the supported text context length does not significantly im- pact the ability to comprehend complex task descriptions.
2309.09150#27
2309.09150#29
2309.09150
[ "2204.02311" ]
2309.09150#29
Can Large Language Models Understand Real-World Complex Instructions?
Complex Input Text. For the data with complex input text, first, seven of the top 10 models belong to Chinese-oriented models, which implies that more Chinese training data as- sists the models in comprehending long and noisy Chinese texts. Next, within the same model series, larger scales gen- erally improve performance, while longer supported context length can result in performance drops in many cases. Criteria-categorized Performance As shown in Tab. 4, regarding Answer format, the English-oriented Models sig- nificantly perform better than Chinese-oriented Models. This demonstrates the English-oriented Modelsâ ability to follow few-shot examples and generate code, as well as par- tially explains why their complex instruction-following abil- ity can transfer across languages. Next, for Task-prescribed phrases, two of the top-3 models are Chinese-oriented Mod- Ceval oPT4 GPT-3.5-turbo Baichuan-chat ChatGLM2 LUama2-chat-13B VicunaV1.3-78 Uama2-chat-78 Humane val GAOKAO Figure 3: The performance of models on mainstream benchmarks.
2309.09150#28
2309.09150#30
2309.09150
[ "2204.02311" ]
2309.09150#30
Can Large Language Models Understand Real-World Complex Instructions?
Uama2-chat-78 format â -â Unmaa ct 78 eereeton = vB ara uy ssi se tama tnty â â Longchatvi.s-78 LongChat1.5-78 â openchatv3.2 Openchatv3.2 aa Keywords Figure 4: The performance of LLMs grounded on the same base model (Touvron et al. 2023) regarding different tasks and criteria. els, suggesting that Chinese data helps the models un- derstand Chinese semantic restrictions. Finally, the perfor- mance differences between models for Count limit criteria are not big compared to other criteria, which shows that the models have similar comprehension of numerical concepts. Comparisons between Benchmarks We present the performance7 of representative models on mainstream benchmarks in Fig. 3. First, on benchmarks focusing on Chi- nese knowledge (C-eval, CMMLU, and GAOKAO), smaller models achieve similar or even better performance com- pared to GPT-3.5-turbo. Also, on challenging benchmarks like complex reasoning (BBH, GSM8k) and programming ability (HumanEval), there is a lack of distinction between smaller models. Overall, our benchmark can exhibit more discriminative results. Fine-grained Evaluation Fig. 4 shows the performance of LLMs based on the same base model for different tasks and criteria. Different models have different strengths for different criteria. For example, Llama2-chat-7B is good at understanding format but bad at comprehending Chinese in- put and semantic constraints. Different models also excel in specific tasks. Llama2-chat-7B handles complex task de- scriptions well, but not complex input text.
2309.09150#29
2309.09150#31
2309.09150
[ "2204.02311" ]
2309.09150#31
Can Large Language Models Understand Real-World Complex Instructions?
7https://opencompass.org.cn/leaderboard-llm. Conclusion In this work, we systematically investigate the complex in- structions following ability of LLMs. We establish a frame- work comprising eight features for complex instructions, then construct an evaluation dataset covering nine tasks, and finally propose four evaluation criteria and corresponding metrics to assess LLMsâ complex instruction understanding ability. Furthermore, we conduct extensive experiments to compare the performance of representative models. Acknowledgements This work is supported by Science and Technology Commission (No. 22511105902), National Natural Science Foundation of China (No.62102095), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103). Yanghua Xiao is also a member of Research Group of Com- putational and AI Communication at Institute for Global Communications and Integrated Media, Fudan University. References An, C.; Gong, S.; Zhong, M.; Li, M.; Zhang, J.; Kong, L.; and Qiu, X. 2023.
2309.09150#30
2309.09150#32
2309.09150
[ "2204.02311" ]
2309.09150#32
Can Large Language Models Understand Real-World Complex Instructions?
L-Eval: Instituting Standardized Evalu- ation for Long Context Language Models. arXiv preprint arXiv:2307.11088. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Ad- vances in neural information processing systems, 33: 1877â
2309.09150#31
2309.09150#33
2309.09150
[ "2204.02311" ]
2309.09150#33
Can Large Language Models Understand Real-World Complex Instructions?
1901. Chen, H.; Li, H.; Chen, D.; and Narasimhan, K. 2022. Con- trollable Text Generation with Language Constraints. arXiv preprint arXiv:2212.10466. Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H. P. d. O.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; et al. 2021.
2309.09150#32
2309.09150#34
2309.09150
[ "2204.02311" ]
2309.09150#34
Can Large Language Models Understand Real-World Complex Instructions?
Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Chia, Y. K.; Hong, P.; Bing, L.; and Poria, S. 2023. INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models. arXiv preprint arXiv:2306.04757. Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; et al. 2022.
2309.09150#33
2309.09150#35
2309.09150
[ "2204.02311" ]
2309.09150#35
Can Large Language Models Understand Real-World Complex Instructions?
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Cobbe, K.; Kosaraju, V.; Bavarian, M.; Chen, M.; Jun, H.; Kaiser, L.; Plappert, M.; Tworek, J.; Hilton, J.; Nakano, R.; et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Cui, Y.; Yang, Z.; and Yao, X. 2023. Efficient and Effec- tive Text Encoding for Chinese LLaMA and Alpaca. arXiv preprint arXiv:2304.08177. Ding, N.; Chen, Y.; Xu, B.; Qin, Y.; Zheng, Z.; Hu, S.; Liu, Z.; Sun, M.; and Zhou, B. 2023. Enhancing Chat Lan- guage Models by Scaling High-quality Instructional Con- versations. arXiv preprint arXiv:2305.14233. Dubois, Y.; Li, X.; Taori, R.; Zhang, T.; Gulrajani, I.; Ba, J.; Guestrin, C.; Liang, P.; and Hashimoto, T.
2309.09150#34
2309.09150#36
2309.09150
[ "2204.02311" ]
2309.09150#36
Can Large Language Models Understand Real-World Complex Instructions?
B. 2023. Alpaca- farm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387. Gu, Z.; Zhu, X.; Ye, H.; Zhang, L.; Wang, J.; Jiang, S.; Xiong, Z.; Li, Z.; He, Q.; Xu, R.; et al. 2023. Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation. arXiv preprint arXiv:2306.05783. Guo, B.; Zhang, X.; Wang, Z.; Jiang, M.; Nie, J.; Ding, Y.; Yue, J.; and Wu, Y. 2023. How close is chatgpt to human ex- perts? comparison corpus, evaluation, and detection. arXiv preprint arXiv:2301.07597. Hendrycks, D.; Burns, C.; Basart, S.; Zou, A.; Mazeika, M.; Song, D.; and Steinhardt, J. 2020. Measuring mas- arXiv preprint sive multitask language understanding. arXiv:2009.03300. Honovich, O.; Scialom, T.; Levy, O.; and Schick, T. 2022.
2309.09150#35
2309.09150#37
2309.09150
[ "2204.02311" ]
2309.09150#37
Can Large Language Models Understand Real-World Complex Instructions?
Unnatural instructions: Tuning language models with (al- most) no human labor. arXiv preprint arXiv:2212.09689. Huang, Y.; Bai, Y.; Zhu, Z.; Zhang, J.; Zhang, J.; Su, T.; Liu, J.; Lv, C.; Zhang, Y.; Lei, J.; et al. 2023. C-eval: A multi- level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322. Ji, Y.; Deng, Y.; Gong, Y.; Peng, Y.; Niu, Q.; Ma, B.; and Li, X. 2023.
2309.09150#36
2309.09150#38
2309.09150
[ "2204.02311" ]
2309.09150#38
Can Large Language Models Understand Real-World Complex Instructions?
BELLE: Be Everyoneâ s Large Language model Engine. https://github.com/LianjiaTech/BELLE. Li*, D.; Shao*, R.; Xie, A.; Sheng, Y.; Zheng, L.; Gonzalez, J. E.; Stoica, I.; Ma, X.; ; and Zhang, H. 2023. How Long Can Open-Source LLMs Truly Promise on Context Length? Li, G.; Hammoud, H. A. A.
2309.09150#37
2309.09150#39
2309.09150
[ "2204.02311" ]
2309.09150#39
Can Large Language Models Understand Real-World Complex Instructions?
K.; Itani, H.; Khizbullin, D.; and Ghanem, B. 2023a. Camel: Communicative agents forâ mindâ exploration of large scale language model society. arXiv preprint arXiv:2303.17760. Li, J.; Cheng, X.; Zhao, W. X.; Nie, J.-Y.; and Wen, J.-R. 2023b. HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models. arXiv e-prints, arXivâ
2309.09150#38
2309.09150#40
2309.09150
[ "2204.02311" ]
2309.09150#40
Can Large Language Models Understand Real-World Complex Instructions?
2305. Li, Z.; Zhang, S.; Zhao, H.; Yang, Y.; and Yang, D. 2023c. BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained Transformer. arXiv preprint arXiv:2307.00360. Liang, P.; Bommasani, R.; Lee, T.; Tsipras, D.; Soylu, D.; Yasunaga, M.; Zhang, Y.; Narayanan, D.; Wu, Y.; Kumar, A.; et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110. Liu, N. F.; Lin, K.; Hewitt, J.; Paranjape, A.; Bevilacqua, M.; Petroni, F.; and Liang, P. 2023a.
2309.09150#39
2309.09150#41
2309.09150
[ "2204.02311" ]
2309.09150#41
Can Large Language Models Understand Real-World Complex Instructions?
Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172. Liu, X.; Yu, H.; Zhang, H.; Xu, Y.; Lei, X.; Lai, H.; Gu, Y.; Ding, H.; Men, K.; Yang, K.; et al. 2023b. Agent- arXiv preprint Bench: Evaluating LLMs as Agents. arXiv:2308.03688. Luo, Z.; Xu, C.; Zhao, P.; Sun, Q.; Geng, X.; Hu, W.; Tao, C.; Ma, J.; Lin, Q.; and Jiang, D. 2023.
2309.09150#40
2309.09150#42
2309.09150
[ "2204.02311" ]
2309.09150#42
Can Large Language Models Understand Real-World Complex Instructions?
WizardCoder: Em- powering Code Large Language Models with Evol-Instruct. arXiv preprint arXiv:2306.08568. Mukherjee, S.; Mitra, A.; Jawahar, G.; Agarwal, S.; Palangi, H.; and Awadallah, A. 2023. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707.
2309.09150#41
2309.09150#43
2309.09150
[ "2204.02311" ]
2309.09150#43
Can Large Language Models Understand Real-World Complex Instructions?
Qin, Y.; Liang, S.; Ye, Y.; Zhu, K.; Yan, L.; Lu, Y.; Lin, Y.; Cong, X.; Tang, X.; Qian, B.; et al. 2023. ToolLLM: Facilitating Large Language Models to Master 16000+ Real- world APIs. arXiv preprint arXiv:2307.16789. Richards, T. B. 2023. Auto-GPT: An Autonomous GPT-4 Experiment.
2309.09150#42
2309.09150#44
2309.09150
[ "2204.02311" ]
2309.09150#44
Can Large Language Models Understand Real-World Complex Instructions?
Srivastava, A.; Rastogi, A.; Rao, A.; Shoeb, A. A. M.; Abid, A.; Fisch, A.; Brown, A. R.; Santoro, A.; Gupta, A.; Garriga- Alonso, A.; et al. 2023. Beyond the Imitation Game: Quanti- fying and extrapolating the capabilities of language models. Transactions on Machine Learning Research. Sun, T.; Zhang, X.; He, Z.; Li, P.; Cheng, Q.; Yan, H.; Liu, X.; Shao, Y.; Tang, Q.; Zhao, X.; Chen, K.; Zheng, Y.; Zhou, Z.; Li, R.; Zhan, J.; Zhou, Y.; Li, L.; Yang, X.; Wu, L.; Yin, Z.; Huang, X.; and Qiu, X. 2023a.
2309.09150#43
2309.09150#45
2309.09150
[ "2204.02311" ]
2309.09150#45
Can Large Language Models Understand Real-World Complex Instructions?
MOSS: Training Conver- sational Language Models from Synthetic Data. Sun, W.; Yan, L.; Ma, X.; Ren, P.; Yin, D.; and Ren, Z. 2023b. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. arXiv preprint arXiv:2304.09542. Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; Liang, P.; and Hashimoto, T.
2309.09150#44
2309.09150#46
2309.09150
[ "2204.02311" ]
2309.09150#46
Can Large Language Models Understand Real-World Complex Instructions?
B. 2023. Stan- ford alpaca: An instruction-following llama model. Team, I. 2023. InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities. https://github. com/InternLM/InternLM. Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Wang, G.; Cheng, S.; Yu, Q.; and Liu, C. 2023a. OpenChat: Advancing Open-source Language Models with Imperfect Data.
2309.09150#45
2309.09150#47
2309.09150
[ "2204.02311" ]