Dataset schema (column · dtype · min / max of value or string length):

doi               string   10    / 10
chunk-id          int64    0     / 936
chunk             string   401   / 2.02k
id                string   12    / 14
title             string   8     / 162
summary           string   228   / 1.92k
source            string   31    / 31
authors           string   7     / 6.97k
categories        string   5     / 107
comment           string   4     / 398
journal_ref       string   8     / 194
primary_category  string   5     / 17
published         string   8     / 8
updated           string   8     / 8
references        list
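For orientation, here is a minimal Python sketch of one record's shape, derived from the schema above; the `ChunkRecord` name is illustrative and not part of the dataset itself.

```python
from typing import Dict, List, TypedDict

# The "chunk-id" key contains a hyphen, so the functional TypedDict syntax is used.
ChunkRecord = TypedDict(
    "ChunkRecord",
    {
        "doi": str,                # arXiv id of the source paper, e.g. "2309.07915"
        "chunk-id": int,           # index of the chunk within the paper (0-936)
        "chunk": str,              # chunk text (401-2,020 characters)
        "id": str,                 # "<doi>#<chunk-id>", e.g. "2309.07915#52"
        "title": str,              # paper title
        "summary": str,            # paper abstract
        "source": str,             # PDF URL, e.g. "http://arxiv.org/pdf/2309.07915"
        "authors": str,            # comma-separated author list
        "categories": str,         # e.g. "cs.CL, cs.AI, cs.CV"
        "comment": str,            # arXiv comment field
        "journal_ref": str,        # "null" when absent
        "primary_category": str,   # e.g. "cs.CL"
        "published": str,          # YYYYMMDD
        "updated": str,            # YYYYMMDD
        "references": List[Dict[str, str]],  # e.g. [{"id": "2305.15023"}, ...]
    },
)
```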
2309.07915
52
Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusinol, Ernest Valveny, CV Jawahar, and Dimosthenis Karatzas. Scene text visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 4291–4301, 2019. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. David Chen and William B Dolan. Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies, pp. 190–200, 2011.
2309.07915#52
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
53
Although LLMs demonstrate excellent performance in acquiring, storing, and utilizing knowledge [155], there remain potential issues and unresolved problems. For example, the knowledge acquired by models during training could become outdated or even be incorrect from the start. A simple way to address this is retraining. However, it requires up-to-date data, extensive time, and computing resources. Even worse, it can lead to catastrophic forgetting [156]. Therefore, some researchers [157; 158; 159] try editing LLMs to locate and modify specific knowledge stored within the models. This involves unloading incorrect knowledge while simultaneously acquiring new knowledge. Their experiments show that this method can partially edit factual knowledge, but its underlying mechanism still requires further research. Besides, LLMs may generate content that conflicts with the source or factual information [224], a phenomenon often referred to as hallucination [225]. It is one of the critical reasons why LLMs cannot be widely used in factually rigorous tasks. To tackle this issue, some researchers [160] proposed a metric to measure the level of hallucination and provide developers with an effective reference to evaluate the trustworthiness of LLM outputs. Moreover, some researchers [161; 162] enable LLMs to utilize external tools [94; 226; 227] to avoid incorrect knowledge. Both of these methods can alleviate the impact of hallucinations, but further exploration of more effective approaches is still needed. # 3.1.3 Memory
2309.07864#53
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
53
Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm’s referential dialogue magic, 2023. Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022. Xingyu Chen, Zihan Zhao, Lu Chen, JiaBao Ji, Danyang Zhang, Ao Luo, Yuxuan Xiong, and Kai Yu. WebSRC: A dataset for web-based structural reading comprehension. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 4173–4185, Online and Punta Cana, Dominican Republic, November 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.343. URL https://aclanthology.org/2021.emnlp-main.343.
2309.07915#53
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
54
knowledge. Both of these methods can alleviate the impact of hallucinations, but further exploration of more effective approaches is still needed. # 3.1.3 Memory In our framework, “memory” stores sequences of the agent’s past observations, thoughts and actions, which is akin to the definition presented by Nuxoll et al. [228]. Just as the human brain relies on memory systems to retrospectively harness prior experiences for strategy formulation and decision-making, agents necessitate specific memory mechanisms to ensure their proficient handling of a sequence of consecutive tasks [229; 230; 231]. When faced with complex problems, memory mechanisms help the agent to revisit and apply antecedent strategies effectively. Furthermore, these memory mechanisms enable individuals to adjust to unfamiliar environments by drawing on past experiences.
2309.07864#54
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
54
Xingyu Chen, Zihan Zhao, Lu Chen, Danyang Zhang, Jiabao Ji, Ao Luo, Yuxuan Xiong, and Kai Yu. Websrc: A dataset for web-based structural reading comprehension. arXiv preprint arXiv:2101.09465, 2021b. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
2309.07915#54
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
55
With the expansion of interaction cycles in LLM-based agents, two primary challenges arise. The first pertains to the sheer length of historical records. LLM-based agents process prior interactions in natural language format, appending historical records to each subsequent input. As these records expand, they might surpass the constraints of the Transformer architecture that most LLM-based agents rely on. When this occurs, the system might truncate some content. The second challenge is the difficulty in extracting relevant memories. As agents amass a vast array of historical observations and action sequences, they grapple with an escalating memory burden. This makes establishing connections between related topics increasingly challenging, potentially causing the agent to misalign its responses with the ongoing context. Methods for better memory capability. Here we introduce several methods to enhance the memory of LLM-based agents. • Raising the length limit of Transformers. The first method tries to address or mitigate the inherent sequence length constraints. The Transformer architecture struggles with long sequences due to these intrinsic limits. As sequence length expands, computational demand grows quadratically due to the pairwise token calculations in the self-attention mechanism. Strategies to mitigate these length restrictions encompass text truncation [163; 164; 232], segmenting inputs [233; 234], and emphasizing key portions of text [235; 236; 237]. Some other works modify the attention mechanism to reduce complexity, thereby accommodating longer sequences [238; 165; 166; 167].
2309.07864#55
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
55
Maria Cipollone, Catherine C Schifter, and Rick A Moffat. Minecraft as a creative tool: A case study. International Journal of Game-Based Learning (IJGBL), 4(2):1–14, 2014. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. Visual dialog. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 326–335, 2017. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. A survey on in-context learning, 2023.
2309.07915#55
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
56
• Summarizing memory. The second strategy for amplifying memory efficiency hinges on the concept of memory summarization. This ensures agents effortlessly extract pivotal details from historical interactions. Various techniques have been proposed for summarizing memory. Using prompts, some methods succinctly integrate memories [168], while others emphasize reflective processes to create condensed memory representations [22; 239]. Hierarchical methods streamline dialogues into both daily snapshots and overarching summaries [170]. Notably, specific strategies translate environmental feedback into textual encapsulations, bolstering agents’ contextual grasp for future engagements [169]. Moreover, in multi-agent environments, vital elements of agent communication are captured and retained [171]. • Compressing memories with vectors or data structures. By employing suitable data structures, intelligent agents boost memory retrieval efficiency, facilitating prompt responses to interactions. Notably, several methodologies lean on embedding vectors for memory sections, plans, or dialogue histories [109; 170; 172; 174]. Another approach translates sentences into triplet configurations [173], while some perceive memory as a unique data object, fostering varied interactions [176]. Furthermore, ChatDB [175] and DB-GPT [240] integrate the LLM controller with SQL databases, enabling data manipulation through SQL commands.
2309.07864#56
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
56
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language model pretraining with autoregressive blank infilling. arXiv preprint arXiv:2103.10360, 2021. Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva: Exploring the limits of masked visual representation learning at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 19358–19369, June 2023. Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.
2309.07915#56
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
57
Methods for memory retrieval. When an agent interacts with its environment or users, it is imperative to retrieve the most appropriate content from its memory. This ensures that the agent accesses relevant and accurate information to execute specific actions. An important question arises: How can an agent select the most suitable memory? Typically, agents retrieve memories in an automated manner [170; 174]. A significant approach in automated retrieval considers three metrics: Recency, Relevance, and Importance. The memory score is determined as a weighted combination of these metrics, with memories having the highest scores being prioritized in the model’s context [22]. Some research introduces the concept of interactive memory objects, which are representations of dialogue history that can be moved, edited, deleted, or combined through summarization. Users can view and manipulate these objects, influencing how the agent perceives the dialogue [176]. Similarly, other studies allow for memory operations like deletion based on specific commands provided by users [175]. Such methods ensure that the memory content aligns closely with user expectations. # 3.1.4 Reasoning and Planning
2309.07864#57
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
57
Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723, 2020. Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790, 2023. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, July 2017.
2309.07915#57
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
58
# 3.1.4 Reasoning and Planning Reasoning. Reasoning, underpinned by evidence and logic, is fundamental to human intellectual endeavors, serving as the cornerstone for problem-solving, decision-making, and critical analysis [241; 242; 243]. Deductive, inductive, and abductive reasoning are the forms most commonly recognized in intellectual endeavor [244]. For LLM-based agents, like humans, reasoning capacity is crucial for solving complex tasks [25]. Differing academic views exist regarding the reasoning capabilities of large language models. Some argue that language models acquire reasoning ability during pre-training or fine-tuning [244], while others believe it emerges only after the models reach a certain scale [26; 245]. Specifically, the representative Chain-of-Thought (CoT) method [95; 96] has been demonstrated to elicit the reasoning capacities of large language models by guiding LLMs to generate rationales before outputting the answer. Some other strategies have also been presented to enhance the performance of LLMs, such as self-consistency [97], self-polish [99], self-refine [178], and selection-inference [177], among others. Some studies suggest that the effectiveness of step-by-step reasoning can be attributed to the local statistical structure of training data, with locally structured dependencies between variables yielding higher data efficiency than training on all variables [246].
2309.07864#58
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
58
Wenbo Hu, Yifan Xu, Y Li, W Li, Z Chen, and Z Tu. Bliva: A simple multimodal llm for better handling of text-rich visual questions. arXiv preprint arXiv:2308.09936, 2023. Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023a. Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023b. Drew A. Hudson and Christopher D. Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019.
2309.07915#58
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
59
Planning. Planning is a key strategy humans employ when facing complex challenges. For humans, planning helps organize thoughts, set objectives, and determine the steps to achieve those objectives [247; 248; 249]. Just as with humans, the ability to plan is crucial for agents, and central to this planning module is the capacity for reasoning [250; 251; 252]. This offers a structured thought process for agents based on LLMs. Through reasoning, agents deconstruct complex tasks into more manageable sub-tasks, devising appropriate plans for each [253; 254]. Moreover, as tasks progress, agents can employ introspection to modify their plans, ensuring they align better with real-world circumstances, leading to adaptive and successful task execution. Typically, planning comprises two stages: plan formulation and plan reflection.
2309.07864#59
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
59
Drew A. Hudson and Christopher D. Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019. Huaizu Jiang, Xiaojian Ma, Weili Nie, Zhiding Yu, Yuke Zhu, and Anima Anandkumar. Bongard-hoi: Benchmarking few-shot visual reasoning for human-object interactions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19056–19065, 2022. Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33:2611–2624, 2020. Po-Nien Kung and Nanyun Peng. Do models really learn to follow instructions? an empirical study of instruction tuning. arXiv preprint arXiv:2305.11383, 2023. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning, 2023a.
2309.07915#59
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
60
Typically, planning comprises two stages: plan formulation and plan reflection. • Plan formulation. During the process of plan formulation, agents generally decompose an overarching task into numerous sub-tasks, and various approaches have been proposed in this phase. Notably, some works advocate for LLM-based agents to decompose problems comprehensively in one go, formulating a complete plan at once and then executing it sequentially [98; 179; 255; 256]. In contrast, other studies like the CoT-series employ an adaptive strategy, where they plan and address sub-tasks one at a time, allowing for more fluidity in handling intricate tasks in their entirety [95; 96; 257]. Additionally, some methods emphasize hierarchical planning [182; 185], while others underscore a strategy in which final plans are derived from reasoning steps structured in a tree-like format. The latter approach argues that agents should assess all possible paths before finalizing a plan [97; 181; 184; 258]. While LLM-based agents demonstrate a broad scope of general knowledge, they can occasionally face challenges when tasked with situations that require expert knowledge. Enhancing these agents by integrating them with planners of specific domains has been shown to yield better performance [125; 130; 186; 259].
2309.07864#60
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
60
Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. arXiv preprint arXiv:2306.00890, 2023b. Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, and Yueting Zhuang. Empowering vision-language models to follow interleaved vision-language instructions. arXiv preprint arXiv:2308.04152, 2023c. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pp. 12888–12900. PMLR, 2022.
2309.07915#60
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
61
• Plan reflection. Upon formulating a plan, it’s imperative to reflect upon and evaluate its merits. LLM-based agents leverage internal feedback mechanisms, often drawing insights from pre-existing models, to hone and enhance their strategies and planning approaches [169; 178; 188; 192]. To better align with human values and preferences, agents actively engage with humans, allowing them to rectify some misunderstandings and assimilate this tailored feedback into their planning methodology [108; 189; 190]. Furthermore, they could draw feedback from tangible or virtual surroundings, such as cues from task accomplishments or post-action observations, aiding them in revising and refining their plans [91; 101; 187; 191; 260].
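As an illustration of the reflection loop described above, the sketch below revises a plan from environment feedback; `llm`, `execute`, and the feedback format are assumed placeholders rather than the mechanism of any specific cited system.

```python
from typing import Callable, List

def plan_with_reflection(
    llm: Callable[[str], str],
    execute: Callable[[str], str],   # runs one sub-task and returns an observation
    plan: List[str],
    max_revisions: int = 3,
) -> List[str]:
    """Execute a plan, then let the model revise it based on the observations."""
    for _ in range(max_revisions):
        observations = [execute(step) for step in plan]
        critique = llm(
            "Plan: " + "; ".join(plan) + "\n"
            "Observations: " + "; ".join(observations) + "\n"
            "If the plan succeeded, reply OK. Otherwise reply with a revised plan, one step per line."
        )
        if critique.strip().upper() == "OK":
            break
        plan = [line.strip() for line in critique.splitlines() if line.strip()]
    return plan
```

The same loop accommodates human feedback by swapping the `execute` observations for user comments.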
2309.07864#61
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
61
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023d. Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, and Qi Liu. M3it: A large-scale dataset towards multi-modal multilingual instruction tuning, 2023e. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models, 2023f.
2309.07915#61
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
62
# 3.1.5 Transferability and Generalization Intelligence shouldn’t be limited to a specific domain or task, but rather encompass a broad range of cognitive skills and abilities [31]. The remarkable nature of the human brain is largely attributed to its high degree of plasticity and adaptability. It can continuously adjust its structure and function in response to external stimuli and internal needs, thereby adapting to different environments and tasks. In recent years, plenty of research has indicated that models pre-trained on large-scale corpora can learn universal language representations [36; 261; 262]. Leveraging the power of pre-trained models, with only a small amount of data for fine-tuning, LLMs can demonstrate excellent performance in downstream tasks [263]. There is no need to train new models from scratch, which saves substantial computation resources. However, through this task-specific fine-tuning, the models lack versatility and struggle to generalize to other tasks. Instead of merely functioning as a static knowledge repository, LLM-based agents exhibit dynamic learning ability which enables them to adapt to novel tasks swiftly and robustly [24; 105; 106].
2309.07864#62
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
62
Yunshui Li, Binyuan Hui, ZhiChao Yin, Min Yang, Fei Huang, and Yongbin Li. PaCE: Unified multi-modal dialogue pre-training with progressive and compositional experts. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 13402–13416, Toronto, Canada, July 2023g. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.749. URL https://aclanthology.org/2023.acl-long.749. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. Fangyu Liu, Guy Emerson, and Nigel Collier. Visual spatial reasoning. arXiv preprint arXiv:2205.00363, 2022. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023a.
2309.07915#62
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
63
Unseen task generalization. Studies show that instruction-tuned LLMs exhibit zero-shot generalization without the need for task-specific fine-tuning [24; 25; 105; 106; 107]. With the expansion of model size and corpus size, LLMs gradually exhibit remarkable emergent abilities on unfamiliar tasks [132]. Specifically, LLMs can complete new tasks they did not encounter in the training stage by following instructions based on their own understanding. One implementation is multi-task learning; for example, FLAN [105] finetunes language models on a collection of tasks described via instructions, and T0 [106] introduces a unified framework that converts every language problem into a text-to-text format. Despite being purely a language model, GPT-4 [25] demonstrates remarkable capabilities in a variety of domains and tasks, including abstraction, comprehension, vision, coding, mathematics, medicine, law, understanding of human motives and emotions, and others [31]. It has been observed that prompting choices are critical for appropriate predictions, and that training directly on prompts can improve the models’ robustness in generalizing to unseen tasks [264]. Promisingly, such generalization capability can be further enhanced by scaling up both the model size and the quantity or diversity of training instructions [94; 265].
2309.07864#63
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
63
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. 2023b. Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: Is your multi-modal model an all-around player?, 2023c. Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning. arXiv preprint arXiv:2110.13214, 2021. Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022.
2309.07915#63
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
64
In-context learning. Numerous studies indicate that LLMs can perform a variety of complex tasks through in-context learning (ICL), which refers to the models’ ability to learn from a few examples in the context [195]. Few-shot in-context learning enhances the predictive performance of language models by concatenating the original input with several complete examples as prompts to enrich the context [41]. The key idea of ICL is learning from analogy, which is similar to the learning process of humans [266]. Furthermore, since the prompts are written in natural language, the interaction is interpretable and changeable, making it easier to incorporate human knowledge into LLMs [95; 267]. Unlike the supervised learning process, ICL doesn’t involve fine-tuning or parameter updates, which could greatly reduce the computation costs for adapting the models to new tasks. Beyond text, researchers also explore the potential ICL capabilities in different multimodal tasks [193; 194; 268; 269; 270; 271], making it possible for agents to be applied to large-scale real-world tasks.
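The few-shot prompting described here amounts to concatenating complete, labeled demonstrations with the new query; the sketch below shows this construction with made-up sentiment examples, with no particular model or library assumed.

```python
from typing import List, Tuple

def build_icl_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Concatenate complete (input, output) demonstrations before the new input."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    parts.append(f"Input: {query}\nOutput:")   # the model continues from here
    return "\n\n".join(parts)

# Example usage with toy sentiment demonstrations.
prompt = build_icl_prompt(
    demos=[("The film was wonderful.", "positive"),
           ("The plot made no sense.", "negative")],
    query="I would happily watch it again.",
)
print(prompt)
```

No parameters are updated; adapting to a new task only requires swapping the demonstrations, which is why ICL is so much cheaper than supervised fine-tuning.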
2309.07864#64
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
64
Gen Luo, Yiyi Zhou, Tianhe Ren, Shengxin Chen, Xiaoshuai Sun, and Rongrong Ji. Cheap and quick: Efficient vision-language instruction tuning for large language models. arXiv preprint arXiv:2305.15023, 2023. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. Metaicl: Learning to learn in context. arXiv preprint arXiv:2110.15943, 2021. Ivona Najdenkoska, Xiantong Zhen, and Marcel Worring. Meta learning to bridge vision and language models for multimodal few-shot learning. arXiv preprint arXiv:2302.14794, 2023. OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023.
2309.07915#64
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
65
Continual learning. Recent studies [190; 272] have highlighted the potential of LLMs’ planning capabilities in facilitating continuous learning [196; 197] for agents, which involves the continuous acquisition and updating of skills. A core challenge in continual learning is catastrophic forgetting [273]: as a model learns new tasks, it tends to lose knowledge from previous tasks. Numerous efforts have been devoted to addressing this challenge, which can be broadly separated into three groups: introducing regularization terms that reference the previous model [274; 275; 276; 277], approximating prior data distributions [278; 279; 280], and designing architectures with task-adaptive parameters [281; 198]. LLM-based agents have emerged as a novel paradigm, leveraging the planning capabilities of LLMs to combine existing skills and address more intricate challenges. Voyager [190] attempts to solve progressively harder tasks proposed by the automatic curriculum devised by GPT-4 [25]. By synthesizing complex skills from simpler programs, the agent not only rapidly enhances its capabilities but also effectively counters catastrophic forgetting.
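To illustrate the skill-accumulation idea attributed to Voyager-style agents, here is a toy skill library that stores verified skills and composes them into new ones; the class, its methods, and the example skills are illustrative assumptions, not Voyager's actual implementation.

```python
from typing import Callable, Dict, List

class SkillLibrary:
    """Toy store of named skills; new skills call existing ones, so old ones are never overwritten."""

    def __init__(self) -> None:
        self.skills: Dict[str, Callable[..., str]] = {}

    def add(self, name: str, fn: Callable[..., str]) -> None:
        self.skills[name] = fn          # retained forever: forgetting is avoided by construction

    def compose(self, name: str, steps: List[str]) -> None:
        def composed(*args, **kwargs) -> str:
            return " -> ".join(self.skills[s](*args, **kwargs) for s in steps)
        self.add(name, composed)

lib = SkillLibrary()
lib.add("mine_wood", lambda: "wood")
lib.add("craft_table", lambda: "crafting table")
lib.compose("build_base", ["mine_wood", "craft_table"])
print(lib.skills["build_base"]())   # "wood -> crafting table"
```

Because existing skills are never modified, learning a harder composite task cannot erase what was learned before, which is the sense in which a skill library counters catastrophic forgetting.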
2309.07864#65
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
65
OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. Junting Pan, Ziyi Lin, Yuying Ge, Xiatian Zhu, Renrui Zhang, Yi Wang, Yu Qiao, and Hongsheng Li. Retrieving-to-answer: Zero-shot video question answering with frozen large language models, 2023. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748–8763. PMLR, 2021. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–16. IEEE, 2020.
2309.07915#65
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
66
# Perception Figure 4: Typology of the perception module. Textual Input (§3.2.1). Visual Input (§3.2.2): visual encoders such as ViT [282], VQVAE [283], MobileViT [284], and MLP-Mixer [285]; learnable architectures, either query based (Kosmos [286], BLIP-2 [287], InstructBLIP [288], MultiModal-GPT [289], Flamingo [290], etc.) or projection based (PandaGPT [291], LLaVA [292], MiniGPT-4 [118], etc.). Auditory Input (§3.2.3): cascading manner (AudioGPT [293], HuggingGPT [180], etc.) or transferring visual methods (AST [294], HuBERT [295], X-LLM [296], Video-LLaMA [297], etc.). Other Input (§3.2.4): InternGPT [298], etc.
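To make the figure's "query based" versus "projection based" split concrete, here is a minimal PyTorch sketch of the two connector styles; the dimensions, module choices, and class names are illustrative assumptions rather than the actual architectures of the cited models.

```python
import torch
from torch import nn

class QueryConnector(nn.Module):
    """Query based: a small set of learnable queries cross-attends to frozen image features."""
    def __init__(self, img_dim: int = 1024, llm_dim: int = 4096, n_queries: int = 32) -> None:
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, img_dim))
        self.attn = nn.MultiheadAttention(img_dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(img_dim, llm_dim)

    def forward(self, img_feats: torch.Tensor) -> torch.Tensor:   # (B, n_patches, img_dim)
        q = self.queries.expand(img_feats.size(0), -1, -1)
        out, _ = self.attn(q, img_feats, img_feats)
        return self.proj(out)                                     # (B, n_queries, llm_dim)

class ProjectionConnector(nn.Module):
    """Projection based: every patch feature is linearly mapped into the LLM embedding space."""
    def __init__(self, img_dim: int = 1024, llm_dim: int = 4096) -> None:
        super().__init__()
        self.proj = nn.Linear(img_dim, llm_dim)

    def forward(self, img_feats: torch.Tensor) -> torch.Tensor:
        return self.proj(img_feats)                               # (B, n_patches, llm_dim)

feats = torch.randn(2, 257, 1024)   # e.g. ViT patch features for 2 images
print(QueryConnector()(feats).shape, ProjectionConnector()(feats).shape)
```

The query-based route compresses an image to a fixed number of tokens, while the projection route keeps one token per patch; this trade-off between context length and detail is the main design difference between the two families.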
2309.07864#66
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
66
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’20, pp. 3505–3506, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450379984. doi: 10.1145/3394486.3406703. URL https://doi.org/10.1145/3394486.3406703. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211–252, 2015. Babak Saleh and Ahmed Elgammal. Large-scale classification of fine-art paintings: Learning the right metric on the right feature. arXiv preprint arXiv:1505.00855, 2015.
2309.07915#66
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
67
# 3.2 Perception Both humans and animals rely on sensory organs like eyes and ears to gather information from their surroundings. These perceptual inputs are converted into neural signals and sent to the brain for processing [299; 300], allowing us to perceive and interact with the world. Similarly, it’s crucial for LLM-based agents to receive information from various sources and modalities. This expanded perceptual space helps agents better understand their environment, make informed decisions, and excel in a broader range of tasks, making it an essential development direction. The agent hands this information over to the Brain module for processing through the perception module. In this section, we introduce how to enable LLM-based agents to acquire multimodal perception capabilities, encompassing textual (§ 3.2.1), visual (§ 3.2.2), and auditory inputs (§ 3.2.3). We also consider other potential input forms (§ 3.2.4) such as tactile feedback, gestures, and 3D maps to enrich the agent’s perception domain and enhance its versatility. The typology diagram for the LLM-based agent perception is depicted in Figure 4; a schematic routing sketch follows below. # 3.2.1 Textual Input
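As a schematic of how a perception module might route different modalities to the brain, consider the toy dispatcher below; the encoder functions and the text-only hand-off are placeholders, not any specific model from Figure 4.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Percept:
    modality: str   # "text", "image", or "audio"
    payload: Any

def perceive(inputs: List[Percept], encoders: Dict[str, Callable[[Any], str]]) -> str:
    """Encode each input with the encoder registered for its modality, then hand the result to the brain."""
    encoded = [encoders[p.modality](p.payload) for p in inputs]
    return "\n".join(encoded)   # a single context passed on to the LLM "brain"

encoders = {
    "text": lambda t: t,                                   # text passes through unchanged
    "image": lambda img: f"<image embedding of {img}>",    # placeholder for a visual encoder
    "audio": lambda wav: f"<audio transcript of {wav}>",   # placeholder for a speech model
}
context = perceive([Percept("text", "What is on the table?"), Percept("image", "kitchen.jpg")], encoders)
print(context)
```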
2309.07864#67
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
67
Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge, 2022. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards VQA models that can read. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 8317–8326, 2019. Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018.
2309.07915#67
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
68
# 3.2.1 Textual Input Text is a way to carry data, information, and knowledge, making text communication one of the most important ways humans interact with the world. An LLM-based agent already has the fundamental ability to communicate with humans through textual input and output [114]. In a user’s textual input, aside from the explicit content, there are also beliefs, desires, and intentions hidden behind it. Understanding implied meanings is crucial for the agent to grasp the potential and underlying intentions of human users, thereby enhancing its communication efficiency and quality with users. However, as discussed in § 3.1.1, understanding implied meanings within textual input remains challenging for current LLM-based agents. For example, some works [128; 218; 219; 220] employ reinforcement learning to perceive implied meanings and model feedback to derive rewards. This helps deduce the speaker’s preferences, leading to more personalized and accurate responses from the agent. Additionally, as the agent is designed for use in complex real-world situations, it will inevitably encounter many entirely new tasks. Understanding text instructions for unknown tasks places higher demands on the agent’s text perception abilities. As described in § 3.1.5, an LLM that has undergone instruction tuning [105] can exhibit remarkable zero-shot instruction understanding and generalization abilities, eliminating the need for task-specific fine-tuning. # 3.2.2 Visual Input
2309.07864#68
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
68
Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5238–5248, 2022a. Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5238–5248, 2022b. Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34:200–212, 2021.
2309.07915#68
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
69
# 3.2.2 Visual Input Although LLMs exhibit outstanding performance in language comprehension [25; 301] and multi-turn conversations [302], they inherently lack visual perception and can only understand discrete textual content. Visual input usually contains a wealth of information about the world, including properties of objects, spatial relationships, scene layouts, and more in the agent’s surroundings. Therefore, integrating visual information with data from other modalities can offer the agent a broader context and a more precise understanding [120], deepening the agent’s perception of the environment. To help the agent understand the information contained within images, a straightforward approach is to generate corresponding text descriptions for image inputs, known as image captioning [303; 304; 305; 306; 307]. Captions can be directly linked with standard text instructions and fed into the agent. This approach is highly interpretable and doesn’t require additional training for caption generation, which can save a significant amount of computational resources. However, caption generation is a low-bandwidth method [120; 308], and it may lose a lot of potential information during the conversion process. Furthermore, the agent’s focus on images may introduce biases.
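To make the captioning-as-interface idea concrete, here is a minimal sketch assuming Hugging Face transformers pipelines; the model names, the image path, and the prompt template are illustrative placeholders rather than any specific agent's implementation.

```python
# A minimal sketch of the captioning-as-interface idea: reduce an image to a short
# caption, then splice the caption into the agent's textual prompt.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
llm = pipeline("text-generation", model="gpt2")

def perceive_and_act(image_path: str, instruction: str) -> str:
    # Low-bandwidth perception: the whole image becomes one sentence of text.
    caption = captioner(image_path)[0]["generated_text"]
    prompt = f"Scene description: {caption}\nInstruction: {instruction}\nAction:"
    return llm(prompt, max_new_tokens=32)[0]["generated_text"]

print(perceive_and_act("kitchen.jpg", "Find something to drink."))
```

The single caption line is exactly where the "low-bandwidth" loss described above occurs: any detail the captioner omits is invisible to the LLM.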
2309.07864#69
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
69
Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022a. Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. DiffusionDB: A large-scale prompt gallery dataset for text-to-image generative models. arXiv:2210.14896 [cs], 2022b. URL https://arxiv.org/abs/2210.14896. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners, 2022.
2309.07915#69
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
70
Inspired by the excellent performance of transformers [309] in natural language processing, researchers have extended their use to the field of computer vision. Representative works like ViT/VQVAE [282; 283; 284; 285; 310] have successfully encoded visual information using transformers. Researchers first divide an image into fixed-size patches and then treat these patches, after linear projection, as input tokens for Transformers [292]. In the end, by calculating self-attention between tokens, they are able to integrate information across the entire image, resulting in a highly effective way to perceive visual content. Therefore, some works [311] try to combine the image encoder and LLM directly to train the entire model in an end-to-end way. While the agent can achieve remarkable visual perception abilities, it comes at the cost of substantial computational resources.
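As a rough illustration of this patch-based encoding (a sketch, not the actual ViT implementation), the PyTorch snippet below cuts an image into 16×16 patches, linearly projects them into tokens, and applies self-attention across all patches; all dimensions are arbitrary.

```python
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)          # (batch, channels, height, width)
patch_size, embed_dim = 16, 256

# Cut into non-overlapping 16x16 patches and flatten each: (1, 196, 3*16*16)
patches = image.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, 3 * patch_size * patch_size)

project = nn.Linear(3 * patch_size * patch_size, embed_dim)   # linear projection
tokens = project(patches)                                     # (1, 196, 256)

# Global self-attention mixes information across every patch of the image.
attention = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)
mixed, _ = attention(tokens, tokens, tokens)
print(mixed.shape)                            # torch.Size([1, 196, 256])
```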
2309.07864#70
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
70
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6. Junbin Xiao, Xindi Shang, Angela Yao, and Tat-Seng Chua. Next-qa: Next phase of question-answering to explaining temporal actions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9777–9786, 2021.
2309.07915#70
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
71
Extensively pre-trained visual encoders and LLMs can greatly enhance the agent’s visual perception and language expression abilities [286; 312]. Freezing one or both of them during training is a widely adopted paradigm that achieves a balance between training resources and model performance [287]. However, LLMs cannot directly understand the output of a visual encoder, so it’s necessary to convert the image encoding into embeddings that LLMs can comprehend. In other words, it involves aligning the visual encoder with the LLM. This usually requires adding an extra learnable interface layer between them. For example, BLIP-2 [287] and InstructBLIP [288] use the Querying Transformer (Q-Former) module as an intermediate layer between the visual encoder and the LLM [288]. Q-Former is a transformer that employs learnable query vectors [289], giving it the capability to extract language-informative visual representations. It can provide the most valuable information to the LLM, reducing the agent’s burden of learning visual-language alignment and thereby mitigating the issue of catastrophic forgetting. At the same time, some researchers adopt a computationally efficient method by using a single projection layer to achieve visual-text
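The sketch below is a heavily simplified, hypothetical version of such a query-based interface, not BLIP-2's actual Q-Former code: a small set of learnable query vectors cross-attends to frozen image features, and the result is projected to the LLM's embedding width. All module names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class QueryInterface(nn.Module):
    def __init__(self, num_queries=32, vision_dim=1024, hidden_dim=768, llm_dim=4096):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, hidden_dim))  # learnable queries
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.to_llm = nn.Linear(hidden_dim, llm_dim)     # align with the LLM's embedding space

    def forward(self, image_features):                   # (batch, num_patches, vision_dim)
        batch = image_features.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        kv = self.vision_proj(image_features)
        out, _ = self.cross_attn(q, kv, kv)              # queries extract visual information
        return self.to_llm(out)                          # (batch, num_queries, llm_dim)

visual_tokens = QueryInterface()(torch.randn(2, 257, 1024))   # fake frozen-encoder output
print(visual_tokens.shape)   # torch.Size([2, 32, 4096]) — prepended to the text embeddings
```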
2309.07864#71
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
71
Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5288–5296, 2016. Zhiyang Xu, Ying Shen, and Lifu Huang. MultiInstruct: Improving multi-modal zero-shot learning via instruction tuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 11445–11465, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.641. URL https://aclanthology.org/2023.acl-long.641. Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Just ask: Learning to answer questions from millions of narrated videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1686–1697, 2021.
2309.07915#71
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
72
the issue of catastrophic forgetting. At the same time, some researchers adopt a computationally efficient method by using a single projection layer to achieve visual-text alignment, reducing the need for training additional parameters [118; 291; 312]. Moreover, the projection layer can effectively integrate with the learnable interface to adapt the dimensions of its outputs, making them compatible with LLMs [296; 297; 313; 314].
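A minimal sketch of this single-projection-layer alignment is given below, with illustrative dimensions; in practice the vision encoder and the LLM would be real frozen models rather than the random tensors used here.

```python
import torch
import torch.nn as nn

vision_dim, llm_dim = 1024, 4096
projection = nn.Linear(vision_dim, llm_dim)        # the only trainable module

image_features = torch.randn(1, 257, vision_dim)   # stand-in for frozen vision-encoder output
text_embeddings = torch.randn(1, 12, llm_dim)      # stand-in for frozen LLM token embeddings

# Projected visual tokens are concatenated with the text embeddings and fed to the LLM.
llm_inputs = torch.cat([projection(image_features), text_embeddings], dim=1)
print(llm_inputs.shape)                            # torch.Size([1, 269, 4096])
```

Only the projection's weights require gradients, which is why this route is so much cheaper than end-to-end training.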
2309.07864#72
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
72
Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, and Idan Szpektor. What you see is what you read? improving text-image alignment evaluation. arXiv preprint arXiv:2305.10400, 2023. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2, 2014. Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pp. 69–85. Springer, 2016.
2309.07915#72
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
73
Video input consists of a series of continuous image frames. As a result, the methods used by agents to perceive images [287] may be applicable to the realm of videos, allowing the agent to have good perception of video inputs as well. Compared to image information, video information adds a temporal dimension. Therefore, the agent’s understanding of the relationships between different frames in time is crucial for perceiving video information. Some works like Flamingo [290; 315] ensure temporal order when understanding videos using a mask mechanism. The mask mechanism restricts the agent’s view to only access visual information from frames that occurred earlier in time when it perceives a specific frame in the video. # 3.2.3 Auditory Input
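One way to picture such a temporal mask (a simplified sketch, not Flamingo's actual masking code) is an additive attention mask that lets each frame's tokens attend only to frames at or before its own timestamp:

```python
import torch

num_frames, tokens_per_frame = 4, 3
seq_len = num_frames * tokens_per_frame

# Frame index of every token in the flattened sequence: [0,0,0,1,1,1,2,2,2,3,3,3]
frame_id = torch.arange(seq_len) // tokens_per_frame

# allowed[i, j] is True when token i may attend to token j (same frame or an earlier one)
allowed = frame_id.unsqueeze(1) >= frame_id.unsqueeze(0)
attn_mask = torch.zeros(seq_len, seq_len).masked_fill(~allowed, float("-inf"))

print(attn_mask[3])   # a token of frame 1: 0.0 for frames 0-1, -inf for frames 2-3
```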
2309.07864#73
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
73
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6720–6731, 2019. Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. What matters in training a gpt4-style language model with multimodal inputs? arXiv preprint arXiv:2307.02469, 2023. Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, and Tat-Seng Chua. Transfer visual prompt generator across llms. CoRR, abs/23045.01278, 2023a. URL https://doi.org/10. 48550/arXiv.2305.01278. Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5317–5327, 2019.
2309.07915#73
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
74
# 3.2.3 Auditory Input Undoubtedly, auditory information is a crucial component of world information. When an agent possesses auditory capabilities, it can improve its awareness of interactive content, the surrounding environment, and even potential dangers. Indeed, there are numerous well-established models and approaches [293; 316; 317] for processing audio as a standalone modality. However, these models often excel at specific tasks. Given the excellent tool-using capabilities of LLMs (which will be discussed in detail in §3.3), a very intuitive idea is that the agent can use LLMs as control hubs, invoking existing toolsets or model repositories in a cascading manner to perceive audio information. For instance, AudioGPT [293] makes full use of the capabilities of models like FastSpeech [317], GenerSpeech [316], Whisper [316], and others [318; 319; 320; 321; 322], which have achieved excellent results in tasks such as Text-to-Speech, Style Transfer, and Speech Recognition.
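The control-hub pattern can be sketched as a small tool registry plus a routing step. In the toy example below, the tool functions and the routing heuristic are placeholders standing in for real speech models and for the LLM's tool-selection step; it is not AudioGPT's actual API.

```python
# A toy sketch of the "LLM as control hub" pattern for audio: a registry of
# specialised audio tools, and a routing step that picks which one to invoke.
def transcribe(path: str) -> str:           # placeholder for a speech-recognition model
    return f"<transcript of {path}>"

def synthesize(text: str) -> str:           # placeholder for a text-to-speech model
    return f"<waveform for '{text}'>"

TOOLS = {"speech_recognition": transcribe, "text_to_speech": synthesize}

def route(request: str) -> str:
    # Stand-in for the LLM's decision: a real system would have the LLM read the
    # tool descriptions and emit the name of the tool to call.
    return "speech_recognition" if "transcribe" in request.lower() else "text_to_speech"

request = "Please transcribe meeting.wav"
tool = TOOLS[route(request)]
print(tool("meeting.wav"))
```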
2309.07864#74
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
74
Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models, 2023b. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.

# A RELATED WORK

A.1 VISION-LANGUAGE PRETRAINING

| Model | Multi-Image Inputs | Multi-modal Instruction Tuning | Text-to-Image Reference |
|---|---|---|---|
| Flamingo | ✓ | ✗ | ✗ |
| Meta learner | ✓ | ✗ | ✗ |
| BLIP-2 | ✗ | ✗ | ✗ |
| LLAVA | ✗ | ✓ | ✗ |
| MiniGPT-4 | ✗ | ✓ | ✗ |
| InstructBLIP | ✗ | ✓ | ✗ |
| Shikra | ✗ | ✓ | ✓ |
| Kosmos-1 | ✓ | ✗ | ✗ |
| Otter | ✓ | ✓ | ✗ |
| MMICL | ✓ | ✓ | ✓ |

Table 7: Summary of Vision-Language Pre-Trained Models.
2309.07915#74
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
75
An audio spectrogram provides an intuitive representation of the frequency spectrum of an audio signal as it changes over time [323]. For a segment of audio data over a period of time, it can be abstracted into a finite-length audio spectrogram. An audio spectrogram has a 2D representation, which can be visualized as a flat image. Hence, some research efforts [294; 295] aim to migrate perceptual methods from the visual domain to audio. AST (Audio Spectrogram Transformer) [294] employs a Transformer architecture similar to ViT to process audio spectrogram images. By segmenting the audio spectrogram into patches, it achieves effective encoding of audio information. Moreover, some researchers [296; 297] have drawn inspiration from the idea of freezing encoders to reduce training time and computational costs. They align audio encoding with data encoding from other modalities by adding the same learnable interface layer. # 3.2.4 Other Input
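To illustrate the spectrogram-as-image idea (a sketch in the spirit of AST, not its implementation), the snippet below turns a synthetic waveform into a magnitude spectrogram and slices it into 16×16 patches that a ViT-style encoder could consume; the signal and all sizes are arbitrary.

```python
import torch

waveform = torch.randn(16000)                      # 1 second of synthetic 16 kHz audio
spec = torch.stft(waveform, n_fft=400, hop_length=160,
                  window=torch.hann_window(400), return_complex=True)
spectrogram = spec.abs()                           # (freq_bins, time_frames) = (201, 101)

patch = 16
freq_bins = (spectrogram.size(0) // patch) * patch  # crop to a multiple of the patch size
frames = (spectrogram.size(1) // patch) * patch
patches = (spectrogram[:freq_bins, :frames]
           .unfold(0, patch, patch).unfold(1, patch, patch)
           .reshape(-1, patch * patch))            # one row per 16x16 spectrogram patch
print(patches.shape)                               # torch.Size([72, 256])
```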
2309.07864#75
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
75
Table 7: Summary of Vision-Language Pre-Trained Models. Our work is inspired by recent vision-language pre-training works (Zhu et al., 2023; Liu et al., 2023b; Li et al., 2022; 2023d), which have been proven effective for aligning visual inputs and frozen LLMs to obtain cross-modal generalization ability. BLIP-2 BLIP-2 (Li et al., 2023d) bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. InstructBLIP InstructBLIP (Dai et al., 2023) performs vision-language instruction tuning based on the pre-trained BLIP-2 models with converted multi-modal datasets and the LLaVA (Liu et al., 2023b) dataset generated by GPT-4. MiniGPT-4 MiniGPT-4 (Zhu et al., 2023) aligns a CLIP visual encoder with a frozen Vicuna (Chiang et al., 2023) with an artificially collected dialog dataset
2309.07915#75
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
76
time and computational costs. They align audio encoding with data encoding from other modalities by adding the same learnable interface layer. # 3.2.4 Other Input As mentioned earlier, many studies have looked into perception units for text, visual, and audio. However, LLM-based agents might be equipped with richer perception modules. In the future, they could perceive and understand diverse modalities in the real world, much like humans. For example, agents could have unique touch and smell organs, allowing them to gather more detailed information when interacting with objects. At the same time, agents can also have a clear sense of the temperature, humidity, and brightness in their surroundings, enabling them to take environment-aware actions. Moreover, by efficiently integrating basic perceptual abilities like vision, text, and light sensitivity, agents can develop various user-friendly perception modules for humans. InternGPT [298] introduces pointing instructions. Users can interact with specific, hard-to-describe portions of an image by using gestures or moving the cursor to select, drag, or draw. The addition of pointing instructions helps provide more precise specifications for individual text instructions. Building upon this, agents have the potential to perceive more complex user inputs drawn from, for example, eye-tracking in AR/VR devices, body motion capture, and even brainwave signals in brain-computer interaction.
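As a toy illustration of how a pointing instruction might be packaged (the data structure and prompt format below are hypothetical, not InternGPT's interface), a user-selected region is combined with the text command before being handed to the agent:

```python
from dataclasses import dataclass

@dataclass
class PointingInstruction:
    image_path: str
    region: tuple          # (x_min, y_min, x_max, y_max) selected by cursor or gesture
    text: str

def to_prompt(p: PointingInstruction) -> str:
    x0, y0, x1, y1 = p.region
    return (f"Image: {p.image_path}\n"
            f"User selected the region ({x0},{y0})-({x1},{y1}).\n"
            f"Instruction: {p.text}")

print(to_prompt(PointingInstruction("street.jpg", (40, 60, 180, 220), "What is this object?")))
```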
2309.07864#76
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
76
Shikra. Shikra (Chen et al., 2023) is a VLM that can handle spatial-coordinate inputs and outputs in natural language, which makes it excel at referential dialogue and general vision-language tasks, resulting in outstanding performance. However, comparatively little work focuses on VLMs with multi-image inputs.

Flamingo. Frozen (Tsimpoukelli et al., 2021) achieves multi-visual inputs based on self-attention over images but performs poorly on downstream tasks. Flamingo supports Few-Shot Learning (FSL) in VLMs via ICL by leveraging its robust capability to handle multi-visual inputs, and it uses cross-attention instead of self-attention to get better performance. However, it still cannot explicitly point to specific images, so a somewhat hacky cross-attention mask is introduced.

Kosmos-1. Kosmos-1 (Huang et al., 2023a) is trained from scratch on billion-scale multi-modal corpora, including interleaved text-image web page data, image-text captions, and language-only instruction tuning data. It can perform multi-modal Few-Shot Learning and Chain-of-Thought reasoning, thereby achieving formidable performance.

Otter. Otter (Li et al., 2023a) is an open-source implementation of Flamingo trained with multi-modal in-context instruction tuning data.
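A recurring theme above is whether a model can explicitly refer to individual images in a multi-image prompt. The sketch below shows one generic way to build such an interleaved prompt with indexed image placeholders; the `[IMGi]` tokens and the helper function are illustrative assumptions, not the API of any of the models discussed.

```python
from typing import List, Tuple, Union

# An interleaved multi-modal prompt is a sequence of text pieces and image
# references. Giving every image an explicit index lets the text refer back to
# a specific image ("image 0", "image 1"), which an anonymous set of images
# attended to by cross-attention cannot express.
Segment = Union[str, Tuple[str, int]]  # raw text, or ("image", index)


def render_prompt(segments: List[Segment]) -> str:
    """Flatten the segments into one string with [IMGi] placeholders.

    A real VLM would swap each placeholder for visual embeddings; here we only
    show the textual skeleton.
    """
    parts = []
    for seg in segments:
        if isinstance(seg, str):
            parts.append(seg)
        else:
            _, idx = seg
            parts.append(f"[IMG{idx}]")
    return " ".join(parts)


prompt = render_prompt([
    "Image 0 is", ("image", 0),
    "Image 1 is", ("image", 1),
    "Question: what changed between image 0 and image 1? Answer:",
])
print(prompt)
```

Indexing the images in the text is what allows a question like "what changed between image 0 and image 1?" to be unambiguous.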
2309.07915#76
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
77
Finally, a human-like LLM-based agent should possess a holistic awareness of its broader environment. At present, numerous mature and widely adopted hardware devices can assist agents in accomplishing this. Lidar [324] can create 3D point-cloud maps to help agents detect and identify objects in their surroundings. GPS [325] can provide accurate location coordinates and can be integrated with map data. Inertial Measurement Units (IMUs) can measure and record the three-dimensional motion of objects, offering details about an object's speed and direction. However, these sensory data are complex and cannot be directly understood by LLM-based agents. Exploring how agents can perceive more comprehensive inputs is a promising direction for future work.

# 3.3 Action
2309.07864#77
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
77
Otter. Otter (Li et al., 2023a) is an open-source implementation of Flamingo trained with multi-modal in-context instruction tuning data.

Meta learner. Najdenkoska et al. (2023) use a meta-learning objective to train an adapter that aggregates multiple image features, so that the original VLM together with the adapter becomes a better few-shot learner.

IN-CONTEXT LEARNING. Enabling ICL in pre-trained language models (PLMs) has been well explored. MetaICL (Min et al., 2021) proposes a meta-training framework that tunes a PLM to perform in-context learning on a large set of training tasks. LM-BFF (Gao et al., 2020) studies few-shot fine-tuning of PLMs. However, ICL in VLMs is still less explored; recent VLM works mainly focus on zero-shot evaluation with single-image input.

# B MULTI-MODAL ICL DATA

We construct two training datasets, text-image interleaved data and in-context learning data, for the text-image relationship challenge and the image-image relationship challenge, respectively. In this section, we will cover the data resources.
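As a rough illustration of what an in-context learning instance for a VLM can look like, the helper below concatenates a few solved exemplars with an unsolved query. The template wording and placeholder tokens are assumptions chosen for illustration, not the exact format of the MIC dataset.

```python
from typing import Dict, List


def build_icl_prompt(exemplars: List[Dict[str, str]], query: Dict[str, str]) -> str:
    """Concatenate k solved exemplars followed by the unsolved query.

    Each exemplar is {"image": <placeholder>, "question": ..., "answer": ...}.
    The model is expected to infer the task from the exemplars and complete
    the final answer.
    """
    lines = []
    for i, ex in enumerate(exemplars):
        lines.append(f"Image {i} is {ex['image']}.")
        lines.append(f"Question: {ex['question']} Answer: {ex['answer']}")
    q_idx = len(exemplars)
    lines.append(f"Image {q_idx} is {query['image']}.")
    lines.append(f"Question: {query['question']} Answer:")
    return "\n".join(lines)


prompt = build_icl_prompt(
    exemplars=[
        {"image": "[IMG0]", "question": "What animal is shown?", "answer": "a dog"},
        {"image": "[IMG1]", "question": "What animal is shown?", "answer": "a cat"},
    ],
    query={"image": "[IMG2]", "question": "What animal is shown?"},
)
print(prompt)
```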
2309.07915#77
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
78
# 3.3 Action

Figure 5: Typology of the action module.
- Textual Output (§3.3.1)
- Tools (§3.3.2)
  - Learning tools: Toolformer [92], TALM [326], InstructGPT [24], Clarebout et al. [327], etc.
  - Using tools: WebGPT [90], OpenAGI [211], Visual ChatGPT [328], SayCan [179], etc.
  - Making tools: LATM [329], CREATOR [330], SELF-DEBUGGING [331], etc.
- Embodied Action (§3.3.3)
  - LLM-based embodied actions: SayCan [179], EmbodiedGPT [121], InstructRL [332], Lynch et al. [333], Voyager [190], AlphaBlock [334], DEPS [183], LM-Nav [335], NavGPT [336], etc.
  - Prospects for embodied action: MineDojo [337], Kanitscheider et al. [338], DECKARD [339], Sumers et al. [340], etc.
2309.07864#78
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
78
| Task | Dataset | Used | #Train | #Val | #Test | License |
| --- | --- | --- | --- | --- | --- | --- |
| Captioning | MS COCO (Lin et al., 2014) | Yes | 566,747 | 25,010 | 25,010 | Custom |
| Captioning | DiffusionDB (Wang et al., 2022b) | Yes | 19,963 | 0 | 0 | Unknown |
| Captioning | Flickr (Young et al., 2014) | Yes | 144,896 | 768 | 768 | Unknown |
| Captioning | NoCaps (Agrawal et al., 2019) | Yes | 0 | 0 | 4,500 | Unknown |
| Classification | MiniImage (Russakovsky et al., 2015) | Yes | 38,400 | 9,600 | 12,000 | Non-commercial |
| VQA | VQA v2 (Goyal et al., 2017) | Yes | 30,000 | 30,000 | 0 | CC-BY 4.0 |
| VQA | ST-VQA (Biten et al., 2019) | Yes | 26,074 | 0 | 4,070 | Unknown |
| VQA | Text-VQA (Singh et al., 2019) | Yes | 27,113 | 0 | 5,734 | CC BY 4.0 |
| VQA | NLVR2 (Suhr et al., 2018) | Yes | 86,373 | 6,982 | 6,967 | Unknown |
| VQA | RefCOCO (Yu et al., 2016) | Yes | 26,074 | 0 | 4,070 | Unknown |
| KVQA | OK-VQA (Marino et al., 2019) | Yes | 9,009 | 5,046 | 0 | Unknown |

Reasoning: GQA (Hudson & Manning, 2019), VCR (Zellers et al., 2019), Winoground
2309.07915#78
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
79
Figure 5: Typology of the action module. After humans perceive their environment, their brains integrate, analyze, and reason with the perceived information and make decisions. Subsequently, they employ their nervous systems to control their bodies, enabling adaptive or creative actions in response to the environment, such as engaging in conversation, evading obstacles, or starting a fire. When an agent possesses a brain-like structure with capabilities of knowledge, memory, reasoning, planning, and generalization, as well as multimodal perception, it is also expected to possess a diverse range of actions akin to humans to respond to its surrounding environment. In the construction of the agent, the action module receives action sequences sent by the brain module and carries out actions to interact with the environment. As Figure 5 shows, this section begins with textual output (§ 3.3.1), which is the inherent capability of LLM-based agents. Next we talk about the tool-using capability of LLM-based agents (§ 3.3.2), which has proved effective in enhancing their versatility and expertise. Finally, we discuss equipping the LLM-based agent with embodied action to facilitate its grounding in the physical world (§ 3.3.3).

# 3.3.1 Textual Output
2309.07864#79
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07864
80
# 3.3.1 Textual Output

As discussed in § 3.1.1, the rise and development of Transformer-based generative large language models have endowed LLM-based agents with inherent language generation capabilities [132; 213]. The text they generate excels in aspects such as fluency, relevance, diversity, and controllability [127; 214; 134; 216]. Consequently, LLM-based agents can be exceptionally strong language generators.

# 3.3.2 Tool Using

Tools are extensions of the capabilities of their users. When faced with complex tasks, humans employ tools to simplify task-solving and enhance efficiency, freeing up time and resources. Similarly, agents have the potential to accomplish complex tasks more efficiently and with higher quality if they learn to use tools [94].
2309.07864#80
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
80
Table 8: Detailed task descriptions and statistics of our instruction tuning tasks, including all datasets in all types of tasks. The column “Used” indicates whether we use this dataset in the multi-modal in-context tuning stage. # C DATA RESOURCE The data resource used in constructing the MIC dataset is displayed in Fig. 6. Our training dataset comes from 8 task categories and 16 datasets. Image Captioning aims to produce descriptions of the given images according to different needs. Our training dataset includes MS COCO (Lin et al., 2014), DiffusionDB (Wang et al., 2022b), and Flickr 30K (Young et al., 2014). Knowledgeable Visual Question Answering (KVQA) requires the model to make use of commonsense knowledge outside the input image to answer questions. Our training dataset includes OK-VQA (Marino et al., 2019). Image Question Answering (IQA) requires the model to answer the questions based on the image correctly. Our training dataset includes VQAv2 (Goyal et al., 2017), ST-VQA (Biten et al., 2019), Text-VQA (Singh et al., 2019), WikiART (Saleh & Elgammal, 2015) and RefCOCO (Yu et al., 2016).
2309.07915#80
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
81
LLM-based agents have limitations in some aspects, and the use of tools can strengthen the agents' capabilities. First, although LLM-based agents have a strong knowledge base and expertise, they don't have the ability to memorize every piece of training data [341; 342]. They may also fail to steer toward the correct knowledge due to the influence of contextual prompts [226], or even generate hallucinated knowledge [208]. Coupled with the lack of corpora, training data, and tuning for specific fields and scenarios, agents' expertise is also limited when specializing in specific domains [343]. Specialized tools enable LLMs to enhance their expertise, adapt to domain knowledge, and be more suitable for domain-specific needs in a pluggable form. Furthermore, the decision-making process of LLM-based agents lacks transparency, making them less trustworthy in high-risk domains such as healthcare and finance [344]. Additionally, LLMs are susceptible to adversarial attacks [345], and their robustness against slight input modifications is inadequate. In contrast, agents that accomplish tasks with the assistance of tools exhibit stronger interpretability and robustness. The execution process of tools can reflect the agents' approach to addressing complex requirements and enhance the credibility of their decisions. Moreover, because tools are specifically designed for their respective usage scenarios, agents utilizing such tools are better equipped to handle slight input modifications and are more resilient against adversarial attacks [94].
2309.07864#81
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
81
Video Question Answering (VideoQA) requires the model to answer questions based on the video correctly. We extract eight frames per video as visual inputs for Video QA tasks. Our training dataset includes MSRVTT-QA (Xu et al., 2016).

Figure 6: Illustration of the data resources used to construct the MIC dataset. It consists of 11 tasks and 33 different datasets. The held-in datasets are indicated in white and the held-out datasets in yellow.
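The eight-frame sampling described above can be sketched as follows. This is a generic OpenCV-based implementation written for illustration (uniform sampling across the clip is an assumption), not the authors' released preprocessing code.

```python
import cv2  # OpenCV; any video decoding library would work similarly


def sample_frames(video_path: str, num_frames: int = 8):
    """Uniformly sample `num_frames` RGB frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        cap.release()
        raise ValueError(f"Could not read frame count from {video_path}")
    # Evenly spaced frame indices across the whole clip.
    step = (total - 1) / max(num_frames - 1, 1)
    indices = [int(round(i * step)) for i in range(num_frames)]
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame_bgr = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames  # each frame is then treated as one image input to the VLM
```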
2309.07915#81
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
82
LLM-based agents not only require the use of tools, but are also well-suited for tool integration. Leveraging the rich world knowledge accumulated through the pre-training process and CoT prompting, LLMs have demonstrated remarkable reasoning and decision-making abilities in complex interactive environments [97], which help agents break down and address tasks specified by users in an appropriate way. What's more, LLMs show significant potential in intent understanding and other aspects [25; 201; 202; 203]. When agents are combined with tools, the threshold for tool utilization can be lowered, thereby fully unleashing the creative potential of human users [94].
2309.07864#82
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
82
Video Captioning requires the model to give the caption based on the video. We extract eight frames per video as visual inputs for Video Captioning tasks. Our training dataset includes MSRVTT (Xu et al., 2016). Visual Reasoning requires the model to correctly perform image reasoning and answer questions. Our training dataset includes GQA (Hudson & Manning, 2019), VCR (Zellers et al., 2019), and NLVR2 (Suhr et al., 2018). Image Classification involves classifying an image based on a given set of candidate labels. Our training dataset includes MiniImage (Russakovsky et al., 2015). Visual Dialog requires the model to hold a meaningful dialog about visual content with humans in natural, conversational language. Our training dataset includes LLAVA-Instruct-150K (Liu et al., 2023b). Our testing dataset comes from 10 task categories and 18 datasets. Image Captioning includes the Nocaps (Agrawal et al., 2019) dataset. Knowledgeable Visual Question Answering (KVQA) includes the ScienceQA (Lu et al., 2022) and A-OKVQA (Schwenk et al., 2022) datasets. Image Question Answering (IQA) includes the VizWiz (Bigham et al., 2010) dataset.
2309.07915#82
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
83
Understanding tools. A prerequisite for an agent to use tools effectively is a comprehensive understanding of the tools' application scenarios and invocation methods. Without this understanding, the process of the agent using tools will become untrustworthy and fail to genuinely enhance the agent's capabilities. Leveraging the powerful zero-shot and few-shot learning abilities of LLMs [40; 41], agents can acquire knowledge about tools by utilizing zero-shot prompts that describe tool functionalities and parameters, or few-shot prompts that provide demonstrations of specific tool usage scenarios and corresponding methods [92; 326]. These learning approaches parallel human methods of learning by consulting tool manuals or observing others using tools [94]. A single tool is often insufficient when facing complex tasks. Therefore, agents should first decompose the complex task into subtasks in an appropriate manner, and their understanding of tools plays a significant role in task decomposition.

Learning to use tools. The methods for agents to learn to utilize tools primarily consist of learning from demonstrations and learning from feedback. This involves mimicking the behavior of human experts [346; 347; 348], as well as understanding the consequences of their actions and making adjustments based on feedback received from both the environment and humans [24; 349; 350]. Environmental feedback encompasses result feedback on whether actions have successfully completed the task and intermediate feedback that captures changes in the environmental state caused by actions; human feedback comprises explicit evaluations and implicit behaviors, such as clicking on links [94].
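A zero-shot or few-shot tool prompt of the kind described above can be sketched in a few lines; the tool schema, wording, and function name below are illustrative assumptions rather than the format of any cited system.

```python
from typing import Dict, List, Sequence


def tool_prompt(tools: List[Dict[str, str]], task: str,
                demos: Sequence[str] = ()) -> str:
    """Compose a prompt that tells an LLM which tools exist and how to call them.

    Each tool entry holds "name", "signature", and "description" (the zero-shot
    part); `demos` optionally adds worked examples of calls (the few-shot part).
    """
    lines = ["You can call the following tools:"]
    for tool in tools:
        lines.append(f"- {tool['name']}{tool['signature']}: {tool['description']}")
    for demo in demos:
        lines.append(f"Example call: {demo}")
    lines.append(f"Task: {task}")
    lines.append("Respond with the single tool call to execute next.")
    return "\n".join(lines)


prompt = tool_prompt(
    tools=[{"name": "search", "signature": "(query: str)",
            "description": "look up information on the web"}],
    task="Find today's weather in Paris.",
    demos=['search("population of Berlin")'],
)
print(prompt)
```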
2309.07864#83
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
83
Image Question Answering (IQA) includes the VizWiz (Bigham et al., 2010) dataset. Visual Reasoning includes the Winoground (Thrush et al., 2022b), VSR (Liu et al., 2022), and IconQA (Lu et al., 2021) datasets. Winoground proposes a task of matching two given images and two captions correctly. The challenge of this task is that both captions contain a completely identical set of words, only in a different order. VSR describes the spatial relation of two individual objects in the image, and a VLM needs to judge whether the caption correctly describes the image (True) or not (False). The IconQA dataset has two sub-datasets: image question answering with multiple text choices and image question answering with multiple image choices. Web Page Question Answering (Web QA) includes the Websrc (Chen et al., 2021a; Huang et al., 2023a) dataset. The model must answer questions based on the web image and the optionally extracted texts. We sampled 2,000 instances from Websrc for the evaluation. To align with KOSMOS-1 (Huang et al., 2023a), we only use the web image as input.
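Winoground's pairing protocol can be summarised with a small helper. Here `score` stands for whatever image-text matching score a VLM exposes (higher means a better match); the function checks the group-style criterion in which every correct pair must outscore both mismatched alternatives, and the names are illustrative.

```python
from typing import Callable, Tuple

Image, Caption = str, str  # placeholders; real inputs would be image tensors and strings


def group_correct(score: Callable[[Image, Caption], float],
                  pair0: Tuple[Image, Caption],
                  pair1: Tuple[Image, Caption]) -> bool:
    """Return True if each matched (image, caption) pair beats both mismatches.

    Winoground also reports text-only and image-only variants of this comparison.
    """
    (i0, c0), (i1, c1) = pair0, pair1
    return (score(i0, c0) > score(i0, c1) and score(i1, c1) > score(i1, c0)
            and score(i0, c0) > score(i1, c0) and score(i1, c1) > score(i0, c1))
```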
2309.07915#83
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
84
If an agent rigidly applies tools without adaptability, it cannot achieve acceptable performance in all scenarios. Agents need to generalize their tool usage skills learned in specific contexts to more general situations, such as transferring a model trained on Yahoo search to Google search. To accomplish this, it's necessary for agents to grasp the common principles or patterns in tool usage strategies, which can potentially be achieved through meta-tool learning [327]. Enhancing the agent's understanding of relationships between simple and complex tools, such as how complex tools are built on simpler ones, can contribute to the agents' capacity to generalize tool usage. This allows agents to effectively discern nuances across various application scenarios and transfer previously learned knowledge to new tools [94]. Curriculum learning [351], which allows an agent to start from simple tools and progressively learn complex ones, aligns with these requirements. Moreover, benefiting from their understanding of user intent and their reasoning and planning abilities, agents can better design methods of tool utilization and collaboration and then provide higher-quality outcomes.
2309.07864#84
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
84
Video Question Answering (VideoQA) includes the iVQA (Yang et al., 2021), MSVD (Chen & Dolan, 2011), and NextQA (Xiao et al., 2021) datasets. The NextQA dataset has two sub-datasets: multiple-choice video question answering and open-domain video question answering. Figure 7: Illustration of the MMICL structure.
2309.07915#84
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
85
Making tools for self-sufficiency. Existing tools are often designed for human convenience, which might not be optimal for agents. To help agents use tools better, there is a need for tools specifically designed for agents. These tools should be more modular and have input-output formats better suited to agents. Given instructions and demonstrations, LLM-based agents can also create tools by generating executable programs or by integrating existing tools into more powerful ones [94; 330; 352], and they can learn to perform self-debugging [331]. Moreover, if an agent serving as a tool maker successfully creates a tool, it can produce packages containing the tool's code and demonstrations for other agents in a multi-agent system, in addition to using the tool itself [329]. Speculatively, agents might in the future become self-sufficient and exhibit a high degree of autonomy with respect to tools.
2309.07864#85
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
85
Few-shot Image Classification includes the HatefulMemes (Kiela et al., 2020) and Bongard-HOI (Jiang et al., 2022) datasets. HatefulMemes requires the model to determine whether a meme is hateful based on the image and the explanation provided. Bongard-HOI is a benchmark for evaluating the model's few-shot visual reasoning about human-object interactions. It provides few-shot examples with challenging negatives, where positive and negative images differ only in their action labels. The model is then asked whether the final image is positive or negative. We sampled 2000 instances from Bongard-HOI for the evaluation. Nonverbal Reasoning includes the Raven IQ test (Huang et al., 2023a). Each instance in the Raven IQ test has 3 or 8 images as inputs and six candidate images with a unique correct completion, and the goal is to predict the next image from the candidates. Visual Dialog includes the Visual Dialog dataset (Das et al., 2017). We use the question of the final dialogue turn as the question for each instance and take all preceding dialogue turns as the context to perform open-domain image question answering.
2309.07915#85
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
86
Tools can expand the action space of LLM-based agents. With the help of tools, agents can utilize various external resources such as web applications and other LMs during the reasoning and planning phase [92]. This process can provide information with high expertise, reliability, diversity, and quality for LLM-based agents, facilitating their decision-making and action. For example, search-based tools can improve the scope and quality of the knowledge accessible to the agents with the aid of external databases, knowledge graphs, and web pages, while domain-specific tools can enhance an agent’s expertise in the corresponding field [211; 353]. Some researchers have already developed LLM-based controllers that generate SQL statements to query databases, or to convert user queries into search requests and use search engines to obtain the desired results [90; 175]. What’s more, LLM-based agents can use scientific tools to execute tasks like organic synthesis in chemistry, or interface with Python interpreters to enhance their performance on intricate mathematical computation tasks [354; 355]. For multi-agent systems, communication tools (e.g., emails) may serve as a means for agents to interact with each other under strict security constraints, facilitating their collaboration, and showing autonomy and flexibility [94].
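To make the text-to-SQL / text-to-search pattern described above concrete, the sketch below shows one way an LLM-based controller could route a user query to either a database or a search engine; the prompt wording, the `llm` callable, the `run_sql` / `run_search` helpers, and the example `sales` table are assumptions made for illustration, not APIs from the cited works.

```python
# Minimal sketch of an LLM-based controller that expands the agent's action
# space with two external tools: a SQL database and a web search engine.

def route_query(user_query: str, llm, run_sql, run_search):
    """Ask the LLM which tool fits, then ask it to produce the tool input."""
    tool = llm(
        "Decide which tool answers the question best. "
        "Reply with exactly 'sql' or 'search'.\n"
        f"Question: {user_query}"
    ).strip().lower()

    if tool == "sql":
        statement = llm(
            "Write a single SQL query over the table sales(region, amount, year) "
            f"that answers: {user_query}"
        )
        return run_sql(statement)      # executed against the database
    request = llm(f"Rewrite as a short web search query: {user_query}")
    return run_search(request)         # sent to the search engine

# Example usage with stub implementations standing in for real backends.
fake_llm = lambda prompt: ("sql" if "Decide" in prompt
                           else "SELECT SUM(amount) FROM sales WHERE year = 2023;")
print(route_query("Total 2023 sales?", fake_llm,
                  run_sql=lambda q: q, run_search=lambda q: q))
```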
2309.07864#86
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07864
87
Although the tools mentioned above enhance the capabilities of agents, the medium of interaction with the environment remains text-based. However, tools are designed to expand the functionality of language models, and their outputs are not limited to text. Tools with non-textual output can diversify the modalities of agent actions, thereby expanding the application scenarios of LLM-based agents. For example, image processing and generation can be accomplished by an agent that draws on a visual model [328]. In aerospace engineering, agents are being explored for modeling physics and solving complex differential equations [356]; in robotics, agents are required to plan physical operations and control robot execution [179]; and so on. Agents that can dynamically interact with the environment or the world through tools, or in a multimodal manner, can be referred to as digitally embodied [94]. The embodiment of agents has been a central focus of embodied learning research. We discuss agents' embodied actions in depth in §3.3.3. # 3.3.3 Embodied Action
2309.07864#87
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
87
More detailed task descriptions and statistics about the datasets are shown in Table 8. # D MODEL STRUCTURE As shown in Fig. 7, MMICL treats the image and language representations equally and combines them into interleaved image-text representations, similar to the original input. Each given image is encoded by a vision encoder (e.g., ViT (Radford et al., 2021; Fang et al., 2023)) to obtain the vision representation of the image. Then, we use the Q-former as the VPG to extract the visual embedding. We utilize a fully connected layer as the projection layer to convert each visual embedding to the same dimension as the text embeddings of the LLM. This alignment helps the LLM understand the images. Our approach treats the visual and text embeddings equally, enabling a flexible combination of visual and textual content. Finally, we combine the visual embeddings of multiple images with text embeddings in an interleaved style and then feed them into the LLM. We set the weights for mapping query and value vectors in the attention layers of the LLM as learnable to better adapt to the multi-modal context with multiple images. During pre-training, we freeze the image encoder, Q-former, and the backbone LLM while jointly training the language projection and the query and value vectors of the LLM.
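To make the dataflow concrete, here is a minimal PyTorch-style sketch of the interleaving described above; the embedding dimensions, module names, and the exact frozen/trainable split are simplified assumptions based on this description, not the released MMICL code.

```python
import torch
import torch.nn as nn

class InterleavedVLMSketch(nn.Module):
    """Sketch: project visual embeddings to the LLM's hidden size and
    interleave them with text embeddings, as described for MMICL."""

    def __init__(self, vision_dim=768, llm_dim=2048):  # dims are illustrative
        super().__init__()
        self.projection = nn.Linear(vision_dim, llm_dim)  # trainable

    def forward(self, segments, text_embeds, visual_embeds):
        # segments: list like ["text", "image", "text"] giving the interleaving
        # order; text_embeds / visual_embeds: lists of (seq_len_i, dim) tensors
        # from the (frozen) LLM embedding layer and the (frozen) vision
        # encoder + Q-Former, respectively.
        text_iter, vis_iter = iter(text_embeds), iter(visual_embeds)
        pieces = []
        for kind in segments:
            if kind == "image":
                pieces.append(self.projection(next(vis_iter)))
            else:
                pieces.append(next(text_iter))
        return torch.cat(pieces, dim=0)  # one interleaved sequence for the LLM

# Only the projection (and, per the text, the LLM's query/value mappings)
# would be updated; the vision encoder, Q-Former, and the rest of the LLM
# stay frozen.
model = InterleavedVLMSketch()
out = model(["text", "image", "text"],
            [torch.randn(5, 2048), torch.randn(3, 2048)],
            [torch.randn(32, 768)])
print(out.shape)  # torch.Size([40, 2048])
```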
2309.07915#87
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
88
# 3.3.3 Embodied Action In the pursuit of Artificial General Intelligence (AGI), the embodied agent is considered a pivotal paradigm as it strives to integrate model intelligence with the physical world. The Embodiment hypothesis [357] draws inspiration from the development of human intelligence, positing that an agent's intelligence arises from continuous interaction and feedback with the environment rather than relying solely on well-curated textbooks. Similarly, unlike traditional deep learning models that learn explicit capabilities from internet datasets to solve domain problems, people anticipate that LLM-based agents' behaviors will no longer be limited to pure text output or calling exact tools to perform particular domain tasks [358]. Instead, they should be capable of actively perceiving, comprehending, and interacting with physical environments, making decisions, and generating specific behaviors to modify the environment based on the LLM's extensive internal knowledge. We collectively term these embodied actions, which enable agents to interact with and comprehend the world in a manner closely resembling human behavior.
2309.07864#88
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07864
89
The potential of LLM-based agents for embodied actions. Before the widespread rise of LLMs, researchers tended to use methods like reinforcement learning to explore the embodied actions of agents. Despite the extensive success of RL-based embodiment [359; 360; 361], it has certain limitations. In brief, RL algorithms face limitations in data efficiency, generalization, and complex problem reasoning due to the difficulty of modeling the dynamic and often ambiguous real environment, or due to their heavy reliance on precise reward-signal representations [362]. Recent studies have indicated that leveraging the rich internal knowledge acquired during the pre-training of LLMs can effectively alleviate these issues [120; 187; 258; 363]. • Cost efficiency. Some on-policy algorithms struggle with sample efficiency as they require fresh data for policy updates, while gathering enough embodied data for high-performance training is costly and noisy. This constraint also appears in some end-to-end models [364; 365; 366]. By leveraging the intrinsic knowledge of LLMs, agents like PaLM-E [120] jointly train on robotic data and general visual-language data to achieve significant transfer ability in embodied tasks, while also showing that geometric input representations can improve training-data efficiency.
2309.07864#89
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
89
(1) Carefully analyze image 0: [IMG0] {image} to generate a concise and accurate description that accurately represents the objects, people, and scenery present. (2) Use clear and concise language that accurately describes the content of image 0: [IMG0] {image}. (3) Your caption should provide sufficient information about image 0: [IMG0] {image} so that someone who has not seen the image can understand it. (4) image 0 is [IMG0] {image}. Be specific and detailed in your description of image 0, but also try to capture the essence of image 0 in a succinct way. (5) image 0 is [IMG0] {image}. Based on the image 0, describe what is contained in this photo. Your caption should be no more than a few sentences and should be grammatically correct and free of spelling errors. (6) Include information in your caption that is specific to image 0: [IMG0] {image} and avoid using generic or ambiguous descriptions. (7) image 0 is [IMG0] {image}. Based on the image 0, give a caption about this image. Think about what message or story image 0 is conveying, and try to capture that in your image caption.
2309.07915#89
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
90
• Embodied action generalization. As discussed in §3.1.5, an agent's competence should extend beyond specific tasks. When faced with intricate, uncharted real-world environments, it is imperative that the agent exhibits dynamic learning and generalization capabilities. However, the majority of RL algorithms are designed to train and evaluate skills for specific tasks [101; 367; 368; 369]. In contrast, fine-tuned on data of diverse forms and rich task types, LLMs have showcased remarkable cross-task generalization capabilities [370; 371]. For instance, PaLM-E exhibits surprising zero-shot or one-shot generalization to new objects or novel combinations of existing objects [120]. Further, language proficiency is a distinctive advantage of LLM-based agents, serving both as a means to interact with the environment and as a medium for transferring foundational skills to new tasks [372]. SayCan [179] uses LLMs to decompose task instructions presented in prompts into corresponding skill commands, but in partially observable environments, a limited set of prior skills often does not achieve satisfactory performance [101]. To address this, Voyager [190] introduces a skill-library component to continuously collect novel self-verified skills, enabling the agent's lifelong learning.
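The skill-library idea attributed to Voyager above can be illustrated with a very small sketch: self-verified skills are stored with a description and retrieved for new tasks. The keyword-overlap retrieval and the data fields here are simplifications for illustration; Voyager itself uses embedding-based similarity retrieval.

```python
# Toy skill library: store verified skills and retrieve the ones whose
# descriptions best match a new task (keyword overlap stands in for the
# embedding similarity a real system would use).

class SkillLibrary:
    def __init__(self):
        self.skills = []  # each: {"name", "description", "code"}

    def add(self, name, description, code, verified=True):
        if verified:  # only self-verified skills are kept
            self.skills.append({"name": name, "description": description, "code": code})

    def retrieve(self, task, k=2):
        task_words = set(task.lower().split())
        scored = sorted(
            self.skills,
            key=lambda s: len(task_words & set(s["description"].lower().split())),
            reverse=True,
        )
        return scored[:k]

library = SkillLibrary()
library.add("mine_wood", "chop a tree to collect wood", "def mine_wood(bot): ...")
library.add("craft_table", "craft a table from collected wood", "def craft_table(bot): ...")
print([s["name"] for s in library.retrieve("collect wood near the base")])
```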
2309.07864#90
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
90
Based on the image 0, give a caption about this image. Think about what message or story image 0 is conveying, and try to capture that in your image caption. (8) Based on the image 0, give a caption about this image. Your caption should provide enough detail about image 0: [IMG0] {image} to give the viewer a sense of what is happening in the image. (9) Give a caption about this image. Avoid using overly complex language or jargon in your caption of image 0: [IMG0] {image} that might confuse the viewer. (10) Be creative in your approach to captioning image 0: [IMG0] {image} and try to convey a unique perspective or story.
2309.07915#90
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
91
• Embodied action planning. Planning constitutes a pivotal strategy employed by both humans and LLM-based agents in response to complex problems. Before LLMs exhibited remarkable reasoning abilities, researchers introduced Hierarchical Reinforcement Learning (HRL) methods, in which the high-level policy constrains sub-goals for the low-level policy and the low-level policy produces appropriate action signals [373; 374; 375]. Similar to the role of high-level policies, LLMs with emergent reasoning abilities [26] can be seamlessly applied to complex tasks in a zero-shot or few-shot manner [95; 97; 98; 99]. In addition, external feedback from the environment can further enhance LLM-based agents' planning performance. Based on the current environmental feedback, some works [101; 91; 100; 376] dynamically generate, maintain, and adjust high-level action plans in order to minimize dependence on prior knowledge in partially observable environments, thereby grounding the plan. Feedback can also come from models or humans, which are usually referred to as critics, assessing task completion based on the current state and task prompts [25; 190]. Embodied actions for LLM-based agents. Depending on the agent's level of autonomy in a task or the complexity of the actions, there are several fundamental LLM-based embodied actions, primarily including observation, manipulation, and navigation.
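The plan-act-revise loop sketched above (grounding a high-level plan with environmental or critic feedback) can be compressed into a few lines; the `llm`, `execute`, and `critic` callables are placeholders for whatever planner, environment, and evaluator a concrete system would plug in.

```python
# Minimal closed-loop planning sketch: propose a plan, execute one step,
# and replan whenever the critic reports that progress is off track.

def plan_act_loop(goal, llm, execute, critic, max_steps=10):
    plan = llm(f"List the sub-goals needed to achieve: {goal}").splitlines()
    history = []
    for _ in range(max_steps):
        if not plan:
            break
        step = plan.pop(0)
        observation = execute(step)        # environmental feedback
        history.append((step, observation))
        if not critic(goal, history):      # critic: is the plan still on track?
            plan = llm(
                f"Goal: {goal}\nProgress so far: {history}\n"
                "Revise the remaining sub-goals, one per line."
            ).splitlines()
    return history

# Stub usage: a real system would wire in an LLM, an environment, and a critic.
trace = plan_act_loop("make tea",
                      llm=lambda p: "boil water\nsteep tea",
                      execute=lambda s: "ok",
                      critic=lambda g, h: True)
print(trace)
```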
2309.07864#91
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07864
92
• Observation. Observation constitutes the primary way by which the agent acquires environmental information and updates its state, playing a crucial role in enhancing the efficiency of subsequent embodied actions. As mentioned in §3.2, observation by embodied agents primarily occurs in environments with various inputs, which are ultimately converged into a multimodal signal. A common approach uses a pre-trained Vision Transformer (ViT) as the alignment module between text and visual information, with special tokens marking the positions of multimodal data [120; 332; 121]. Soundspaces [377] proposes identifying physical spatial geometric elements guided by reverberant audio input, enhancing the agent's observations with a more comprehensive perspective [375]. More recently, further research has taken audio as a modality for embodied observation. Apart from the widely employed cascading paradigm [293; 378; 316], audio-information encoding similar to ViT further enhances the seamless integration of audio with other input modalities [294]. The agent's observation of the environment can also be derived from real-time linguistic instructions from humans, and human feedback helps the agent acquire detailed information that may not be readily obtained or parsed [333; 190].
2309.07864#92
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
92
Templates of Image Classification (MiniImagenet, etc) (1) image 0 is [IMG0] {image}. Please identify the object or concept depicted in image 0. (2) image 0 is [IMG0] {image}. What is the main subject of image 0? (3) image 0 is [IMG0] {image}. Can you recognize and label the object shown in image 0? (4) image 0 is [IMG0] {image}. Identify the category or class to which image 0 belongs. (5) image 0 is [IMG0] {image}. Based on the visual content, determine what image 0 represents. (6) image 0 is [IMG0] {image}. What is the name or label of the item captured in image 0? (7) image 0 is [IMG0] {image}. Please provide a description or identification of the subject in image 0. (8) image 0 is [IMG0] {image}. From the visual cues, determine the object or entity depicted in image 0. (9) image 0 is [IMG0] {image}. Can you recognize and name the primary element shown in image 0? (10) image 0 is [IMG0] {image}. Identify the object or concept that best describes what is depicted in image 0.
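The sketch below shows how templates like the ones above could be instantiated when building instruction-tuning data: the `[IMGj]` proxy token and the `{image}` placeholder are filled per image, loosely following the scheme described in the paper; the exact token strings, file paths, and data layout are assumptions for illustration.

```python
import random

# Hypothetical instantiation of an image-classification template: the
# {image} slot is later replaced by the visual embedding, while [IMGj]
# serves as the textual proxy token referring to the j-th image.

TEMPLATES = [
    "image {j} is [IMG{j}] {{image}}. Please identify the object or concept depicted in image {j}.",
    "image {j} is [IMG{j}] {{image}}. What is the main subject of image {j}?",
]

def build_instance(image_path, label, j=0):
    template = random.choice(TEMPLATES)
    prompt = template.format(j=j)  # leaves the literal {image} slot in place
    return {"prompt": prompt, "images": {f"[IMG{j}]": image_path}, "target": label}

print(build_instance("examples/dog.jpg", "golden retriever"))
```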
2309.07915#92
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
93
• Manipulation. In general, manipulation tasks for embodied agents include object rearrangement, tabletop manipulation, and mobile manipulation [23; 120]. A typical case entails the agent executing a sequence of tasks in the kitchen, including retrieving items from drawers and handing them to the user, as well as cleaning the tabletop [179]. Besides precise observation, this involves combining a series of subgoals by leveraging the LLM. Consequently, maintaining synchronization between the agent's state and the subgoals is important. DEPS [183] utilizes an LLM-based interactive planning approach to maintain this consistency and to help correct errors based on the agent's feedback throughout the multi-step, long-haul reasoning process. In contrast, AlphaBlock [334] focuses on more challenging manipulation tasks (e.g., making a smiley face out of building blocks), which require the agent to have a more grounded understanding of the instructions. Unlike the existing open-loop paradigm, AlphaBlock constructs a dataset comprising 35 complex high-level tasks, along with corresponding multi-step planning and observation pairs, and then fine-tunes a multimodal model to enhance its comprehension of high-level cognitive instructions.
2309.07864#93
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
93
Table 10: Instruction templates used for transforming datasets into instruction tuning data. {image} denotes the image embedding produced by the image encoder; it is concatenated with the language embedding as input. <imagej> denotes the image token used to exactly reference the j-th image in an instance, as described in Sec. 2.2.1. # E DATA BALANCE Previous studies have shown that the data balance of training data can significantly influence model performance (Dai et al., 2023). Mixing the training data of each dataset uniformly could cause the model to overfit smaller datasets and underfit larger datasets, resulting in poor performance. To alleviate this problem, we employ a sampling strategy that selects datasets with probabilities proportional to the square root of their number of training samples, following Dai et al. (2023). Formally, given D datasets with training-sample counts {N_1, N_2, ..., N_D}, the probability p_d of a data sample being selected from dataset d during training is $p_d = \frac{\sqrt{N_d}}{\sum_{i=1}^{D} \sqrt{N_i}}$ (4). (A minimal sampling sketch follows this record.) # F INSTRUCTION TEMPLATE FOR DATA CONSTRUCTION
2309.07915#93
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
94
• Navigation. Navigation permits agents to dynamically alter their positions within the environment, which often involves multi-angle and multi-object observations, as well as long-horizon manipulations based on current exploration [23]. Before navigation, it is essential for embodied agents to establish prior internal maps of the external environment, typically in the form of a topological map, semantic map, or occupancy map [358]. For example, LM-Nav [335] utilizes the VNM [379] to create an internal topological map. It further leverages the LLM and VLM for decomposing input commands and analyzing the environment to find the optimal path. Furthermore, some works [380; 381] highlight the importance of spatial representations that achieve precise localization of spatial targets, rather than conventional point- or object-centric navigation actions, by leveraging pre-trained VLMs to combine visual features from images with 3D reconstructions of the physical world [358]. Navigation is usually a long-horizon task, where the upcoming states of the agent are influenced by its past actions. A memory buffer and summary mechanism are needed to serve as a reference for historical information [336], which is also employed in Smallville and Voyager [22; 190;
2309.07864#94
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
94
$p_d = \frac{\sqrt{N_d}}{\sum_{i=1}^{D} \sqrt{N_i}}$ (4) # F INSTRUCTION TEMPLATE FOR DATA CONSTRUCTION As described in Sec. 2.2.3, the construction of MIC requires carefully designed templates. The instruction templates for each task are presented in this section. The templates for the MSCOCO, Flickr30k, Nocaps, and Diffusiondb tasks are presented in Table 9. The templates for the MiniImagenet task are presented in Table 10. The templates for the VQAv2, ST-VQA, WikiART, and RefCOCO tasks are presented in Table 11. The templates for the OKVQA task are presented in Table 13. The templates for the MSRVTT task are presented in Table 14. The templates for the MSRVTTQA and MSVD tasks are presented in Table 15. (A prompt-assembly sketch follows this record.) Templates of Image Question Answering (VQAv2, ST-VQA, WikiART, RefCOCO, etc.): VQAv2
2309.07915#94
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
95
A memory buffer and summary mechanism are needed to serve as a reference for historical information [336], mechanisms that are also employed in Smallville and Voyager [22; 190; 382; 383]. Additionally, as mentioned in §3.2, some works have proposed that audio input is also of great significance, but integrating audio information presents challenges in associating it with the visual environment. A basic framework includes a dynamic path planner that uses visual and auditory observations along with spatial memories to plan a series of actions for navigation [375; 384].
2309.07864#95
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
95
(1) image 0 is [IMG0] {image}. For the question, carefully examine the image and use your knowledge to determine the correct answer. Question: question Answer: (2) image 0 is [IMG0] {image}. Given the picture [IMG0], pay attention to the wording of question and answer the following question: question Answer: (3) Read the question carefully and look at image 0 labeled [IMG0] {image}. Use your intuition and common sense when answering the question: question (4) Answer each question based on the information presented in image 0: [IMG0] {image}. Given the picture [IMG0], what is the answer to the question: question Answer: (5) Please refer to image 0: [IMG0] {image} when answering the following questions: question Answer: (6) Questions is related to image 0: [IMG0] {image}. Please analyze the image and provide the correct answer for the question: question (7) Read the question carefully and look at image 0 labeled [IMG0] {image}. Use your intuition and common sense when answering the question: question (8) Consider all of the information in image 0 labeled [IMG0] {image} when answering the question: question
2309.07915#95
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
96
By integrating these capabilities, the agent can accomplish more complex tasks such as embodied question answering, whose primary objective is to autonomously explore the environment and respond to pre-defined multimodal questions, e.g., "Is the watermelon in the kitchen larger than the pot? Which one is harder?" To address these questions, the agent needs to navigate to the kitchen, observe the sizes of both objects, and then answer through comparison [358]. In terms of control strategies, as previously mentioned, LLM-based agents trained on particular embodied datasets typically generate high-level policy commands to control low-level policies for achieving specific sub-goals. The low-level policy can be a robotic transformer [120; 385; 386], which takes images and instructions as inputs and produces control commands for the end effector as well as robotic arms in particular embodied tasks [179] (see the control-loop sketch after this record). Recently, in virtual embodied environments, high-level strategies have been used to control agents in gaming [172; 183; 190; 337] or simulated worlds [22; 108; 109]. For instance, Voyager [190] calls the Mineflayer [387] API interface to continuously acquire various skills and explore the world.
2309.07864#96
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
96
and common sense when answering the question: question (8) Consider all of the information in image 0 labeled [IMG0] {image} when answering the question: question (9) Take your time when answering each question. Don’t rush through the questions, and make sure you have carefully considered all of the information provided in image 0 labeled [IMG0] {image} and the question before making your selection. Question: question Answer: (10) Use the image 0: [IMG0] {image} as a visual aid to help you understand the context and answer the questions accurately. Question: question Answer:
2309.07915#96
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
98
interest in investigating agents’ embodied actions within simulated environments like Minecraft [183; 338; 337; 190; 339]. By utilizing the Mineflayer [387] API, these investigations enable cost-effective examination of a wide range of embodied agents’ operations, including exploration, planning, self-improvement, and even lifelong learning [190]. Despite notable progress, achieving optimal embodied actions remains a challenge due to the significant disparity between simulated platforms and the physical world. To enable the effective deployment of embodied agents in real-world scenarios, an increasing demand exists for embodied task paradigms and evaluation criteria that closely mirror real-world conditions [358]. On the other hand, learning to ground language for agents is also an obstacle. For example, expressions like “jump down like a cat” primarily convey a sense of lightness and tranquility, but this linguistic metaphor requires adequate world knowledge [30]. [340] endeavors to amalgamate text distillation with Hindsight Experience Replay (HER) to construct a dataset as the supervised signal for the training process. Nevertheless, additional investigation into grounding embodied datasets remains necessary as embodied action plays an increasingly pivotal role across various domains in human life. # 4 Agents in Practice: Harnessing AI for Good
2309.07864#98
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
98
(1) Answer each question based on the information presented in image 0: [IMG0] {image}. Given the picture [IMG0], what is the answer to the question: question Answer: (2) Please refer to image 0: [IMG0] {image} when answering the following questions: question Answer: (3) Questions is related to image 0: [IMG0] {image}. Please analyze the image and provide the correct answer for the question: question (4) For each question, use the image 0: [IMG0] {image} as a reference to answer the question: question (5) Make sure your answers are based on the information presented in the image 0: [IMG0] {image}, and any OCR text associated with it. Question:question Answer: (6) Answer the question as accurately as possible using the information provided in the image 0: [IMG0] {image}, and any OCR text associated with it. Question:question Answer: (7) Please ensure that you are answering the question based on the information presented in the image 0: [IMG0] {image}.Question:question Answer: (8) The image 0: [IMG0] {image} is the primary source of information for answering the questions. Please refer to it carefully when
2309.07915#98
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
99
Task-oriented Deployment §4.1.1 Web scenarios WebAgent [388], Mind2Web [389], WebGum [390], WebArena [391], Webshop [392], WebGPT [90], Kim et al. [393], Zheng et al. [394], etc. Life scenarios InterAct [395], PET [182], Huang et al. [258], Gramopadhye et al. [396], Raman et al. [256], etc. Single Agent Deployment §4.1 Innovation-oriented Deployment §4.1.2 Li et al. [397], Feldt et al. [398], ChatMOF [399], ChemCrow [354], Boiko et al. [110], SCIENCEWORLD [400], etc. Lifecycle-oriented Deployment §4.1.3 Voyager [190], GITM [172], DEPS [183], Plan4MC [401], Nottingham et al. [339], etc. Disordered cooperation ChatLLM [402], RoCo [403], Blind Judgement [404], etc. Multi-Agents Interaction §4.2 Cooperative Interaction §4.2.1 Ordered cooperation MetaGPT [405], ChatDev [109], CAMEL [108], AutoGen [406],
2309.07864#99
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
99
Answer: (8) The image 0: [IMG0] {image} is the primary source of information for answering the questions. Please refer to it carefully when answering question: question Answer: (9) Pay close attention to the details in image 0: [IMG0] {image}, as they may provide important information for answering the questions. Question:question Answer: (10) Use the image 0: [IMG0] {image} as a visual aid to help you understand the context and answer the questions accurately. Ques- tion:question Answer:
2309.07915#99
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
100
§4.2 Cooperative Interaction §4.2.1 Ordered cooperation MetaGPT [405], ChatDev [109], CAMEL [108], AutoGen [406], SwiftSage [185], ProAgent [407], DERA [408], Talebirad et al. [409], AgentVerse [410], CGMI [411], Liu et al. [27], etc. Adversarial Interaction §4.2.2 ChatEval [171], Xiong et al. [412], Du et al. [111], Fu et al. [129], Liang et al. [112], etc. Education Dona [413], Math Agents [414], etc. Instructor-Executor Paradigm §4.3.1 Health Hsu et al. [415], HuatuoGPT [416], Zhongjing [417], LISSA [418], etc. Human-Agent Interaction §4.3 Other Applications Gao et al. [419], PEER [420], DIALGEN [421], AssistGPT [422], etc. Equal Partnership Paradigm §4.3.2 Empathetic Communicator Human-Level Participant SAPIEN [423], Hsu et al. [415], Liu et al. [424], etc. Bakhtin et al. [425], FAIR
2309.07864#100
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
101
(1) image 0 is [IMG0] {image}. Please provide information about the artist, genre, and style of this artwork. (2) image 0 is [IMG0] {image}. I would like to know the artist’s name, the genre, and the specific style depicted in this painting. (3) image 0 is [IMG0] {image}. Could you identify the artistic genre, the artist, and the style portrayed in this artwork? (4) image 0 is [IMG0] {image}. In this painting, which genre does it belong to, who is the artist, and what is the predominant style? (5) image 0 is [IMG0] {image}. Tell me about the artist, genre, and style associated with this particular artwork. (6) image 0 is [IMG0] {image}. This piece of art seems intriguing. Can you provide details about the genre, the artist, and the style it represents? (7) image 0 is [IMG0] {image}. Identify the genre, artist, and style of this captivating artwork, please. (8) image 0 is [IMG0] {image}. I’m curious to learn about the artist’s name, the genre, and the distinctive style showcased in
2309.07915#101
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
102
# Agents in Practice: Harnessing AI for Good Figure 6: Typology of applications of LLM-based agents. The LLM-based agent, as an emerging direction, has gained increasing attention from researchers. Many applications in specific domains and tasks have already been developed, showcasing the powerful and versatile capabilities of agents. We can state with great confidence that the possibility of having a personal agent capable of assisting users with typical daily tasks is larger than ever before [398]. The design objective of an LLM-based agent should always be beneficial to humans, i.e., humans can harness AI for good. Specifically, we expect the agent to achieve the following objectives: Figure 7: Scenarios of LLM-based agent applications. We mainly introduce three scenarios: single-agent deployment, multi-agent interaction, and human-agent interaction. A single agent possesses diverse capabilities and can demonstrate outstanding task-solving performance in various application orientations. When multiple agents interact, they can achieve advancement through cooperative or adversarial interactions. Furthermore, in human-agent interactions, human feedback can enable agents to perform tasks more efficiently and safely, while agents can also provide better service to humans. 1. Assist users in breaking free from daily tasks and repetitive labor, thereby alleviating human work pressure and enhancing task-solving efficiency.
2309.07864#102
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
102
image 0 is [IMG0] {image}. I’m curious to learn about the artist’s name, the genre, and the distinctive style showcased in this artwork. (9) image 0 is [IMG0] {image}. Could you enlighten me about the genre, artist, and the artistic style that characterizes this beautiful piece? (10) image 0 is [IMG0] {image}. In terms of genre, artist, and style, what information can you provide regarding this fascinating artwork?
2309.07915#102
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
103
1. Assist users in breaking free from daily tasks and repetitive labor, thereby alleviating human work pressure and enhancing task-solving efficiency. 2. Remove the need for users to provide explicit low-level instructions. Instead, the agent can independently analyze, plan, and solve problems. 3. After freeing users’ hands, the agent also liberates their minds to engage in exploratory and innovative work, realizing their full potential in cutting-edge scientific fields. In this section, we provide an in-depth overview of current applications of LLM-based agents, aiming to offer a broad perspective on practical deployment scenarios (see Figure 7). First, we elucidate the diverse application scenarios of a Single Agent, including task-oriented, innovation-oriented, and lifecycle-oriented scenarios (§ 4.1). Then, we present the significant coordinating potential of Multiple Agents. Whether through cooperative interaction for complementarity or adversarial interaction for advancement, both approaches can lead to higher task efficiency and response quality (§ 4.2). Finally, we categorize the interactive collaboration between humans and agents into two paradigms and introduce the main forms and specific applications respectively (§ 4.3). The topological diagram for LLM-based agent applications is depicted in Figure 6. # 4.1 General Ability of Single Agent
2309.07864#103
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07864
104
# 4.1 General Ability of Single Agent Currently, application instances of LLM-based agents are developing vibrantly [429; 430; 431]. AutoGPT [114] is one of the popular ongoing open-source projects aiming to achieve a fully autonomous system. Apart from the basic functions of large language models like GPT-4, the AutoGPT framework also incorporates various practical external tools and long/short-term memory management. After users input their customized objectives, they can free their hands and wait for AutoGPT to automatically generate thoughts and perform specific tasks, all without requiring additional user prompts. As shown in Figure 8, we introduce the astonishingly diverse capabilities that the agent exhibits in scenarios where only one single agent is present. # 4.1.1 Task-oriented Deployment The LLM-based agents, which can understand human natural language commands and perform everyday tasks [391], are currently among the agents most favored by users and of the greatest practical value. This is because they have the potential to enhance task efficiency, alleviate user workload, and promote access for a broader user base. In task-oriented deployment, the agent follows high-level instructions from users, undertaking tasks such as goal decomposition [182; 258; 388; 394], sequence planning of sub-goals [182; 395], and interactive exploration of the environment [256; 391; 390; 392], until the final objective is achieved.
2309.07864#104
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
104
(1) image 0 is [IMG0] {image}. Given image 0, create a descriptive caption that accurately represents the content of the image, including the item located in the {quadrant} of the image. (2) Use your knowledge of the image 0 and the {quadrant} location to generate a detailed and accurate caption that captures the essence of the scene. Keep in mind that image 0 is [IMG0] {image}. (3) image 0 is [IMG0] {image}. When writing your caption, be sure to include specific details about the item located in the {quadrant} of the image 0, such as its size, shape, color, and position. (4) Think about the intended audience for your caption and use appropriate language and tone. Consider the context of the image: [IMG0] {image} and the {quadrant} location when creating your caption, and make sure that it accurately reflects the content of the image. (5) Your caption should be concise and to the point, while still capturing the essence of the image 0 and the item located in the {quadrant} of the image. Avoid including irrelevant information in your caption that detracts from the main content of the image. Remember that image 0 is [IMG0]
2309.07915#104
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07915
105
the {quadrant} of the image. Avoid including irrelevant information in your caption that detracts from the main content of the image. Remember that image 0 is [IMG0] {image}. (6) image 0 is [IMG0] {image}. Check your caption for accuracy and grammatical errors before submitting. Be creative in your approach to captioning the image and the item located in the {quadrant}. (7) image 0 is [IMG0] {image}. Given image 0, describe the item in the {quadrant} of the image. (8) image 0 is [IMG0] {image}. Using image 0, provide a caption for the object located in the {quadrant} of the image. (9) For image 0: [IMG0] {image}, describe the object in the {quadrant} of the image. (10) Given the image 0: [IMG0] {image}. Generate a description for the item located in the {quadrant} of the image. (11) image 0 is [IMG0] {image}. Using the provided image 0, describe the object located in the {quadrant} of the image.
2309.07915#105
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]
2309.07864
106
Figure 8: Practical applications of the single LLM-based agent in different scenarios. In task- oriented deployment, agents assist human users in solving daily tasks. They need to possess basic instruction comprehension and task decomposition abilities. In innovation-oriented deployment, agents demonstrate the potential for autonomous exploration in scientific domains. In lifecycle- oriented deployment, agents have the ability to continuously explore, learn, and utilize new skills to ensure long-term survival in an open world. and trial-and-error [182], they predict the next action. However, due to the limitation of foundation language models, agents often rely on reinforcement learning during actual execution [432; 433; 434]. With the gradual evolution of LLMs [301], agents equipped with stronger text understanding and generation abilities have demonstrated great potential to perform tasks through natural language. Due to their oversimplified nature, naive text-based scenarios have been inadequate as testing grounds for LLM-based agents [391]. More realistic and complex simulated test environments have been constructed to meet the demand. Based on task types, we divide these simulated environments into web scenarios and life scenarios, and introduce the specific roles that agents play in them.
2309.07864#106
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07864
107
In web scenarios. Performing specific tasks on behalf of users in a web scenario is known as the web navigation problem [390]. Agents interpret user instructions, break them down into multiple basic operations, and interact with computers. This often includes web tasks such as filling out forms, online shopping, and sending emails. Agents need to possess the ability to understand instructions within complex web scenarios, adapt to changes (such as noisy text and dynamic HTML web pages), and generalize successful operations [391]. In this way, agents can achieve accessibility and automation when dealing with unseen tasks in the future [435], ultimately freeing humans from repeated interactions with computer UIs. Agents trained through reinforcement learning can effectively mimic human behavior using predefined actions like typing, searching, navigating to the next page, etc. They perform well in basic tasks such as online shopping [392] and search engine retrieval [90], which have been widely explored. However, agents without LLM capabilities may struggle to adapt to the more realistic and complex scenarios in the real-world Internet. In dynamic, content-rich web pages such as online forums or online business management [391], agents often face challenges in performance.
2309.07864#107
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
http://arxiv.org/pdf/2309.07864
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
cs.AI, cs.CL
86 pages, 12 figures
null
cs.AI
20230914
20230919
[ { "id": "2305.08982" }, { "id": "1910.00125" }, { "id": "1511.06342" }, { "id": "2301.13688" }, { "id": "2011.00583" }, { "id": "1907.12108" }, { "id": "1701.07274" }, { "id": "2304.10592" }, { "id": "2112.00639" }, { "id": "2303.11366" }, { "id": "2302.02676" }, { "id": "1810.03548" }, { "id": "2304.06027" }, { "id": "1806.10729" }, { "id": "2212.10560" }, { "id": "2210.13431" } ]
2309.07915
107
(1) Look at image 0 labeled [IMG0] {image} carefully and read question: question. Try to understand what is being asked before selecting an answer. (2) image 0 is [IMG0] {image}. Consider all of the information in image 0 labeled [IMG0] when answering question. Look at objects, colors, shapes, and other details that may be relevant to question: question Answer: (3) image 0 is [IMG0] {image}. Read each answer choice carefully and answer question: question based on the information provided in image 0. (4) image 0 is [IMG0] {image}. Given the picture [IMG0], pay attention to the wording of question and answer the following question: question Answer: (5) Read the question carefully and look at image 0 labeled [IMG0] {image}. Use your intuition and common sense when answering the question: question (6) Consider all of the information in image 0 labeled [IMG0] {image} when answering the question: question (7) Take your time when answering each question. Don’t rush through the questions, and make sure you have carefully considered all of the information provided in image 0 labeled [IMG0] {image} and the question before making your
2309.07915#107
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context.
http://arxiv.org/pdf/2309.07915
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
cs.CL, cs.AI, cs.CV
Code, dataset, checkpoints, and demos are available at https://github.com/PKUnlp-icler/MIC
null
cs.CL
20230914
20231002
[ { "id": "2305.15023" }, { "id": "1505.00855" }, { "id": "2306.14565" }, { "id": "2101.09465" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11383" }, { "id": "2302.14794" }, { "id": "2209.06794" }, { "id": "2110.15943" }, { "id": "2305.04790" }, { "id": "2110.13214" }, { "id": "2210.11416" }, { "id": "2205.00363" }, { "id": "2302.14045" }, { "id": "2205.14100" }, { "id": "2305.10400" }, { "id": "2012.15723" }, { "id": "2103.10360" }, { "id": "2308.09936" }, { "id": "1811.00491" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2307.02469" }, { "id": "2308.04152" }, { "id": "2210.14896" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2306.00890" } ]