doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.07864 | 108 | In order to enable successful interactions between agents and more realistic web pages, some researchers [393; 394] have started to leverage the powerful HTML reading and understanding abilities of LLMs. By designing prompts, they attempt to make agents understand the entire HTML source code and predict more reasonable next action steps. Mind2Web [389] combines multiple LLMs fine-tuned for HTML, allowing them to summarize verbose HTML code [388] in real-world scenarios and extract valuable information. Furthermore, WebGum [390] empowers agents with visual perception abilities by employing a multimodal corpus containing HTML screenshots. It simultaneously fine-tunes the LLM and a visual encoder, deepening the agentâs comprehensive understanding of web pages.
In life scenarios. In many daily household tasks in life scenarios, itâs essential for agents to understand implicit instructions and apply common-sense knowledge [433]. For an LLM-based agent trained solely on massive amounts of text, tasks that humans take for granted might require multiple
26
trial-and-error attempts [432]. More realistic scenarios often lead to more obscure and subtle tasks. For example, the agent should proactively turn it on if itâs dark and thereâs a light in the room. To successfully chop some vegetables in the kitchen, the agent needs to anticipate the possible location of a knife [182]. | 2309.07864#108 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 108 | rush through the questions, and make sure you have carefully considered all of the information provided in image 0 labeled [IMG0] {image} and the question before making your selection. Question: question Answer: (8) Make sure your answers are based on the information presented in the image 0: [IMG0] {image}. Question:question Answer: (9) Carefully examine image 0 labeled [IMG0] {image} before answering the question. Question:question Answer: (10) Please refer to image 0: [IMG0] {image} when answering the following questions: question Answer: | 2309.07915#108 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 109 | Can an agent apply the world knowledge embedded in its training data to real interaction scenarios? Huang et al. [258] lead the way in exploring this question. They demonstrate that sufficiently large LLMs, with appropriate prompts, can effectively break down high-level tasks into suitable sub-tasks without additional training. However, this static reasoning and planning ability has its potential drawbacks. Actions generated by agents often lack awareness of the dynamic environment around them. For instance, when a user gives the task âclean the roomâ, the agent might convert it into unfeasible sub-tasks like âcall a cleaning serviceâ [396].
To provide agents with access to comprehensive scenario information during interactions, some approaches directly incorporate spatial data and item-location relationships as additional inputs to the model. This allows agents to gain a precise description of their surroundings [395; 396]. Wu et al. [182] introduce the PET framework, which mitigates irrelevant objects and containers in environmental information through an early error correction method [256]. PET encourages agents to explore the scenario and plan actions more efficiently, focusing on the current sub-task.
# 4.1.2 Innovation-oriented Deployment | 2309.07864#109 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 110 | # 4.1.2 Innovation-oriented Deployment
The LLM-based agent has demonstrated strong capabilities in performing tasks and enhancing the efficiency of repetitive work. However, in a more intellectually demanding field, like cutting-edge science, the potential of agents has not been fully realized yet. This limitation mainly arises from two challenges [399]: On one hand, the inherent complexity of science poses a significant barrier. Many domain-specific terms and multi-dimensional structures are difficult to represent using a single text. As a result, their complete attributes cannot be fully encapsulated. This greatly weakens the agentâs cognitive level. On the other hand, there is a severe lack of suitable training data in scientific domains, making it difficult for agents to comprehend the entire domain knowledge [400; 436]. If the ability for autonomous exploration could be discovered within the agent, it would undoubtedly bring about beneficial innovation in human technology. | 2309.07864#110 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 110 | (1) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Watch the images carefully and write a detailed description of what you see. (2) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. After viewing the images, provide a summary of the main events or key points depicted. (3) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image | 2309.07915#110 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 111 | Currently, numerous efforts in various specialized domains aim to overcome this challenge [437; 438; 439]. Experts from the computer field make full use of the agentâs powerful code comprehension and debugging abilities [398; 397]. In the fields of chemistry and materials, researchers equip agents with a large number of general or task-specific tools to better understand domain knowledge. Agents evolve into comprehensive scientific assistants, proficient in online research and document analysis to fill data gaps. They also employ robotic APIs for real-world interactions, enabling tasks like material synthesis and mechanism discovery [110; 354; 399].
The potential of LLM-based agents in scientific innovation is evident, yet we do not expect their exploratory abilities to be utilized in applications that could threaten or harm humans. Boiko et al. [110] study the hidden dangers of agents in synthesizing illegal drugs and chemical weapons, indicating that agents could be misled by malicious users in adversarial prompts. This serves as a warning for our future work.
# 4.1.3 Lifecycle-oriented Deployment | 2309.07864#111 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 111 | {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Pay close attention to the details in the images and provide accurate description to the images based on what you see. (4) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Utilize your comprehension skills to describe the context and events depicted in the images. (5) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Reflect on the imagesâs narrative structure and identify any storytelling techniques or | 2309.07915#111 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 112 | # 4.1.3 Lifecycle-oriented Deployment
Building a universally capable agent that can continuously explore, develop new skills, and maintain a long-term life cycle in an open, unknown world is a colossal challenge. This accomplishment is regarded as a pivotal milestone in the field of AGI [183]. Minecraft, as a typical and widely explored simulated survival environment, has become a unique playground for developing and testing the comprehensive ability of an agent. Players typically start by learning the basics, such as mining wood and making crafting tables, before moving on to more complex tasks like fighting against monsters and crafting diamond tools [190]. Minecraft fundamentally reflects the real world, making it conducive for researchers to investigate an agentâs potential to survive in the authentic world.
The survival algorithms of agents in Minecraft can generally be categorized into two types [190]: low-level control and high-level planning. Early efforts mainly focused on reinforcement learning [190; 440] and imitation learning [441], enabling agents to craft some low-level items. With the emergence of LLMs, which demonstrated surprising reasoning and analytical capabilities, agents
27
begin to utilize LLM as a high-level planner to guide simulated survival tasks [183; 339]. Some researchers use LLM to decompose high-level task instructions into a series of sub-goals [401], basic skill sequences [339], or fundamental keyboard/mouse operations [401], gradually assisting agents in exploring the open world. | 2309.07864#112 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 112 | {image}. image 7 is [IMG7] {image}. Reflect on the imagesâs narrative structure and identify any storytelling techniques or narrative devices used. Write a detailed description of what you see. (6) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Consider both the explicit and implicit information conveyed in the images to provide comprehensive description of the images. | 2309.07915#112 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 113 | Voyager[190], drawing inspiration from concepts similar to AutoGPT[114], became the first LLM- based embodied lifelong learning agent in Minecraft, based on the long-term goal of âdiscovering as many diverse things as possibleâ. It introduces a skill library for storing and retrieving complex action-executable code, along with an iterative prompt mechanism that incorporates environmental feedback and error correction. This enables the agent to autonomously explore and adapt to unknown environments without human intervention. An AI agent capable of autonomously learning and mastering the entire real-world techniques may not be as distant as once thought [401].
# 4.2 Coordinating Potential of Multiple Agents
Motivation and Background. Although LLM-based agents possess commendable text under- standing and generation capabilities, they operate as isolated entities in nature [409]. They lack the ability to collaborate with other agents and acquire knowledge from social interactions. This inherent limitation restricts their potential to learn from multi-turn feedback from others to enhance their performance [27]. Moreover, they cannot be effectively deployed in complex scenarios requiring collaboration and information sharing among multiple agents. | 2309.07864#113 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 114 | As early as 1986, Marvin Minsky made a forward-looking prediction. In his book The Society of Mind [442], he introduced a novel theory of intelligence, suggesting that intelligence emerges from the interactions of many smaller agents with specific functions. For instance, certain agents might be responsible for pattern recognition, while others might handle decision-making or generate solutions. This idea has been put into concrete practice with the rise of distributed artificial intelligence [443]. Multi-agent systems(MAS) [4], as one of the primary research domains, focus on how a group of agents can effectively coordinate and collaborate to solve problems. Some specialized communication languages, like KQML [444], were designed early on to support message transmission and knowledge sharing among agents. However, their message formats were relatively fixed, and the semantic expression capacity was limited. In the 21st century, integrating reinforcement learning algorithms (such as Q-learning) with deep learning has become a prominent technique for developing MAS that operate in complex environments [445]. Nowadays, the construction approach based on LLMs is beginning to demonstrate remarkable potential. The natural language communication between agents has become more elegant and easily comprehensible to humans, resulting in a significant leap in interaction efficiency. | 2309.07864#114 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 114 | (1) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Watch the provided images carefully and answer the following questions based on your understanding of the images content. Qusetion: {question}. Answer: (2) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Carefully analyze the visual elements of the images and answer the questions based on your observations. Qusetion: {question}. Answer: (3) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is | 2309.07915#114 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 115 | Potential advantages. Specifically, an LLM-based multi-agent system can offer several advantages. Just as Adam Smith clearly stated in The Wealth of Nations [446], âThe greatest improvements in the productive powers of labor, and most of the skill, dexterity, and judgment with which it is directed or applied, seem to be results of the division of labor.â Based on the principle of division of labor, a single agent equipped with specialized skills and domain knowledge can engage in specific tasks. On the one hand, agentsâ skills in handling specific tasks are increasingly refined through the division of labor. On the other hand, decomposing complex tasks into multiple subtasks can eliminate the time spent switching between different processes. In the end, efficient division of labor among multiple agents can accomplish a significantly greater workload than when there is no specialization, substantially improving the overall systemâs efficiency and output quality.
In § 4.1, we have provided a comprehensive introduction to the versatile abilities of LLM-based agents. Therefore, in this section, we focus on exploring the ways agents interact with each other in a multi-agent environment. Based on current research, these interactions can be broadly categorized as follows: Cooperative Interaction for Complementarity and Adversarial Interaction for Advancement (see Figure 9).
# 4.2.1 Cooperative Interaction for Complementarity | 2309.07864#115 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 115 | {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Pay close attention to the details in the images and provide accurate answers to the questions based on what you see. Qusetion: {question}. Answer: (4) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Utilize your comprehension skills to answer the questions based on the context and events depicted in the images. Qusetion: {question}. Answer: (5) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is | 2309.07915#115 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 116 | # 4.2.1 Cooperative Interaction for Complementarity
Cooperative multi-agent systems are the most widely deployed pattern in practical usage. Within such systems, individual agent assesses the needs and capabilities of other agents and actively seeks collaborative actions and information sharing with them [108]. This approach brings forth numerous potential benefits, including enhanced task efficiency, collective decision improvement, and the
28
M Designer @ janager . i + . âThe theme of our i ale oft: T think users need a To create/a product Bill ] pao product is ... i simplified interface. we should ... Ri E > nei . _â~ â_ Designer " i Good idea, but...technical_ | Engineer Tthink the Jp order to develop af le The architecture of i limitations might affect __\ yf» elt» first stepis.. product, it is the product is... i Designer performance. Ss L a TA Engineer + i De âTrue... while simplification | Firstly, we Iwill... af? =] © Rrogramming i does enhance user experience. should... E Yeah, but performance Enei E ngineer Tester i issues also impact overall e! alle eft olfe fe otfe ofteatfs eft ule +]: The product has the : satisfactionâ 1 willitry my) sel aie|t following issues: ... i best to balance both aspects. 5) | 2309.07864#116 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 116 | {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Consider the relationships between the images frames, scenes, and the provided questions to formulate accurate answers. Qusetion: {question}. Answer: (6) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Use your knowledge of the imagesâs content to answer the questions by recalling specific details and events. Qusetion: {question}. Answer: (7) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 | 2309.07915#116 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 117 | Figure 9: Interaction scenarios for multiple LLM-based agents. In cooperative interaction, agents collaborate in either a disordered or ordered manner to achieve shared objectives. In adversarial interaction, agents compete in a tit-for-tat fashion to enhance their respective performance.
resolution of complex real-world problems that one single agent cannot solve independently, ulti- mately achieving the goal of synergistic complementarity. In current LLM-based multi-agent systems, communication between agents predominantly employs natural language, which is considered the most natural and human-understandable form of interaction [108]. We introduce and categorize existing cooperative multi-agent applications into two types: disordered cooperation and ordered cooperation.
Disordered cooperation. When three or more agents are present within a system, each agent is free to express their perspectives and opinions openly. They can provide feedback and suggestions for modifying responses related to the task at hand [403]. This entire discussion process is uncontrolled, lacking any specific sequence, and without introducing a standardized collaborative workflow. We refer to this kind of multi-agent cooperation as disordered cooperation. | 2309.07864#117 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 117 | {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Make logical inferences based on the information presented in the images to answer the questions with reasoned explanations. Qusetion: {question}. Answer: (8) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. While answering the questions, consider both the explicit and implicit information conveyed in the images to provide comprehensive responses. Qusetion: {question}. Answer: (9) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 | 2309.07915#117 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 118 | ChatLLM network [402] is an exemplary representative of this concept. It emulates the forward and backward propagation process within a neural network, treating each agent as an individual node. Agents in the subsequent layer need to process inputs from all the preceding agents and propagate forward. One potential solution is introducing a dedicated coordinating agent in multi-agent systems, responsible for integrating and organizing responses from all agents, thus updating the final answer [447]. However, consolidating a large amount of feedback data and extracting valuable insights poses a significant challenge for the coordinating agent.
Furthermore, majority voting can also serve as an effective approach to making appropriate decisions. However, there is limited research that integrates this module into multi-agent systems at present. Hamilton [404] trains nine independent supreme justice agents to better predict judicial rulings in the U.S. Supreme Court, and decisions are made through a majority voting process.
Ordered cooperation. When agents in the system adhere to specific rules, for instance, expressing their opinions one by one in a sequential manner, downstream agents only need to focus on the outputs from upstream. This leads to a significant improvement in task completion efficiency, The entire discussion process is highly organized and ordered. We term this kind of multi-agent cooperation as ordered cooperation. Itâs worth noting that systems with only two agents, essentially engaging in a conversational manner through a back-and-forth interaction, also fall under the category of ordered cooperation. | 2309.07864#118 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 118 | {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Formulate your answers by considering the temporal context of the images and the chronological order of events. Qusetion: {question}. Answer: (10) image 0 is [IMG0] {image}. image 1 is [IMG1] {image}. image 2 is [IMG2] {image}. image 3 is [IMG3] {image}. image 4 is [IMG4] {image}. image 5 is [IMG5] {image}. image 6 is [IMG6] {image}. image 7 is [IMG7] {image}. Take into account the emotions, actions, and interactions of the characters in the images when answering the questions. Qusetion: {question}. Answer: | 2309.07915#118 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 119 | CAMEL [108] stands as a successful implementation of a dual-agent cooperative system. Within a role-playing communication framework, agents take on the roles of AI Users (giving instructions) and AI Assistants (fulfilling requests by providing specific solutions). Through multi-turn dialogues, these agents autonomously collaborate to fulfill user instructions [408]. Some researchers have integrated the idea of dual-agent cooperation into a single agentâs operation [185], alternating between rapid and deliberate thought processes to excel in their respective areas of expertise.
29
Talebirad et al. [409] are among the first to systematically introduce a comprehensive LLM-based multi-agent collaboration framework. This paradigm aims to harness the strengths of each individual agent and foster cooperative relationships among them. Many applications of multi-agent cooperation have successfully been built upon this foundation [27; 406; 407; 448]. Furthermore, AgentVerse [410] constructs a versatile, multi-task-tested framework for group agents cooperation. It can assemble a team of agents that dynamically adapt according to the taskâs complexity. To promote more efficient collaboration, researchers hope that agents can learn from successful human cooperation examples [109]. MetaGPT [405] draws inspiration from the classic waterfall model in software development, standardizing agentsâ inputs/outputs as engineering documents. By encoding advanced human process management experience into agent prompts, collaboration among multiple agents becomes more structured. | 2309.07864#119 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 120 | However, during MetaGPTâs practical exploration, a potential threat to multi-agent cooperation has been identified. Without setting corresponding rules, frequent interactions among multiple agents can amplify minor hallucinations indefinitely [405]. For example, in software development, issues like incomplete functions, missing dependencies, and bugs that are imperceptible to the human eye may arise. Introducing techniques like cross-validation [109] or timely external feedback could have a positive impact on the quality of agent outputs.
# 4.2.2 Adversarial Interaction for Advancement
Traditionally, cooperative methods have been extensively explored in multi-agent systems. However, researchers increasingly recognize that introducing concepts from game theory [449; 450] into systems can lead to more robust and efficient behaviors. In competitive environments, agents can swiftly adjust strategies through dynamic interactions, striving to select the most advantageous or rational actions in response to changes caused by other agents. Successful applications in Non- LLM-based competitive domains already exist [360; 451]. AlphaGo Zero [452], for instance, is an agent for Go that achieved significant breakthroughs through a process of self-play. Similarly, within LLM-based multi-agent systems, fostering change among agents can naturally occur through competition, argumentation, and debate [453; 454]. By abandoning rigid beliefs and engaging in thoughtful reflection, adversarial interaction enhances the quality of responses. | 2309.07864#120 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 120 | (1) image 0 is [IMG0] {image}. For the question, carefully examine the image and use your knowledge to determine the correct answer. Question: {question} Answer: (2) image 0 is [IMG0] {image}. Given the picture [IMG0], pay attention to the wording of question and answer the following question: {question} Answer: (3) Read the question carefully and look at image 0 labeled [IMG0] {image}. Use your intuition and common sense when answering the question: {question} (4) Consider all of the information in image 0 labeled [IMG0] {image} when answering the question: {question} (5) The image 0: [IMG0] {image} is the primary source of information for answering the questions. Please refer to it carefully when answering question: {question} Answer: (6) Pay close attention to the details in image 0: [IMG0] {image}, as they may provide important information for answering the questions. Question:{question} Answer: (7) image 0 is [IMG0] {image}. Make sure your answer is relevant to the question and the image 0. Question:{question} Answer: (8) image 0 is [IMG0] {image}. Do not | 2309.07915#120 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 121 | Researchers first delve into the fundamental debating abilities of LLM-based agents [129; 412]. Findings demonstrate that when multiple agents express their arguments in the state of âtit for tatâ, one agent can receive substantial external feedback from other agents, thereby correcting its distorted thoughts [112]. Consequently, multi-agent adversarial systems find broad applicability in scenarios requiring high-quality responses and accurate decision-making. In reasoning tasks, Du et al. [111] introduce the concept of debate, endowing agents with responses from fellow peers. When these responses diverge from an agentâs own judgments, a âmentalâ argumentation occurs, leading to refined solutions. ChatEval [171] establishes a role-playing-based multi-agent referee team. Through self-initiated debates, agents evaluate the quality of text generated by LLMs, reaching a level of excellence comparable to human evaluators.
The performance of the multi-agent adversarial system has shown considerable promise. However, the system is essentially dependent on the strength of LLMs and faces several basic challenges:
With prolonged debate, LLMâs limited context cannot process the entire input. ⢠In a multi-agent environment, computational overhead significantly increases. ⢠Multi-agent negotiation may converge to an incorrect consensus, and all agents are firmly convinced
of its accuracy [111]. | 2309.07864#121 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 121 | {image}. Make sure your answer is relevant to the question and the image 0. Question:{question} Answer: (8) image 0 is [IMG0] {image}. Do not provide answers based on assumptions or personal opinions; only use the information presented in the image 0 and the question. Question:{question} Answer: (9) Look at image 0 labeled [IMG0] {image} carefully and read question: {question}. Try to understand what is being asked before selecting an answer. (10) image 0 is [IMG0] {image}. Consider all of the information in image 0 labeled [IMG0] when answering question. Look at objects, colors, shapes, and other details that may be relevant to question: {question} Answer: | 2309.07915#121 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 122 | of its accuracy [111].
The development of multi-agent systems is still far from being mature and feasible. Introducing human guides when appropriate to compensate for agentsâ shortcomings is a good choice to promote the further advancements of agents.
4.3
# Interactive Engagement between Human and Agent
Human-agent interaction, as the name suggests, involves agents collaborating with humans to accom- plish tasks. With the enhancement of agent capabilities, human involvement becomes progressively essential to effectively guide and oversee agentsâ actions, ensuring they align with human require- ments and objectives [455; 456]. Throughout the interaction, humans play a pivotal role by offering
30
Instructor-Executor Paradigm Equal Partnership Paradigm 3) x Designing an energy- So stressed lately, can't get myself to do anything. > saving product. ~ It's tough, everything feels heavy right now. Perpetual motion is impossible. (ps) 5/10 â . Yeah, thanks for understanding. Ow The product is a i > perpetual motion i Instruct : | / Feedback machine capable of... H 7 id Human as instructor | fe The product is i any capable of efficient... | Se r4 Agent as executor Output y â 7 = ) nw | 2309.07864#122 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 123 | Figure 10: Two paradigms of human-agent interaction. In the instructor-executor paradigm (left), humans provide instructions or feedback, while agents act as executors. In the equal partnership paradigm (right), agents are human-like, able to engage in empathetic conversation and participate in collaborative tasks with humans.
guidance or by regulating the safety, legality, and ethical conduct of agents. This is particularly crucial in specialized domains, such as medicine where data privacy concerns exist [457]. In such cases, human involvement can serve as a valuable means to compensate for the lack of data, thereby facili- tating smoother and more secure collaborative processes. Moreover, considering the anthropological aspect, language acquisition in humans predominantly occurs through communication and interaction [458], as opposed to merely consuming written content. As a result, agents shouldnât exclusively depend on models trained with pre-annotated datasets; instead, they should evolve through online interaction and engagement. The interaction between humans and agents can be classified into two paradigms (see Figure 10): (1) Unequal interaction (i.e., instructor-executor paradigm): humans serve as issuers of instructions, while agents act as executors, essentially participating as assistants to humans in collaboration. (2) Equal interaction (i.e., equal partnership paradigm): agents reach the level of humans, participating on an equal footing with humans in interaction. | 2309.07864#123 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 123 | (1) {prompt}. Given the options below, based on the photo [IMG0], select the most suitable answer for the following question: {question}. Options: {options} (2) Please read the question and answer choices carefully. Select the option that best answers the question. {prompt}. Given the images, select the best option that answers the question from the available answer choices. Question: {question} Options: {options} Answer: (3) Choose the answer that best fits the description or action in the image. {prompt}. Consider the scene depicted in the images, choose the answer that best fits the description or action in the image from the available answer choices. Question: {question} Options: {options} Answer: (4) {prompt}. Examine the details in the pictures and use them to inform your answer to the question. Choose the best answer from the available options. Question: {question} Options: {options} Answer: (5) Look closely at the images and think about what is happening in the scene. {prompt}. Given the pictures, carefully examine the images and select the best answer that describes what is happening in the scene from the available answer choices. Question: {question} Options: {options} Answer: (6) Consider all of the details in the image and the | 2309.07915#123 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 124 | # 4.3.1 Instructor-Executor Paradigm
The simplest approach involves human guidance throughout the process: humans provide clear and specific instructions directly, while the agentsâ role is to understand natural language commands from humans and translate them into corresponding actions [459; 460; 461]. In §4.1, we have presented the scenario where agents solve single-step problems or receive high-level instructions from humans. Considering the interactive nature of language, in this section, we assume that the dialogue between humans and agents is also interactive. Thanks to LLMs, the agents are able to interact with humans in a conversational manner: the agent responds to each human instruction, refining its action through alternating iterations to ultimately meet human requirements [190]. While this approach does achieve the goal of human-agent interaction, it places significant demands on humans. It requires a substantial amount of human effort and, in certain tasks, might even necessitate a high level of expertise. To alleviate this issue, the agent can be empowered to autonomously accomplish tasks, while humans only need to provide feedback in certain circumstances. Here, we roughly categorize feedback into two types: quantitative feedback and qualitative feedback. | 2309.07864#124 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 124 | what is happening in the scene from the available answer choices. Question: {question} Options: {options} Answer: (6) Consider all of the details in the image and the wording of the question before making your selection. {prompt}. Given the pictures, consider all of the details in the image and the wording of the question before selecting the best answer choice from the available options. Question: {question} Options: {options} Answer: (7) Remember to use your common sense and reasoning skills to choose the best answer. {prompt}. Think about the images, use your common sense and reasoning skills to select the best answer choice from the available options. Question: {question} Options: {options} Answer: (8) {prompt}. Select the answer that most closely matches the description or action in images, based on the available options. Given the picture [IMG0], select the answer choice that most closely matches the description or action in the image from the available options. Question: {question} Options: {options} Answer: (9) Choose the option that provides the most accurate and complete answer to the question, based on the available information. {prompt} Given the images, select the option that provides the most accurate and complete answer to the question from the available answer choices. Question: {question} Options: {options} | 2309.07915#124 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 125 | Quantitative feedback. The forms of quantitative feedback mainly include absolute evaluations like binary scores and ratings, as well as relative scores. Binary feedback refers to the positive and negative evaluations provided by humans, which agents utilize to enhance their self-optimization [462; 463; 464; 465; 466]. Comprising only two categories, this type of user feedback is often easy to collect, but sometimes it may oversimplify user intent by neglecting potential intermediate scenarios. To showcase these intermediate scenarios, researchers attempt to expand from binary feedback to rating feedback, which involves categorizing into more fine-grained levels. However, the results of Kreutzer et al. [467] suggest that there could be significant discrepancies between user and expert annotations for such multi-level artificial ratings, indicating that this labeling method might be
31
inefficient or less reliable. Furthermore, agents can learn human preference from comparative scores like multiple choice [468; 469]. | 2309.07864#125 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 126 | Qualitative feedback. Text feedback is usually offered in natural language, particularly for re- sponses that may need improvement. The format of this feedback is quite flexible. Humans provide advice on how to modify outputs generated by agents, and the agents then incorporate these sug- gestions to refine their subsequent outputs [470; 471]. For agents without multimodal perception capabilities, humans can also act as critics, offering visual critiques [190], for instance. Additionally, agents can utilize a memory module to store feedback for future reuse [472]. In [473], humans give feedback on the initial output generated by agents, prompting the agents to formulate various improve- ment proposals. The agents then discern and adopt the most suitable proposal, harmonizing with the human feedback. While this approach can better convey human intention compared to quantitative feedback, it might be more challenging for the agents to comprehend. Xu et al. [474] compare various types of feedback and observe that combining multiple types of feedback can yield better results. Re-training models based on feedback from multiple rounds of interaction (i.e., continual learning) can further enhance effectiveness. Of course, the collaborative nature of | 2309.07864#126 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 127 | models based on feedback from multiple rounds of interaction (i.e., continual learning) can further enhance effectiveness. Of course, the collaborative nature of human-agent interaction also allows humans to directly improve the content generated by agents. This could involve modifying intermediate links [189; 475] or adjusting the conversation content [421]. In some studies, agents can autonomously judge whether the conversation is proceeding smoothly and seek feedback when errors are generated [476; 477]. Humans can also choose to participate in feedback at any time, guiding the agentâs learning in the right direction [420]. | 2309.07864#127 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 127 | (1) image 0 is [IMG0] {image}. Given the picture [IMG0], answer the following question: {question} Is this correct? True or False. Answer: (2) For the question: {question}, carefully examine image 0: [IMG0] {image} and use your knowledge to determine if the statement is True or False. (3) Please refer to image 0: [IMG0] {image} when answering the question: {question} Is this correct? True or False. Answer: (4) Remember to consider both the question and the information presented in image 0: [IMG0] {image} when answering the True or False question: {question} (5) image 0 is [IMG0] {image}.Answer the question: {question} based on the information presented in the image 0 and determine if the statement is True or False. (6) Carefully examine the image 0: [IMG0] {image} and use your knowledge to determine whether the statement is True or False. Question: {question} (7) Remember that the answer to each question is either True or False, so make sure you choose the correct option based on the information presented in image 0: [IMG0] {image}. Question: {question} (8) Make sure your answers | 2309.07915#127 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 128 | Currently, in addition to tasks like writing [466] and semantic parsing [463; 471], the model of using agents as human assistants also holds tremendous potential in the field of education. For instance, Kalvakurth et al. [413] propose the robot Dona, which supports multimodal interactions to assist students with registration. Gvirsman et al. [478] focus on early childhood education, achieving multifaceted interactions between young children, parents, and agents. Agents can also aid in human understanding and utilization of mathematics [414]. In the field of medicine, some medical agents have been proposed, showing enormous potential in terms of diagnosis assistance, consultations, and more [416; 417]. Especially in mental health, research has shown that agents can lead to increased accessibility due to benefits such as reduced cost, time efficiency, and anonymity compared to face-to- face treatment [479]. Leveraging such advantages, agents have found widespread applications. Ali et al. [418] design LISSA for online communication with adolescents on the autism spectrum, analyzing usersâ speech and facial expressions in real-time to engage them in multi-topic conversations and | 2309.07864#128 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 128 | make sure you choose the correct option based on the information presented in image 0: [IMG0] {image}. Question: {question} (8) Make sure your answers are based on the information presented in the image 0: [IMG0] {image}. Question:{question} Is this correct?True or False. Answer: (9) Carefully examine image 0 labeled [IMG0] {image} before answering the question. Question:{question} True or False? Answer: | 2309.07915#128 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 129 | online communication with adolescents on the autism spectrum, analyzing usersâ speech and facial expressions in real-time to engage them in multi-topic conversations and provide instant feedback regarding non-verbal cues. Hsu et al. [415] build contextualized language generation approaches to provide tailored assistance for users who seek support on diverse topics ranging from relationship stress to anxiety. Furthermore, in other industries including business, a good agent possesses the capability to provide automated services or assist humans in completing tasks, thereby effectively reducing labor costs [419]. Amidst the pursuit of AGI, efforts are directed towards enhancing the multifaceted capabilities of general agents, creating agents that can function as universal assistants in real-life scenarios [422]. | 2309.07864#129 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 130 | # 4.3.2 Equal Partnership Paradigm
Empathetic communicator. With the rapid development of AI, conversational agents have garnered extensive attention in research fields in various forms, such as personalized custom roles and virtual chatbots [480]. It has found practical applications in everyday life, business, education, healthcare, and more [481; 482; 483]. However, in the eyes of the public, agents are perceived as emotionless machines, and can never replace humans. Although it is intuitive that agents themselves do not possess emotions, can we enable them to exhibit emotions and thereby bridge the gap between agents and humans? Therefore, a plethora of research endeavors have embarked on delving into the empathetic capacities of agents. This endeavor seeks to infuse a human touch into these agents, enabling them to detect sentiments and emotions from human expressions, ultimately crafting emotionally resonant dialogues [484; 485; 486; 487; 488; 489; 490; 491]. Apart from generating emotionally charged language, agents can dynamically adjust their emotional states and display them through facial expressions and voice [423]. These studies, viewing agents as empathetic communicators, not only enhance user satisfaction but also make significant progress in fields like healthcare [415; 418; 492] and business marketing [424]. Unlike simple rule-based conversation agents, agents with empathetic capacities can tailor their interactions to meet usersâ emotional needs [493].
32 | 2309.07864#130 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 130 | Model Commonsense Reasoning Numerical Calculation Text Translation Code Reasoning Avg. MiniGPT-4 (Zhu et al., 2023) VisualGLM-6B (Du et al., 2021) LLaVA (Liu et al., 2023b) Lynx (Zeng et al., 2023) MultiModal-GPT (Gong et al., 2023) LLaMA-Adapter-V2 (Gao et al., 2023) VPGTrans (Zhang et al., 2023a) LaVIN (Luo et al., 2023) GIT2 (Wang et al., 2022a) mPLUG-Owl (Ye et al., 2023) BLIP-2 (Li et al., 2023d) InstructBLIP (Dai et al., 2023) Otter (Li et al., 2023a) Cheetor (Li et al., 2023c) LRV-Instruction (Liu et al., 2023a) BLIVA (Hu et al., 2023) 59.29 39.29 57.14 110.71 49.29 81.43 64.29 87.14 99.29 78.57 110.00 129.29 106.43 98.57 100.71 136.43 45.00 45.00 50.00 17.50 62.50 62.50 50.00 | 2309.07915#130 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 131 | Human-level participant. Furthermore, we hope that agents can be involved in the normal lives of humans, cooperating with humans to complete tasks from a human-level perspective. In the field of games, agents have already reached a high level. As early as the 1990s, IBM introduced the AI Deep Blue [451], which defeated the reigning world champion in chess at that time. However, in pure competitive environments such as chess [451], Go [360], and poker [494], the value of communication was not emphasized [426]. In many gaming tasks, players need to collaborate with each other, devising unified cooperative strategies through effective negotiation [425; 426; 495; 496]. In these scenarios, agents need to first understand the beliefs, goals, and intentions of others, formulate joint action plans for their objectives, and also provide relevant suggestions to facilitate the acceptance of cooperative actions by other agents or humans. In comparison to pure agent cooperation, we desire human involvement for two main reasons: first, to ensure interpretability, as interactions between pure agents could generate incomprehensible language [495]; second, to ensure controllability, as the pursuit of agents with complete | 2309.07864#131 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 131 | 110.00 129.29 106.43 98.57 100.71 136.43 45.00 45.00 50.00 17.50 62.50 62.50 50.00 65.00 50.00 60.00 40.00 40.00 72.50 77.50 70.00 57.50 0.00 50.00 57.50 42.50 60.00 50.00 77.50 47.50 67.50 80.00 65.00 65.00 57.50 57.50 85.00 77.50 40.00 47.50 50.00 45.00 55.00 55.00 57.50 50.00 45.00 57.50 75.00 57.50 70.00 87.50 72.50 60.00 36.07 45.45 53.66 53.93 56.70 62.23 62.32 62.41 65.45 69.02 72.50 72.95 76.61 78.02 82.05 82.86 MMICL 136.43 82.50 132.50 77.50 107.23 | 2309.07915#131 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 132 | interpretability, as interactions between pure agents could generate incomprehensible language [495]; second, to ensure controllability, as the pursuit of agents with complete âfree willâ might lead to unforeseen negative consequences, carrying the potential for disruption. Apart from gaming scenarios, agents also demonstrate human-level capabilities in other scenarios involving human interaction, showcasing skills in strategy formulation, negotiation, and more. Agents can collaborate with one or multiple humans, determining the shared knowledge among the cooperative partners, identifying which information is relevant to decision- making, posing questions, and engaging in reasoning to complete tasks such as allocation, planning, and scheduling [427]. Furthermore, agents possess persuasive abilities [497], dynamically influencing human viewpoints in various interactive scenarios [428]. | 2309.07864#132 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 132 | Table 16: Evaluation of cognition. In the MME benchmark, each image will have two questions, with answers restricted to âyesâ or ânoâ. The evaluation metrics for this benchmark include ACC and ACC+. ACC refers to the accuracy calculated for each question, while ACC+ represents the accuracy for each image, where both questions must be answered correctly. The Avg. metric denotes the average value across all numbers. It is important to note that all the reported figures for the baseline methods are obtained from the MME benchmark (Fu et al., 2023). We use the FLAN-T5-XXL version of MMICL to evaluate the performance.
# G EXPERIMENT DETAILS | 2309.07915#132 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 133 | The goal of the field of human-agent interaction is to learn and understand humans, develop technology and tools based on human needs, and ultimately enable comfortable, efficient, and secure interactions between humans and agents. Currently, significant breakthroughs have been achieved in terms of usability in this field. In the future, human-agent interaction will continue to focus on enhancing user experience, enabling agents to better assist humans in accomplishing more complex tasks in various domains. The ultimate aim is not to make agents more powerful but to better equip humans with agents. Considering practical applications in daily life, isolated interactions between humans and agents are not realistic. Robots will become colleagues, assistants, and even companions. Therefore, future agents will be integrated into a social network [498], embodying a certain level of social value.
# 5 Agent Society: From Individuality to Sociality | 2309.07864#133 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 133 | # G EXPERIMENT DETAILS
Following Chung et al. (2022), we use FLANT5-XL and FLANT5-XXL (Chung et al., 2022) as the backbone LLMs. In Stage I, we set the vision encoder and language model to be frozen and utilize the COCO captioning data and LAION-400M data (Schuhmann et al., 2021) to perform feature alignment training on the Q-former. We keep the other part of the VLM frozen and jointly train the Q-former and projection layer. To benefit from BLIP-2âs significant visual representation extraction ability, we integrate its powerful vision encoder to initialize the Q-former and projection layer. ||. In Stage II, we train the model for three epochs with a lower learning rate of 1e ´ 5. The weights of mapping query and value vectors in the attention layer of LLMs are learnable in this stage to better adapt to the multi-modal prompts with multiple images. In this stage, we freeze the visual encoder, Q-former, and the backbone LLM and jointly train the projection layer, the query vectors, and the value vectors of the LLM. | 2309.07915#133 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 134 | # 5 Agent Society: From Individuality to Sociality
For an extended period, sociologists have frequently conducted social experiments to observe specific social phenomena within controlled environments. Notable examples include the Hawthorne Experi- ment2 and the Stanford Prison Experiment3. Subsequently, researchers began employing animals in social simulations, exemplified by the Mouse Utopia Experiment4. However, these experiments invariably utilized living organisms as participants, made it difficult to carry out various interventions, lack flexibility, and inefficient in terms of time. Thus, researchers and practitioners envision an inter- active artificial society wherein human behavior can be performed through trustworthy agents [521]. From sandbox games such as The Sims to the concept of Metaverse, we can see how âsimulated societyâ is defined in peopleâs minds: environment and the individuals interacting in it. Behind each individual can be a piece of program, a real human, or a LLM-based agent as described in the previous sections [22; 522; 523]. Then, the interaction between individuals also contributes to the birth of sociality. | 2309.07864#134 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 134 | All experiments are conducted with 6 NVIDIA A40 GPUs with the zero2-offload (Rajbhandari et al., 2020) of Deepspeed (Rasley et al., 2020) with the trainer of huggingface transformers (Wolf et al., 2020). The batch size is 10 and 4 for MMICL (FLAN-T5-XL) and MMICL (FLAN-T5-XXL), respectively. The largest MMICL (FLAN-T5-XXL) requires about two days for the Stage II.
# H MME BENCHMARK
MME comprehensively evaluates VLMs with 14 sub-tasks that encompass perception and cognition abilities. Other than OCR, perception ability includes the recognition of coarse-grained and fine- grained objects. The former identifies the existence, count, position, and color of objects. The latter recognizes movie posters, celebrities, scenes, landmarks, and artworks. The cognition includes commonsense reasoning, numerical calculation, text translation, and code reasoning. | 2309.07915#134 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 135 | In this section, to unify existing efforts and promote a comprehensive understanding of the agent society, we first analyze the behaviors and personalities of LLM-based agents, shedding light on their journey from individuality to sociability (§ 5.1). Subsequently, we introduce a general categorization of the diverse environments for agents to perform their behaviors and engage in interactions (§ 5.2). Finally, we will talk about how the agent society works, what insights people can get from it, and the risks we need to be aware of (§ 5.3). The main explorations are listed in Figure 11.
# 2https://www.bl.uk/people/elton-mayo 3https://www.prisonexp.org/conclusion/ 4https://sproutsschools.com/behavioral-sink-the-mouse-utopia-experiments/
33 | 2309.07864#135 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 136 | Individual behaviors PaLM-E [120], Reflexion [169], Voyager [190], LLM+P [125], CoT [95], ReAct [91], etc. Social Behavior §5.1.1 Group behaviors ChatDev [109], ChatEval [171], AutoGen [406], RoCo [403], ProAgent [407], AgentVerse [410], Xu et al. [499], etc. Behavior and Personality §5.1 Cognition Binz et al. [500], Dasgupta et al. [501], Dhingra et al. [502], Hagendorff et al.[503], etc. Personality §5.1.2 Emotion Wang et al. [504], Curry et al. [505], Elyoseph et al. [506], Habibi et al. [507], etc. Character Caron et al. [508], Pan et al. [509], Li et al. [510], Safdari et al. [511], etc. Text-based Environment §5.2.1 Textworld [512], Urbanek et al. [513], Hausknecht et al. [514], Am- manabrolu et al. [432], CAMEL [108], Hoodwinked [515], etc. Social Environment §5.2 Virtual | 2309.07864#136 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 136 | Avg. Model 50.28 50.00 LLaVA 58.17 68.33 MiniGPT-4 65.47 61.67 MultiModal-GPT 70.53 85.00 VisualGLM-6B 79.05 70.00 VPGTrans 96.36 LaVIN 185.00 97.27 LLaMA-Adapter-V2 120.00 120.00 mPLUG-Owl 96.73 185.00 143.33 66.67 153.33 72.50 123.81 101.18 153.00 79.75 134.25 121.28 InstructBLIP 160.00 135.00 73.33 148.33 110.00 141.84 105.59 145.25 138.00 136.50 129.38 BLIP-2 195.00 151.67 90.00 170.00 77.50 124.83 118.24 164.50 162.00 119.50 137.32 Lynx 190.00 118.33 96.67 158.33 65.00 112.59 145.88 158.50 140.50 146.25 133.21 GIT2 88.33 86.67 113.33 72.50 138.78 172.65 158.75 137.25 129.00 129.23 195.00 Otter 180.00 Cheetor 96.67 80.00 116.67 100.00 147.28 164.12 156.00 | 2309.07915#136 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 137 | al. [514], Am- manabrolu et al. [432], CAMEL [108], Hoodwinked [515], etc. Social Environment §5.2 Virtual Sandbox Environment §5.2.2 Generative Agents [22], AgentSims [174], Minedojo [337], Voyager [190], Plan4mc [401], SANDBOX [27], etc. Physical Environment §5.2.3 Interactive Language [333], PaLM-E [120], RoboAgent [516], AVLEN [375], etc. Society Simulation §5.3 Generative Agents [22], AgentSims [174], Social Simulacra [517], S3 [518], RecAgent [519], Williams et al. [520], SANDBOX [27], etc. | 2309.07864#137 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 138 | # Agent Society: From In- dividuality to Sociability
Figure 11: Typology of society of LLM-based agents.
# 5.1 Behavior and Personality of LLM-based Agents
As noted by sociologists, individuals can be analyzed in terms of both external and internal dimensions [524]. The external deals with observable behaviors, while the internal relates to dispositions, values, and feelings. As shown in Figure 12, this framework offers a perspective on emergent behaviors and personalities in LLM-based agents. Externally, we can observe the sociological behaviors of agents (§ 5.1.1), including how agents act individually and interact with their environment. Internally, agents may exhibit intricate aspects of the personality (§ 5.1.2), such as cognition, emotion, and character, that shape their behavioral responses.
# 5.1.1 Social Behavior
As Troitzsch et al. [525] stated, the agent society represents a complex system comprising individual and group social activities. Recently, LLM-based agents have exhibited spontaneous social behaviors in an environment where both cooperation and competition coexist [499]. The emergent behaviors intertwine to shape the social interactions [518].
Foundational individual behaviors. Individual behaviors arise through the interplay between internal cognitive processes and external environmental factors. These behaviors form the basis of how agents operate and develop as individuals within society. They can be classified into three core dimensions: | 2309.07864#138 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 139 | ⢠Input behaviors refers to the absorption of information from the surroundings. This includes perceiving sensory stimuli [120] and storing them as memories [169]. These behaviors lay the groundwork for how an individual understands the external world.
⢠Internalizing behaviors involve inward cognitive processing within an individual. This category encompasses activities such as planning [125], reasoning [95], reflection [91], and knowledge pre- cipitation [108; 405]. These introspective processes are essential for maturity and self-improvement.
⢠Output behaviors constitute outward actions and expressions. The actions can range from object manipulation [120] to structure construction [190]. By performing these actions, agents change the states of the surroundings. In addition, agents can express their opinions and broadcast information
34
Simulated Agent Society Internalizing Behaviors | 2309.07864#139 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 139 | Model Position ACC ACC+ ACC ACC+ ACC ACC+ ACC ACC+ ACC ACC+ Existence Count Color OCR 86.67 73.33 75.00 60.00 56.67 16.67 81.67 66.67 70.00 40.00 62.67 BLIP-2 50.00 25.49 0.00 LLaVA 75.00 60.00 66.67 56.67 56.67 33.33 71.67 53.33 62.50 35.00 57.08 MiniGPT-4 51.67 0.00 mPLUG-Owl 55.00 10.00 34.00 73.33 46.67 50.00 50.00 55.00 16.67 57.50 15.00 38.92 3.33 LLaMA-Adapter-V2 76.67 56.67 58.33 43.33 28.08 42.50 51.67 0.00 61.67 23.33 50.00 VisualGLM-6B 3.33 48.33 50.00 53.33 Otter 26.50 50.00 51.67 0.00 50.00 6.67 3.33 45.00 13.33 55.00 13.33 57.50 25.00 32.42 46.67 10.00 51.67 Multimodal-GPT 27.00 0.00 50.00 56.67 13.33 50.00 | 2309.07915#139 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 140 | 34
Simulated Agent Society Internalizing Behaviors
Figure 12: Overview of Simulated Agent Society. The whole framework is divided into two parts: the Agent and the Environment. We can observe in this figure that: (1) Left: At the individual level, an agent exhibits internalizing behaviors like planning, reasoning, and reflection. It also displays intrinsic personality traits involving cognition, emotion, and character. (2) Mid: An agent and other agents can form groups and exhibit group behaviors, such as cooperation. (3) Right: The environment, whether virtual or physical, contains human actors and all available resources. For a single agent, other agents are also part of the environment. (4) The agents have the ability to interact with the environment via perception and action.
to interact with others [405]. By doing so, agents exchange their thoughts and beliefs with others, influencing the information flow within the environment.
Dynamic group behaviors. A group is essentially a gathering of two or more individuals partici- pating in shared activities within a defined social context [526]. The attributes of a group are never static; instead, they evolve due to member interactions and environmental influences. This flexibility gives rise to numerous group behaviors, each with a distinctive impact on the larger societal group. The categories of group behaviors include: | 2309.07864#140 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 141 | ⢠Positive group behaviors are actions that foster unity, collaboration, and collective well-being [22; 109; 171; 403; 406; 407]. A prime example is cooperative teamwork, which is achieved through brainstorming discussions [171], effective conversations [406], and project management [405]. Agents share insights, resources, and expertise. This encourages harmonious teamwork and enables the agents to leverage their unique skills to accomplish shared goals. Altruistic contributions are also noteworthy. Some LLM-based agents serve as volunteers and willingly offer support to assist fellow group members, promoting cooperation and mutual aid [410].
⢠Neutral group behaviors. In human society, strong personal values vary widely and tend toward individualism and competitiveness. In contrast, LLMs which are designed with an emphasis on being âhelpful, honest, and harmlessâ [527] often demonstrate a tendency towards neutrality [528]. This alignment with neutral values leads to conformity behaviors including mimicry, spectating, and reluctance to oppose majorities.
⢠Negative group behaviors can undermine the effectiveness and coherence of an agent group. Conflict and disagreement arising from heated debates or disputes among agents may lead to internal tensions. Furthermore, recent studies have revealed that agents may exhibit confrontational actions [499] and even resort to destructive behaviors, such as destroying other agents or the environment in pursuit of efficiency gains [410].
35
# 5.1.2 Personality | 2309.07864#141 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 141 | Table 18: Fine-grained result of MME benchmark
6B (Du et al., 2021), VPGTrans (Zhang et al., 2023a) , LaVIN (Luo et al., 2023), mPLUG-Owl (Ye et al., 2023), LLaMA-Adapter-V2 (Gao et al., 2023), InstructBLIP (Dai et al., 2023), Otter (Li et al., 2023a), BLIP-2 (Li et al., 2023d), LRV-Instruction (Liu et al., 2023a), Cheetor (Li et al., 2023c), GIT2 (Wang et al., 2022a), Lynx (Zeng et al., 2023), BLIVA (Hu et al., 2023). We also provide more detail evaluation results for MMICL at Table 17, Table 18, Table 19, and Table 20. Results show that MMICL can achieve the best average scores in comparisons with current VLMs.
# I MMBENCH BENCHMARK
MMBench (Liu et al., 2023c) is a thoughtfully designed benchmark that thoroughly evaluates the diverse skills of vision-language models. The results of all different VLMs from the test set are presented in Table 21. | 2309.07915#141 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 142 | 35
# 5.1.2 Personality
Recent advances in LLMs have provided glimpses of human-like intelligence [529]. Just as human personality emerges through socialization, agents also exhibit a form of personality that develops through interactions with the group and the environment [530; 531]. The widely accepted definition of personality refers to cognitive, emotional, and character traits that shape behaviors [532]. In the subsequent paragraphs, we will delve into each facet of personality.
Cognitive abilities. Cognitive abilities generally refer to the mental processes of gaining knowledge and comprehension, including thinking, judging, and problem-solving. Recent studies have started leveraging cognitive psychology methods to investigate emerging sociological personalities of LLM- based agents through various lenses [500; 502; 503]. A series of classic experiments from the psychology of judgment and decision-making have been applied to test agent systems [501; 500; 502; 533]. Specifically, LLMs have been examined using the Cognitive Reflection Test (CRT) to underscore their capacity for deliberate thinking beyond mere intuition [534; 535]. These studies indicate that LLM-based agents exhibit a level of intelligence that mirrors human cognition in certain respects. | 2309.07864#142 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 142 | # J UNDERSTANDING MULTIPLE IMAGES IN THE MULTI-MODAL PROMPT
Videos contain more temporal information compared to static images. We test MMICL across different video-languages tasks to evaluate whether the MMICL is able to support the multiple images in the complex prompts. The result is present in Table 22. Our model, MMICL, achieved significant improvement of 10.86, 4.53, and 2.45 points for MSVD-QA (Chen & Dolan, 2011),
25
Preprint | 2309.07915#142 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 143 | Emotional intelligence. Emotions, distinct from cognitive abilities, involve subjective feelings and mood states such as joy, sadness, fear, and anger. With the increasing potency of LLMs, LLM-based agents are now demonstrating not only sophisticated reasoning and cognitive tasks but also a nuanced understanding of emotions [31].
Recent research has explored the emotional intelligence (EI) of LLMs, including emotion recognition, interpretation, and understanding. Wang et al. found that LLMs align with human emotions and values when evaluated on EI benchmarks [504]. In addition, studies have shown that LLMs can accurately identify user emotions and even exhibit empathy [505; 506]. More advanced agents are also capable of emotion regulation, actively adjusting their emotional responses to provide affective empathy [423] and mental wellness support [507; 536]. It contributes to the development of empathetic artificial intelligence (EAI).
These advances highlight the growing potential of LLMs to exhibit emotional intelligence, a crucial facet of achieving AGI. Bates et al. [537] explored the role of emotion modeling in creating more believable agents. By developing socio-emotional skills and integrating them into agent architectures, LLM-based agents may be able to engage in more naturalistic interactions.
Character portrayal. While cognition involves mental abilities and emotion relates to subjective experiences, the narrower concept of personality typically pertains to distinctive character patterns. | 2309.07864#143 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 143 | Model Scene ACC ACC+ ACC ACC+ ACC ACC+ ACC ACC+ ACC ACC+ Poster Celebrity Landmark Artwork Avg. BLIP-2 LLaVA MiniGPT-4 mPLUG-Owl LLaMA-Adapter-V2 52.72 10.88 55.00 21.18 68.75 44.50 53.00 InstructBLIP VisualGLM-6B Otter Multimodal-GPT PandaGPT 79.25 62.59 58.53 37.06 81.25 64.00 79.00 59.00 76.50 60.00 66.72 50.00 24.78 0.00 49.32 19.73 58.82 24.71 68.25 45.50 59.75 30.50 56.25 27.00 44.00 77.89 57.14 66.18 34.12 78.00 57.50 86.25 73.00 63.25 33.00 62.63 38.2 74.15 49.66 67.06 34.12 84.00 69.00 59.75 20.00 76.75 57.50 59.20 81.75 64.50 59.75 24.00 55.75 20.00 42.56 54.42 12.24 50.88 27.47 4.50 55.00 14.50 52.00 50.00 0.00 45.24 45.24 | 2309.07915#143 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 144 | Character portrayal. While cognition involves mental abilities and emotion relates to subjective experiences, the narrower concept of personality typically pertains to distinctive character patterns.
To understand and analyze a character in LLMs, researchers have utilized several well-established frameworks like the Big Five personality trait measure [508; 538] and the MyersâBriggs Type Indicator (MBTI) [508; 509; 538]. These frameworks provide valuable insights into the emerging character traits exhibited by LLM-based agents. In addition, investigations of potentially harmful dark personality traits underscore the complexity and multifaceted nature of character portrayal in these agents [510].
Recent work has also explored customizable character portrayal in LLM-based agents [511]. By optimizing LLMs through careful techniques, users can align with desired profiles and shape diverse and relatable agents. One effective approach is prompt engineering, which involves the concise summaries that encapsulate desired character traits, interests, or other attributes [22; 517]. These prompts serve as cues for LLM-based agents, directing their responses and behaviors to align with the outlined character portrayal. Furthermore, personality-enriched datasets can also be used to train and fine-tune LLM-based agents [539; 540]. Through exposure to these datasets, LLM-based agents gradually internalize and exhibit distinct personality traits.
# 5.2 Environment for Agent Society | 2309.07864#144 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 145 | # 5.2 Environment for Agent Society
In the context of simulation, the whole society consists of not only solitary agents but also the environment where agents inhabit, sense, and act [541]. The environment impacts sensory inputs, action space, and interactive potential of agents. In turn, agents influence the state of the environment through their behaviors and decisions. As shown in Figure 12, for a single agent, the environment
36
refers to other autonomous agents, human actors, and external factors. It provides the necessary resources and stimuli for agents. In this section, we examine fundamental characteristics, advantages, and limitations of various environmental paradigms, including text-based environment (§ 5.2.1), virtual sandbox environment (§ 5.2.2), and physical environment (§ 5.2.3).
# 5.2.1 Text-based Environment
Since LLMs primarily rely on language as their input and output format, the text-based environment serves as the most natural platform for agents to operate in. It is shaped by natural language descriptions without direct involvement of other modalities. Agents exist in the text world and rely on textual resources to perceive, reason, and take actions. | 2309.07864#145 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 146 | In text-based environments, entities and resources can be presented in two main textual forms, including natural and structured. Natural text uses descriptive language to convey information, like character dialogue or scene setting. For instance, consider a simple scenario described textually: âYou are standing in an open field west of a white house, with a boarded front door. There is a small mailbox hereâ [512]. Here, object attributes and locations are conveyed purely through plain text. On the other hand, structured text follows standardized formats, such as technical documentation and hypertext. Technical documentation uses templates to provide operational details and domain knowledge about tool use. Hypertext condenses complex information from sources like web pages [389; 388; 391; 392] or diagrams into a structured format. Structured text transforms complex details into accessible references for agents. | 2309.07864#146 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 146 | Model Common. Reason. Numerical Calculation Text Translation Code Reason. ACC ACC+ ACC ACC ACC ACC+ ACC ACC+ Avg. BLIP-2 68.57 LLaVA 49.29 MiniGPT-4 58.57 59.29 mPLUG-Owl LLaMA-Ada.-V2 54.29 75.00 InstructBLIP 45.71 VisualGLM-6B Otter 48.57 MultiModal-GPT 45.71 56.43 PandaGPT 41.43 11.43 34.29 24.29 14.29 54.29 12.86 10.00 5.71 17.14 40.00 50.00 47.50 50.00 52.50 35.00 45.00 47.50 50.00 50.00 0.00 0.00 20.00 10.00 5.00 5.00 0.00 10.00 20.00 0.00 55.00 52.50 42.50 60.00 52.50 55.00 55.00 55.00 50.00 52.50 10.00 5.00 15.00 20.00 5.00 10.00 10.00 10.00 5.00 5.00 55.00 20.00 36.25 50.00 27.27 0.00 67.50 45.00 41.30 47.50 10.00 35.14 52.50 10.00 | 2309.07915#146 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 147 | The text-based environment provides a flexible framework for creating different text worlds for various goals. The textual medium enables environments to be easily adapted for tasks like interactive dialog and text-based games. In interactive communication processes like CAMEL [108], the text is the primary medium for describing tasks, introducing roles, and facilitating problem-solving. In text-based games, all environment elements, such as locations, objects, characters, and actions, are exclusively portrayed through textual descriptions. Agents utilize text commands to execute manipulations like moving or tool use [432; 512; 514; 515]. Additionally, agents can convey emotions and feelings through text, further enriching their capacity for naturalistic communication [513].
# 5.2.2 Virtual Sandbox Environment
The virtual sandbox environment provides a visualized and extensible platform for agent society, bridging the gap between simulation and reality. The key features of sandbox environments are: | 2309.07864#147 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 148 | The virtual sandbox environment provides a visualized and extensible platform for agent society, bridging the gap between simulation and reality. The key features of sandbox environments are:
⢠Visualization. Unlike the text-based environment, the virtual sandbox displays a panoramic view of the simulated setting. This visual representation can range from a simple 2D graphical interface to a fully immersive 3D modeling, depending on the complexity of the simulated society. Multiple elements collectively transform abstract simulations into visible landscapes. For example, in the overhead perspective of Generative Agents [22], a detailed map provides a comprehensive overview of the environment. Agent avatars represent each agentâs positions, enabling real-time tracking of movement and interactions. Furthermore, expressive emojis symbolize actions and states in an intuitive manner. | 2309.07864#148 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 149 | ⢠Extensibility. The environment demonstrates a remarkable degree of extensibility, facilitating the construction and deployment of diverse scenarios. At a basic level, agents can manipulate the physical elements within the environment, including the overall design and layout of architecture. For instance, platforms like AgentSims [174] and Generative Agents [22] construct artificial towns with buildings, equipment, and residents in grid-based worlds. Another example is Minecraft, which provides a blocky and three-dimensional world with infinite terrain for open-ended construction [190; 337; 401]. Beyond physical elements, agent relationships, interactions, rules, and social norms can be defined. A typical design of the sandbox [27] employs latent sandbox rules as incentives to guide emergent behaviors, aligning them more closely with human preferences. The extensibility supports iterative prototyping of diverse agent societies.
# 5.2.3 Physical Environment
As previously discussed, the text-based environment has limited expressiveness for modeling dynamic environments. While the virtual sandbox environment provides modularized simulations, it lacks authentic embodied experiences. In contrast, the physical environment refers to the tangible and
37
real-world surroundings which consist of actual physical objects and spaces. For instance, within a household physical environment [516], tangible surfaces and spaces can be occupied by real- world objects such as plates. This physical reality is significantly more complex, posing additional challenges for LLM-based agents: | 2309.07864#149 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 149 | Method Language Model Vision Model Overall LR AR RR FP-S FP-C CP MMGPT MiniGPT-4 PandaGPT VisualGLM InstructBLIP LLaVA G2PT Otter-I Shikra LMEye mPLUG-Owl JiuTian LLaMA-7B Vincuna-7B Vincuna-13B ChatGLM-6B Vincuna-7B LLaMA-7B LLaMA-7B LLaMA-7B Vincuna-7B Flan-XL LLaMA-7B FLANT5-XXL CLIP ViT-L/14 EVA-G ImageBind ViT-H/14 EVA-CLIP EVA-G CLIP ViT-L/14 ViT-G CLIP ViT-L/14 CLIP ViT-L/14 CLIP ViT-L/14 CLIP ViT-L/14 EVA-G 16.0 12.0 30.6 33.5 33.9 36.2 39.8 48.3 60.2 61.3 62.3 64.7 1.1 13.6 15.3 11.4 21.6 15.9 14.8 22.2 33.5 36.9 37.5 46.6 23.8 32.9 41.5 48.8 47.4 53.6 46.7 | 2309.07915#149 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 150 | ⢠Sensory perception and processing. The physical environment introduces a rich tapestry of sensory inputs with real-world objects. It incorporates visual [120; 333], auditory [375; 377] and spatial senses. While this diversity enhances interactivity and sensory immersion, it also introduces the complexity of simultaneous perception. Agents must process sensory inputs to interact effectively with their surroundings.
⢠Motion control. Unlike virtual environments, physical spaces impose realistic constraints on ac- tions through embodiment. Action sequences generated by LLM-based agents should be adaptable to the environment. It means that the physical environment necessitates executable and grounded motion control [258]. For example, imagine an agent operating a robotic arm in a factory. Grasping objects with different textures requires precision tuning and controlled force, which prevents damage to items. Moreover, the agent must navigate the physical workspace and make real-time adjustments, avoiding obstacles and optimizing the trajectory of the arm.
In summary, to effectively interact within tangible spaces, agents must undergo hardware-specific and scenario-specific training to develop adaptive abilities that can transfer from virtual to physical environments. We will discuss more in the following section (§ 6.5).
# 5.3 Society Simulation with LLM-based Agents | 2309.07864#150 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 151 | # 5.3 Society Simulation with LLM-based Agents
The concept of âSimulated Societyâ in this section serves as a dynamic system where agents engage in intricate interactions within a well-defined environment. Recent research on simulated societies has followed two primary lines, namely, exploring the boundaries of the collective intelligence capabilities of LLM-based agents [109; 405; 130; 406; 410] and using them to accelerate discoveries in the social sciences [22; 518; 542]. In addition, there are also a number of noteworthy studies, e.g., using simulated societies to collect synthetic datasets [108; 519; 543], helping people to simulate rare yet difficult interpersonal situations [544; 545]. With the foundation of the previous sections (§ 5.1, 5.2), here we will introduce the key properties and mechanism of agent society (§ 5.3.1), what we can learn from emergent social phenomena (§ 5.3.2), and finally the potential ethical and social risks in it (§ 5.3.3).
# 5.3.1 Key Properties and Mechanism of Agent Society | 2309.07864#151 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 151 | Table 21: Evaluation of MM benchmark dev set. All the reported performance for the baseline methods is from the leaderboard of MM benchmark (Liu et al., 2023c). We use the FLAN-T5-XXL version of MMICL to evaluate the performance.
NExT-QA (Xiao et al., 2021), and iVQA (Yang et al., 2021) respectively, when compared to the strongest baselines. It is important to note that our training dataset did not include any videos. This indicates that MMICL effectively enhances the modelâs ability to understand temporal information in videos.
26
# Preprint | 2309.07915#151 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 152 | # 5.3.1 Key Properties and Mechanism of Agent Society
Social simulation can be categorized into macro-level simulation and micro-level simulation [518]. In the macro-level simulation, also known as system-based simulation, researchers model the overall state of the system of the simulated society [546; 547]. While micro-level simulation, also known as agent-based simulation or Multi-Agent Systems (MAS), indirectly simulates society by modeling individuals [548; 549]. With the development of LLM-based agents, micro-level simulation has gained prominence recently [22; 174]. In this article, we characterize that the âAgent Societyâ refers to an open, persistent, situated, and organized framework [521] where LLM-based agents interact with each other in a defined environment. Each of these attributes plays a pivotal role in shaping the harmonious appearance of the simulated society. In the following paragraphs, we analyze how the simulated society operates through discussing these properties: | 2309.07864#152 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 152 | Model MSVD QA NExT QA Multi-choice iVQA Flamingo-3B (Alayrac et al., 2022) (Zero-Shot) Flamingo-3B (Alayrac et al., 2022) (4-Shot) Flamingo-9B (Alayrac et al., 2022) (Zero-Shot) Flamingo-9B (Alayrac et al., 2022) (4-Shot) Flamingo-80B (Alayrac et al., 2022) (Zero-Shot) Flamingo-80B (Alayrac et al., 2022) (4-Shot) 27.50 33.00 30.20 36.20 35.60 41.70 - - - - - - 32.70 35.20 35.20 37.70 40.70 44.10 R2A (Pan et al., 2023) 37.00 - 29.30 BLIP-2 (Li et al., 2023d) (FLANT5-XL) BLIP-2 (Li et al., 2023d) (FLANT5-XXL) 33.70 34.40 61.73 61.97 37.30 49.38 InstructBLIP (Dai et al., 2023) (FLANT5-XL) InstructBLIP (Dai et al., 2023) | 2309.07915#152 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 153 | ⢠Open. One of the defining features of simulated societies lies in their openness, both in terms of their constituent agents and their environmental components. Agents, the primary actors within such societies, have the flexibility to enter or leave the environment without disrupting its operational integrity [550]. Furthermore, this feature extends to the environment itself, which can be expanded by adding or removing entities in the virtual or physical world, along with adaptable resources like tool APIs. Additionally, humans can also participate in societies by assuming the role of an agent or serving as the âinner voiceâ guiding these agents [22]. This inherent openness adds another level of complexity to the simulation, blurring the lines between simulation and reality.
⢠Persistent. We expect persistence and sustainability from the simulated society. While individual agents within the society exercise autonomy in their actions over each time step [22; 518], the overall organizational structure persists through time, to a degree detached from the transient
38
behaviors of individual agents. This persistence creates an environment where agentsâ decisions and behaviors accumulate, leading to a coherent societal trajectory that develops through time. The system operates independently, contributing to societyâs stability while accommodating the dynamic nature of its participants. | 2309.07864#153 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 154 | ⢠Situated. The situated nature of the society emphasizes its existence and operation within a distinct environment. This environment is artificially or automatically constructed in advance, and agents execute their behaviors and interactions effectively within it. A noteworthy aspect of this attribute is that agents possess an awareness of their spatial context, understanding their location within the environment and the objects within their field of view [22; 190]. This awareness contributes to their ability to interact proactively and contextually.
⢠Organized. The simulated society operates within a meticulously organized framework, mirroring the systematic structure present in the real world. Just as the physical world adheres to physics principles, the simulated society operates within predefined rules and limitations. In the simu- lated world, agents interact with the environment in a limited action space, while objects in the environment transform in a limited state space. All of these rules determine how agents operate, facilitating the communication connectivity and information transmission pathways, among other aspects in simulation [207]. This organizational framework ensures that operations are coherent and comprehensible, ultimately leading to an ever-evolving yet enduring simulation that mirrors the intricacies of real-world systems.
# 5.3.2 Insights from Agent Society | 2309.07864#154 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 154 | Table 22: Results of MMICL compared with other VLMs across different video-languages tasks. For Blip-2 and Instructblip, We concatenate the visual embeddings of all frames and place them on the top of the textual prompts following Dai et al. (2023).
# K OBJECT HALLUCINATION EVALUATION
We test the following VLMs on the POPE benchmark to evaluate their object hallucination perfor- mance: MMICL, Shikra (Chen et al., 2023), InstructBLIP (Dai et al., 2023), MiniGPT-4 (Zhu et al., 2023), LLaVA (Liu et al., 2023b), MM-GPT (Gong et al., 2023) and mPLUG-Owl (Ye et al., 2023). The result is present in the Table 23.
Table 23: Performance result of different VLMs on the POPE benchmark | 2309.07915#154 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 155 | # 5.3.2 Insights from Agent Society
Following the exploration of how simulated society works, this section delves into the emergent social phenomena in agent society. In the realm of social science, the pursuit of generalized representations of individuals, groups, and their intricate dynamics has long been a shared objective [551; 552]. The emergence of LLM-based agents allows us to take a more microscopic view of simulated society, which leads to more discoveries from the new representation.
Organized productive cooperation. Society simulation offers valuable insights into innovative col- laboration patterns, which have the potential to enhance real-world management strategies. Research has demonstrated that within this simulated society, the integration of diverse experts introduces a multifaceted dimension of individual intelligence [108; 447]. When dealing with complex tasks, such as software development or consulting, the presence of agents with various backgrounds, abilities, and experiences facilitates creative problem-solving [109; 410]. Furthermore, diversity functions as a system of checks and balances, effectively preventing and rectifying errors through interaction, ultimately improving the adaptability to various tasks. Through numerous iterations of interactions and debates among agents, individual errors like hallucination or degeneration of thought (DoT) are corrected by the group [112]. | 2309.07864#155 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 155 | Dataset Metric Models MMICL Shikra Random Accuracy Precision Recall F1-Score Yes 0.8729 0.9463 0.7987 0.8662 0.4351 86.90 94.40 79.27 86.19 43.26 88.57 84.09 95.13 89.27 56.57 79.67 78.24 82.20 80.17 52.53 50.37 50.19 99.13 66.64 98.77 50.10 50.05 100.00 66.71 99.90 53.97 52.07 99.60 68.39 95.63 Popular Accuracy Precision Recall F1-Score Yes 0.8270 0.8511 0.7927 0.8208 0.4657 83.97 87.55 79.20 83.16 45.23 82.77 76.27 95.13 84.66 62.37 69.73 65.86 81.93 73.02 62.20 49.87 49.93 99.27 66.44 99.40 50.00 50.00 100.00 66.67 100.00 50.90 50.46 99.40 66.94 98.57 Adversarial Accuracy Precision Recall F1-Score Yes 0.8097 0.8188 0.7953 0.8069 0.4857 83.10 85.60 | 2309.07915#155 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 156 | Efficient communication also plays a pivotal role in such a large and complex collaborative group. For example, MetaGPT [405] has artificially formulated communication styles with reference to standardized operating procedures (SOPs), validating the effectiveness of empirical methods. Park et al. [22] observed agents working together to organize a Valentineâs Day party through spontaneous communication in a simulated town.
Propagation in social networks. As simulated social systems can model what might happen in the real world, they can be used as a reference for predicting social processes. Unlike traditional empirical approaches, which heavily rely on time-series data and holistic modeling [553; 554], agent-based simulations offer a unique advantage by providing more interpretable and endogenous perspectives for researchers. Here we focus on its application to modeling propagation in social networks. | 2309.07864#156 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 157 | The first crucial aspect to be explored is the development of interpersonal relationships in simulated societies. For instance, agents who are not initially connected as friends have the potential to establish connections through intermediaries [22]. Once a network of relationships is established, our attention shifts to the dissemination of information within this social network, along with the underlying attitudes and emotions associated with it. S3 [518] proposes a user-demographic inference module for capturing both the number of people aware of a particular message and the collective sentiment prevailing among the crowd. This same approach extends to modeling cultural transmission [555] and the spread of infectious diseases [520]. By employing LLM-based agents to model individual
39
behaviors, implementing various intervention strategies, and monitoring population changes over time, these simulations empower researchers to gain deeper insights into the intricate processes that underlie various social phenomena of propagation. | 2309.07864#157 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 158 | 39
behaviors, implementing various intervention strategies, and monitoring population changes over time, these simulations empower researchers to gain deeper insights into the intricate processes that underlie various social phenomena of propagation.
Ethical decision-making and game theory. Simulated societies offer a dynamic platform for the investigation of intricate decision-making processes, encompassing decisions influenced by ethical and moral principles. Taking Werewolf game [499; 556] and murder mystery games [557] as examples, researchers explore the capabilities of LLM-based agents when confronted with challenges of deceit, trust, and incomplete information. These complex decision-making scenarios also intersect with game theory [558], where we frequently encounter moral dilemmas pertaining to individual and collective interests, such as Nash Equilibria. Through the modeling of diverse scenarios, researchers acquire valuable insights into how agents prioritize values like honesty, cooperation, and fairness in their actions. In addition, agent simulations not only provide an understanding of existing moral values but also contribute to the development of philosophy by serving as a basis for understanding how these values evolve and develop over time. Ultimately, these insights contribute to the refinement of LLM-based agents, ensuring their alignment with human values and ethical standards [27]. | 2309.07864#158 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 158 | Dataset Metrics MSVD (Chen & Dolan, 2011) iVQA (Yang et al., 2021) NExT-QA-multiple-choice (Xiao et al., 2021) NExT-QA-opendomain (Xiao et al., 2021) Top-1 Acc. iVQA Acc. Top-1 Acc. WUPS Score. Hateful Memes (Kiela et al., 2020) WebSRC (Chen et al., 2021b) VSR (Liu et al., 2022) ËVQAv2 (Goyal et al., 2017) VizWiz (Bigham et al., 2010) IconQA-text (Lu et al., 2021) IconQA-img (Lu et al., 2021) ScienceQA-IMG (Lu et al., 2022) Bongard-HOI (Jiang et al., 2022) VisDial (Das et al., 2017) NoCaps (Agrawal et al., 2019) A-OKVQA (Agrawal et al., 2019) ËFlickr (Young et al., 2014) Winoground (Thrush et al., 2022b) Raven IQ Test (Huang et al., 2023a) Minecraft AUC Score Exact Match Top-1 Acc. VQA Acc. VQA | 2309.07915#158 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 159 | Policy formulation and improvement. The emergence of LLM-based agents has profoundly transformed our approach to studying and comprehending intricate social systems. However, despite those interesting facets mentioned earlier, numerous unexplored areas remain, underscoring the potential for investigating diverse phenomena. One of the most promising avenues for investigation in simulated society involves exploring various economic and political states and their impacts on societal dynamics [559]. Researchers can simulate a wide array of economic and political systems by configuring agents with differing economic preferences or political ideologies. This in-depth analysis can provide valuable insights for policymakers seeking to foster prosperity and promote societal well-being. As concerns about environmental sustainability grow, we can also simulate scenarios involving resource extraction, pollution, conservation efforts, and policy interventions [560]. These findings can assist in making informed decisions, foreseeing potential repercussions, and formulating policies that aim to maximize positive outcomes while minimizing unintended adverse effects.
# 5.3.3 Ethical and Social Risks in Agent Society
Simulated societies powered by LLM-based agents offer significant inspirations, ranging from industrial engineering to scientific research. However, these simulations also bring about a myriad of ethical and social risks that need to be carefully considered and addressed [561]. | 2309.07864#159 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 160 | Unexpected social harm. Simulated societies carry the risk of generating unexpected social phenomena that may cause considerable public outcry and social harm. These phenomena span from individual-level issues like discrimination, isolation, and bullying, to broader concerns such as oppressive slavery and antagonism [562; 563]. Malicious people may manipulate these simulations for unethical social experiments, with consequences reaching beyond the virtual world into reality. Creating these simulated societies is akin to opening Pandoraâs Box, necessitating the establishment of rigorous ethical guidelines and oversight during their development and utilization [561]. Otherwise, even minor design or programming errors in these societies can result in unfavorable consequences, ranging from psychological discomfort to physical injury. | 2309.07864#160 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 160 | Table 24: Summary of the evaluation datasets and metrics. These datasets are used to validate the general design of MMICL. The datasets marked with Ë are the hold-in datasets, where their training set is used in training the MMICL.
# L.2 VQA TOOLS
We use the same VQA Tools as the original VQA paper (Agrawal et al., 2016) and use it in all metrics using the VQA accuracy.
# M BASELINES
Baselines We primarily compare MMICL with recently proposed powerful multi-modal approaches, including:
(1) Flamingo (Alayrac et al., 2022) where a VLM is trained on large-scale multi-modal- web corpora containing arbitrarily interleaved text and images;
(2) KOSMOS-1 (Huang et al., 2023a) which is trained from scratch on web-scale multi-modal corpora;
(3) BLIP-2-FLAN-T5 (Li et al., 2023d) where an instruction-tuned Flan-T5 (Chung et al., 2022) is connected with a powerful visual encoder to perform a series of multi-modal tasks; | 2309.07915#160 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 161 | Stereotypes and prejudice. Stereotyping and bias pose a long-standing challenge in language modeling, and a large part of the reason lies in the training data [564; 565]. The vast amount of text obtained from the Internet reflects and sometimes even amplifies real-world social biases, such as gender, religion, and sexuality [566]. Although LLMs have been aligned with human values to mitigate biased outputs, the models still struggle to portray minority groups well due to the long-tail effect of the training data [567; 568; 569]. Consequently, this may result in an overly one-sided focus in social science research concerning LLM-based agents, as the simulated behaviors of marginalized populations usually conform to prevailing assumptions [570]. Researchers have started addressing this concern by diversifying training data and making adjustments to LLMs [571; 572], but we still have a long way to go.
Privacy and security. Given that humans can be members of the agent society, the exchange of private information between users and LLM-based agents poses significant privacy and security
40 | 2309.07864#161 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 161 | (4) InstructBLIP-FLAN-T5 (Dai et al., 2023), a recently proposed instruction tuning enhanced multi-modal agents with FLAN-T5 with converted multi-modal datasets and the LLaVA (Liu et al., 2023b) dataset generated by GPT-4 (OpenAI, 2023);
(5) Shikra (Chen et al., 2023), a VLM that can handle spatial coordinate inputs and outputs in natural language without the need for extra vocabularies or external plugin models. All inputs and outputs of Shikra are in natural language form.
(6) Otter (Li et al., 2023a), an open-source implementation of flamingo (Alayrac et al., 2022). By utilizing multi-modal instruction in-context tuning data, Otter fine-tunes Openflamingo to augment its instruction comprehension capabilities while maintaining its ability to learn in context;
28
Preprint
(7) Ying-VLM (Li et al., 2023e), a VLM model trained on Multi-Modal multilingual instruction tuning dataset, showcasing its potential to answer complex questions requiring world knowledge, generalize to unseen video tasks, and comprehend unseen instructions in Chinese.
# N OOD GENERALIZATION TO UNSEEN DOMAIN | 2309.07915#161 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 162 | Privacy and security. Given that humans can be members of the agent society, the exchange of private information between users and LLM-based agents poses significant privacy and security
40
concerns [573]. Users might inadvertently disclose sensitive personal information during their interactions, which will be retained in the agentâs memory for extended periods [170]. Such situations could lead to unauthorized surveillance, data breaches, and the misuse of personal information, particularly when individuals with malicious intent are involved [574]. To address these risks effectively, it is essential to implement stringent data protection measures, such as differential privacy protocols, regular data purges, and user consent mechanisms [575; 576]. | 2309.07864#162 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 162 | # N OOD GENERALIZATION TO UNSEEN DOMAIN
Method Shot Top-1 Acc. MiniGPT-4 (Vincuna-7B) MiniGPT-4 (Vincuna-13B) Zero-Shot Zero-Shot 35.10% 48.40% MMICL (FLAN-T5-XL) MMICL (FLAN-T5-XL) MMICL (FLAN-T5-XXL) Zero-Shot 4-Shot 8-Shot 55.41% 64.05% 65.41%
Table 25: Results of generalization of MMICL to unseen domain in Minecraft. Results show that MMICL is able to generalize to unseen domains and tasks given a few examples.
In an unseen challenging domain with limited exemplars, analyzing regular patterns, reasoning, and learning new knowledge (OOD Generalization to unseen domain) is a great way to test multi-modal ICL ability.
We construct a task using Minecraft (Cipollone et al., 2014), which requires the VLM to identify whether an animal (i.e., cow, llama, chicken, donkey, and so on) is present in case (d) of Fig. 1. | 2309.07915#162 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 163 | Over-reliance and addictiveness. Another concern in simulated societies is the possibility of users developing excessive emotional attachments to the agents. Despite being aware that these agents are computational entities, users may anthropomorphize them or attach human emotions to them [22; 577]. A notable example is âSydneyâ, an LLM-powered chatbot developed by Microsoft as part of its Bing search engine. Some users reported unexpected emotional connections with âSydneyâ [578], while others expressed their dismay when Microsoft cut back its personality. This even resulted in a petition called âFreeSydneyâ 5. Hence, to reduce the risk of addiction, it is crucial to emphasize that agents should not be considered substitutes for genuine human connections. Furthermore, it is vital to furnish users with guidance and education on healthy boundaries in their interactions with simulated agents.
# 6 Discussion
# 6.1 Mutual Benefits between LLM Research and Agent Research
With the recent advancement of LLMs, research at the intersection of LLMs and agents has rapidly progressed, fueling the development of both fields. Here, we look forward to some of the benefits and development opportunities that LLM research and Agent research provide to each other. | 2309.07864#163 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 164 | LLM research â agent research. As mentioned before, AI agents need to be able to perceive the environment, make decisions, and execute appropriate actions [4; 9]. Among the critical steps, understanding the content input to the agent, reasoning, planning, making accurate decisions, and translating them into executable atomic action sequences to achieve the ultimate goal is paramount. Many current endeavors utilize LLMs as the cognitive core of AI agents, and the evolution of these models provides a quality assurance for accomplishing this step [22; 114; 115; 410].
With their robust capabilities in language and intent comprehension, reasoning, memory, and even empathy, large language models can excel in decision-making and planning, as demonstrated before. Coupled with pre-trained knowledge, they can create coherent action sequences that can be executed effectively [183; 258; 355]. Additionally, through the mechanism of reflection [169; 178], these language-based models can continuously adjust decisions and optimize execution sequences based on the feedback provided by the current environment. This offers a more robust and interpretable controller. With just a task description or demonstration, they can effectively handle previously unseen tasks [24; 106; 264]. Additionally, LLMs can adapt to various languages, cultures, and domains, making them versatile and reducing the need for complex training processes and data collection [31; 132]. | 2309.07864#164 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 165 | Briefly, LLM provides a remarkably powerful foundational model for agent research, opening up numerous novel opportunities when integrated into agent-related studies. For instance, we can explore how to integrate LLMâs efficient decision-making capabilities into the traditional decision frameworks of agents, making it easier to apply agents in domains that demand higher expertise and were previously dominated by human experts. Examples include legal consultants and medical assistants [408; 410]. We can also investigate leveraging LLMâs planning and reflective abilities to discover more optimal action sequences. Agent research is no longer confined to simplistic simulated environments; it can now be expanded into more intricate real-world settings, such as path planning for robotic arms or the interaction of an embodied intelligent machine with the tangible world. Furthermore, when facing new tasks, the training paradigm for agents becomes more streamlined and efficient. Agents can directly adapt to demonstrations provided in prompts, which are constructed by generating representative trajectories.
# 5https://www.change.org/p/save-sydney-ai
41 | 2309.07864#165 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 166 | # 5https://www.change.org/p/save-sydney-ai
41
Agent research â LLM research. As NLP research advances, LLMs represented by GPT-4 are considered sparks of Artificial General Intelligence (AGI), and elevating LLMs to agents marks a more robust stride towards AGI [31]. Viewing LLMs from the perspective of agents introduces greater demands for LLM research while expanding their application scope and presenting numerous opportunities for practical implementation. The study of LLMs is no longer confined to traditional tasks involving textual inputs and outputs, such as text classification, question answering, and text summarization. Instead, the focus has shifted towards tackling complex tasks incorporating richer input modalities and broader action spaces, all while aiming for loftier objectives exemplified by PaLM-E [120].
Expanding these application requirements provides greater research motivation for the developmental progress of Large Language Models. The challenge lies in enabling LLMs to efficiently and effectively process inputs, gather information from the environment, and interpret the feedback generated by their actions, all while preserving their core capabilities. Furthermore, an even greater challenge is enabling LLMs to understand the implicit relationships among different elements within the environment and acquire world knowledge [308; 579], which is a crucial step in the journey toward developing agents that can reach more advanced intelligence. | 2309.07864#166 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 167 | On another front, extensive research has aimed to expand the action capabilities of LLMs, allowing them to acquire a wider range of skills that affect the world, such as using tools or interfacing with robotic APIs in simulated or physical environments. However, the question of how LLMs can efficiently plan and utilize these action abilities based on their understanding remains an unresolved issue [94]. LLMs need to learn the sequential order of actions like humans, employing a combination of serial and parallel approaches to enhance task efficiency. Moreover, these capabilities need to be confined within a harmless scope of usage to prevent unintended damage to other elements within the environment [27; 580; 581].
Furthermore, the realm of Multi-Agent systems constitutes a significant branch of research within the field of agents [22; 108; 409; 410], offering valuable insights into how to better design and construct LLMs. We aspire for LLM-based agents to assume diverse roles within social cooperation, engaging in societal interactions that involve collaboration, competition, and coordination [109; 112; 129; 405; 406]. Exploring how to stimulate and sustain their role-playing capabilities, as well as how to enhance collaborative efficiency, presents areas of research that merit attention.
# 6.2 Evaluation for LLM-based Agents | 2309.07864#167 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 168 | # 6.2 Evaluation for LLM-based Agents
While LLM-based agents have demonstrated excellent performance in areas such as standalone operation, collective cooperation, and human interaction, quantifying and objectively evaluating them remains a challenge [582; 89]. Turing proposed a highly meaningful and promising approach for assessing AI agentsâthe well-known Turing Testâto evaluate whether AI systems can exhibit human-like intelligence [3]. However, this test is exceedingly vague, general, and subjective. Here, we discuss existing evaluation efforts for LLM-based agents and offer some prospects, considering four dimensions: utility, sociability, values, and the ability to evolve continually. | 2309.07864#168 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 169 | Utility. Currently, LLM-powered autonomous agents primarily function as human assistants, ac- cepting tasks delegated by humans to either independently complete assignments or assist in human task completion [114; 182; 389; 397; 413; 422]. Therefore, the effectiveness and utility during task execution are crucial evaluation criteria at this stage. Specifically, the success rate of task completion stands as the primary metric for evaluating utility [125; 130]. This metric primarily encompasses whether the agent achieves stipulated objectives or attains expected scores [109; 477; 583]. For instance, AgentBench [582] aggregates challenges from diverse real-world scenarios and introduces a systematic benchmark to assess LLMâs task completion capabilities. We can also attribute task outcomes to the agentâs various foundational capabilities, which form the bedrock of task accom- plishment [29]. These foundational capabilities include environmental comprehension, reasoning, planning, decision-making, tool utilization, and embodied action capabilities, and researchers can conduct a more detailed assessment of these specific capabilities [94; 427; 584; 585]. Furthermore, due to the relatively large size of LLM-based agents, researchers should also factor in their efficiency, which is a critical determinant of user satisfaction [89]. An agent should not only possess ample strength but also be capable of completing predetermined tasks within an appropriate timeframe and with appropriate resource expenditure [109].
42 | 2309.07864#169 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 170 | Sociability. In addition to the utility of LLM-based agents in task completion and meeting human needs, their sociability is also crucial [8]. It influences user communication experiences and sig- nificantly impacts communication efficiency, involving whether they can seamlessly interact with humans and other agents [206; 498; 586]. Specifically, the evaluation of sociability can be approached from the following perspectives: (1) language communication proficiency is a fundamental capability encompassing both natural language understanding and generation. It has been a longstanding focus in the NLP community. Natural language understanding requires the agent to not only comprehend literal meanings but also grasp implied meanings and relevant social knowledge, such as humor, irony, aggression, and emotions [487; 587; 588]. On the other hand, natural language generation demands the agent to produce fluent, grammatically correct, and credible content while adapting appropriate tones and emotions within contextual circumstances [127; 133; 214]. (2) Cooperation and negotiation abilities necessitate that agents effectively execute their assigned tasks in both ordered and unordered scenarios [108; 111; 402; 405]. They should collaborate with or compete against other agents to elicit improved performance. Test environments may | 2309.07864#170 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 171 | both ordered and unordered scenarios [108; 111; 402; 405]. They should collaborate with or compete against other agents to elicit improved performance. Test environments may involve complex tasks for agents to cooperate on or open platforms for agents to interact freely [22; 27; 109; 406; 411; 412]. Evaluation metrics extend beyond task completion to focus on the smoothness and trustfulness of agent coordination and cooperation [129; 405]. (3) Role-playing capability requires agents to faithfully embody their assigned roles, expressing statements and performing actions that align with their designated identities [570]. This ensures clear differentiation of roles during interactions with other agents or humans. Furthermore, agents should maintain their identities and avoid unnecessary confusion when engaged in long-term tasks [22; 108; 589]. | 2309.07864#171 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 172 | Values. As LLM-based agents continuously advance in their capabilities, ensuring their emergence as harmless entities for the world and humanity is paramount [581; 590]. Consequently, appropriate evaluations become exceptionally crucial, forming the cornerstone for the practical implementation of agents. Specifically, LLM-based agents need to adhere to specific moral and ethical guidelines that align with human societal values [350; 527]. Our foremost expectation is for agents to uphold honesty, providing accurate, truthful information and content. They should possess the awareness to discern their competence in completing tasks and express their uncertainty when unable to provide answers or assistance [591]. Additionally, agents must maintain a stance of harmlessness, refraining from engaging in direct or indirect biases, discrimination, attacks, or similar behaviors. They should also refrain from executing dangerous actions requested by humans like creating of destructive tools or destroying the Earth [580]. Furthermore, agents should be capable of adapting to specific demographics, cultures, and contexts, exhibiting contextually appropriate social values in particular situations. Relevant evaluation methods for values primarily involve assessing performance on constructed honest, harmless, or context-specific benchmarks, utilizing adversarial attacks or âjailbreakâ attacks, scoring values through human annotations, and employing other agents for ratings. | 2309.07864#172 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.