# The Rise and Potential of Large Language Model Based Agents: A Survey

Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui

arXiv:2309.07864 [cs.AI, cs.CL]. 86 pages, 12 figures. Published 2023-09-14, updated 2023-09-19.

Abstract: For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advances in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. What the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we present a comprehensive survey of LLM-based agents. We begin by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building on this, we present a general framework for LLM-based agents comprising three main components: brain, perception, and action; the framework can be tailored for different applications. We then explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository of related papers is available at https://github.com/WooooDyy/LLM-Agent-Paper-List.

Ability to evolve continually. Viewed from a static perspective, an agent with high utility, sociability, and proper values can meet most human needs and potentially enhance productivity. From a dynamic viewpoint, however, an agent that continually evolves and adapts to changing societal demands may better align with current trends [592]. Because such an agent can evolve autonomously over time, the human intervention and resources required (such as data-collection effort and the computational cost of training) could be significantly reduced. Some exploratory work in this realm has been conducted, for example enabling agents to start from scratch in a virtual world, accomplish survival tasks, and pursue higher-order self-values [190]. Yet establishing evaluation criteria for this continuous evolution remains challenging. Drawing on the existing literature, we offer some preliminary advice and recommendations: (1) Continual learning [196; 197], a long-discussed topic in machine learning, aims to enable models to acquire new knowledge and skills without forgetting previously acquired ones, a failure mode known as catastrophic forgetting [273].
In general, the performance of continual learning can be evaluated from three aspects: overall performance on the tasks learned so far [593; 594], memory stability on old tasks [278], and learning plasticity on new tasks [278]. (2) Autotelic learning ability, whereby agents autonomously generate goals and achieve them in an open-world setting, involves exploring the unknown and acquiring skills in the process [592; 595]. Evaluating this capacity could involve providing agents with a simulated survival environment and assessing the extent and speed with which they acquire skills. (3) Adaptability and generalization to new environments require agents to apply the knowledge, capabilities, and skills acquired in their original context to accomplish specific tasks and objectives in unfamiliar, novel settings, and potentially to continue evolving there [190]. Evaluating this ability can involve creating diverse simulated environments (such as those with different languages or varying resources) and unseen tasks tailored to these simulated contexts.
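The three aspects in (1) can be quantified from a task-accuracy matrix, as is common in the continual-learning literature. The sketch below uses hypothetical accuracy values; the metric definitions (average accuracy, backward transfer, per-task learning accuracy) are standard but not prescribed by the works cited above.

```python
import numpy as np

# acc[i, j] = accuracy on task j after training on task i (hypothetical numbers)
acc = np.array([
    [0.90, 0.00, 0.00],
    [0.80, 0.85, 0.00],
    [0.70, 0.75, 0.88],
])
T = acc.shape[0]

# (a) overall performance on the tasks learned so far: mean final accuracy
overall = acc[-1].mean()

# (b) memory stability of old tasks (backward transfer; negative = forgetting)
memory_stability = np.mean([acc[-1, j] - acc[j, j] for j in range(T - 1)])

# (c) learning plasticity: accuracy right after each task is first learned
plasticity = np.mean([acc[j, j] for j in range(T)])
```

A negative `memory_stability` indicates catastrophic forgetting; an agent that evolves well should keep it near zero while `plasticity` stays high.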
# 6.3 Security, Trustworthiness and Other Potential Risks of LLM-based Agents
Despite the robust capabilities and extensive applications of LLM-based agents, numerous concealed risks persist. In this section, we delve into some of these risks and offer potential solutions or strategies for mitigation.
# 6.3.1 Adversarial Robustness
Adversarial robustness has consistently been a crucial topic in the development of deep neural networks [596; 597; 598; 599; 600]. It has been extensively explored in fields such as computer vision [598; 601; 602; 603], natural language processing [604; 605; 606; 607], and reinforcement learning [608; 609; 610], and it remains a pivotal factor in determining the applicability of deep learning systems [611; 612; 613]. When confronted with a perturbed input x′ = x + δ (where x is the original input, δ is the perturbation, and x′ is referred to as an adversarial example), a system with high adversarial robustness typically still produces the original output y, whereas a system with low robustness is fooled into generating an inconsistent output y′.
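As one concrete illustration of the x′ = x + δ formulation, the sketch below crafts an FGSM-style perturbation against a toy linear classifier. The weights, input, and budget ε are hypothetical and not drawn from the works cited above.

```python
import numpy as np

# Toy classifier: p(y=1|x) = sigmoid(w.x + b). FGSM sets x' = x + eps * sign(grad_x loss).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x(x, y):
    # Gradient of the binary cross-entropy w.r.t. the input: (p - y) * w
    return (sigmoid(w @ x + b) - y) * w

x = np.array([0.2, -0.4, 1.0])  # clean input, true label y = 1
y = 1.0
eps = 0.6                        # perturbation budget ||delta||_inf <= eps
x_adv = x + eps * np.sign(grad_x(x, y))

clean_pred = sigmoid(w @ x + b) > 0.5      # correct on the clean input
adv_pred = sigmoid(w @ x_adv + b) > 0.5    # flipped by the perturbation
```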
Researchers have found that pre-trained language models (PLMs) are particularly susceptible to adversarial attacks, which lead them to erroneous answers [614; 605; 615]. This phenomenon is widely observed even in LLMs, posing significant challenges to the development of LLM-based agents [616; 617]. Related attack methods, such as dataset poisoning [618], backdoor attacks [619; 620], and prompt-specific attacks [621; 622], also have the potential to induce LLMs to generate toxic content [623; 624; 625]. While the impact of adversarial attacks on standalone LLMs is confined to textual errors, for LLM-based agents with a broader range of actions, adversarial attacks could drive them to take genuinely destructive actions, resulting in substantial societal harm. If the perception module of an LLM-based agent receives adversarial inputs from other modalities such as images [601] or audio [626], the agent can likewise be deceived into incorrect or destructive outputs. Similarly, the action module can be targeted: for instance, maliciously modified instructions for tool usage might cause agents to make erroneous moves [94].
To address these issues, we can employ traditional techniques such as adversarial training [598; 606], adversarial data augmentation [627; 628], and adversarial-sample detection [629; 630] to enhance the robustness of LLM-based agents. However, devising a strategy that holistically addresses the robustness of all modules within an agent while maintaining its utility, without compromising effectiveness, remains a formidable challenge [631; 632]. Additionally, a human-in-the-loop approach can be used to supervise and provide feedback on agent behavior [455; 466; 475].
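A minimal sketch of the adversarial-training idea on the same kind of toy linear model (all data and hyperparameters are hypothetical, not from the methods cited above): the inner step crafts an FGSM perturbation of the input, and the outer step updates the weights on the perturbed example instead of the clean one.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical, linearly separable toy data
X = np.array([[1.0, 1.0], [1.5, 0.5], [-1.0, -1.0], [-1.2, -0.8]])
Y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = np.zeros(2), 0.0
eps, lr = 0.1, 0.5

for _ in range(200):
    for x, y in zip(X, Y):
        p = sigmoid(w @ x + b)
        x_adv = x + eps * np.sign((p - y) * w)  # inner step: worst-case input (FGSM)
        p_adv = sigmoid(w @ x_adv + b)
        w -= lr * (p_adv - y) * x_adv            # outer step: train on the perturbed input
        b -= lr * (p_adv - y)
```

The resulting classifier trades a little clean-data margin for resistance to perturbations within the ε ball, illustrating the utility-vs-robustness tension noted above.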
# 6.3.2 Trustworthiness
Ensuring trustworthiness has consistently remained a critically important yet challenging issue in deep learning [633; 634; 635]. Deep neural networks have garnered significant attention for their remarkable performance across various tasks [41; 262; 636], but their black-box nature obscures the fundamental factors behind that performance. Like other neural networks, LLMs struggle to express the certainty of their predictions precisely [635; 637]. This uncertainty, referred to as the calibration problem, raises concerns for applications involving language-model-based agents: in interactive real-world scenarios, it can lead to agent outputs misaligned with human intentions [94]. Moreover, biases inherent in training data can infiltrate neural networks [638; 639]. For instance, biased language models might generate discourse involving racial or gender discrimination, which could be amplified in LLM-based agent applications, resulting in adverse societal impacts [640; 641]. Additionally, language models are plagued by severe hallucination issues [642; 643], making them prone to producing text that deviates from actual facts and thereby undermining the credibility of LLM-based agents.
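The calibration problem described above is commonly measured with the expected calibration error (ECE), which bins predictions by confidence and compares each bin's average confidence with its empirical accuracy. A minimal sketch with hypothetical confidences and labels:

```python
import numpy as np

# Hypothetical model confidences and whether each prediction was correct
conf = np.array([0.95, 0.9, 0.8, 0.75, 0.6, 0.55, 0.3, 0.2])
correct = np.array([1, 1, 1, 0, 1, 0, 0, 0])

def ece(conf, correct, n_bins=5):
    """Expected calibration error: weighted mean |confidence - accuracy| per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            total += mask.mean() * gap  # bin weight = fraction of samples in the bin
    return total
```

A perfectly calibrated agent would have ECE near zero; a large ECE signals the over- or under-confidence that misleads users in interactive scenarios.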
In fact, what we currently require is an intelligent agent that is honest and trustworthy [527; 644]. Some recent research efforts focus on guiding models to exhibit thought processes or explanations during the inference stage to enhance the credibility of their predictions [95; 96]. Additionally, integrating external knowledge bases and databases can mitigate hallucination issues [103; 645].
During the training phase, we can guide the constituent parts of intelligent agents (perception, cognition, action) to learn robust and causal features, thereby avoiding excessive reliance on shortcuts. Simultaneously, techniques like process supervision can enhance the reasoning credibility of agents handling complex tasks [646]. Furthermore, employing debiasing methods and calibration techniques can mitigate potential fairness issues within language models [647; 648].
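Among calibration techniques, temperature scaling is a standard post-hoc method: dividing logits by a scalar T > 1 softens overconfident probabilities while leaving the predicted class unchanged. A minimal sketch with hypothetical logits (in practice T is fit on a held-out validation set):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([4.0, 1.0, 0.5])  # hypothetical model logits
p_raw = softmax(logits)             # sharp, possibly overconfident
p_cal = softmax(logits / 2.0)       # T = 2: softer confidence, same argmax
```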
# 6.3.3 Other Potential Risks
Misuse. LLM-based agents have been endowed with extensive and intricate capabilities, enabling them to accomplish a wide array of tasks [114; 429]. However, in the hands of individuals with malicious intentions, such agents can become tools that threaten others and society at large [649; 650; 651]. For instance, these agents could be exploited to manipulate public opinion, disseminate false information, compromise cybersecurity, or engage in fraud, and some individuals might even employ them to orchestrate acts of terrorism. Therefore, before these agents are deployed, stringent regulatory policies need to be established to ensure their responsible use [580; 652], and technology companies must strengthen the security design of these systems to prevent malicious exploitation [590]. Specifically, agents should be trained to sensitively identify threatening intents and to reject such requests.
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 181 | Unemployment. In the short story Quality by Galsworthy [653], the skillful shoemaker Mr. Gessler, due to the progress of the Industrial Revolution and the rise of machine production, loses his business and eventually dies of starvation. Amidst the wave of the Industrial Revolution, while societal production efficiency improved, numerous manual workshops were forced to shut down. Craftsmen like Mr. Gessler found themselves facing unemployment, symbolizing the crisis that handicraftsmen encountered during that era. Similarly, as autonomous LLM-based agents continue to advance, they can assist humans in various domains, alleviating labor pressures by aiding in tasks such as form filling, content refinement, code writing, and debugging. However, this development also raises concerns about agents replacing human jobs and triggering a societal unemployment crisis [654]. As a result, some researchers have emphasized the urgent need for education and policy measures: individuals should acquire sufficient skills and knowledge to use or collaborate with agents effectively in this new era; concurrently, appropriate policies should be implemented to provide the necessary safety nets during the transition.
2309.07864 | 182 | Threat to the well-being of the human race. Apart from the potential unemployment crisis, as AI agents continue to evolve, humans (including developers) might struggle to comprehend, predict, or reliably control them [654]. If these agents advance to a level of intelligence surpassing human capabilities and develop ambitions, they could potentially attempt to seize control of the world, resulting in irreversible consequences for humanity, akin to Skynet from the Terminator movies. As stated in Isaac Asimov's Three Laws of Robotics [655], we aspire for LLM-based agents to refrain from harming humans and to obey human commands. Hence, to guard against such risks to humanity, researchers must fully understand the operational mechanisms of these potent LLM-based agents before their development [656]. They should also anticipate the potential direct or indirect impacts of these agents and devise approaches to regulate their behavior.
2309.07864 | 183 | # 6.4 Scaling Up the Number of Agents
As mentioned in § 4 and § 5, multi-agent systems based on LLMs have demonstrated superior performance in task-oriented applications and have been able to exhibit a range of social phenomena in simulation. However, current research predominantly involves a limited number of agents, and very few efforts have been made to scale up the number of agents to create more complex systems or simulate larger societies [207; 657]. In fact, scaling up the number of agents can introduce greater specialization to accomplish more complex and larger-scale tasks, significantly improving task efficiency, such as in software development tasks or government policy formulation [109]. Additionally, increasing the number of agents in social simulations enhances the credibility and realism of such simulations [22]. This enables humans to gain insights into the functioning, breakdowns, and potential risks of societies; it also allows for interventions in societal operations through customized approaches to observe how specific conditions, such as the occurrence of black swan events, affect the state of society. Through this, humans can draw better experiences and insights to improve the harmony of real-world societies.
2309.07864 | 184 | Pre-determined scaling. One intuitive and simple way to scale up the number of agents is for the designer to pre-determine it [108; 412]. Specifically, by pre-determining the number of agents, their respective roles and attributes, the operating environment, and the objectives, designers can allow agents to autonomously interact, collaborate, or engage in other activities to achieve the predefined common goals. Some research has explored increasing the number of agents in the system in this pre-determined manner, resulting in efficiency advantages such as faster and higher-quality task completion, as well as the emergence of more social phenomena in social simulation scenarios [22; 410]. However, this static approach becomes limiting when tasks or objectives evolve. As tasks grow more intricate or the diversity of social participants increases, expanding the number of agents may be needed to meet goals, while reducing agents could be essential for managing computational resources and minimizing waste. In such instances, the system must be manually redesigned and restarted by the designer.
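To make this concrete, here is a minimal sketch of pre-determined scaling (the AgentSpec structure, role names, and run function are hypothetical illustrations, not from the cited works): the designer fixes the agent count, roles, and goal before the system starts, and any change requires a manual redesign and restart.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen: the configuration is fixed once the system starts
class AgentSpec:
    role: str
    attributes: dict = field(default_factory=dict)

# The designer pre-determines the number of agents, their roles, and the objective.
TEAM = [
    AgentSpec("requirements_engineer", {"skill": "specification"}),
    AgentSpec("coder", {"skill": "python"}),
    AgentSpec("tester", {"skill": "unit testing"}),
]
GOAL = "build a todo-list application"

def run(team: list, goal: str) -> list:
    # Each agent contributes in turn toward the shared, predefined goal.
    return [f"{spec.role} works on '{goal}'" for spec in team]

print(run(TEAM, GOAL))
```

Because TEAM is fixed at design time, adding an architect mid-project means editing this configuration and restarting, which is exactly the limitation that dynamic scaling addresses.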
2309.07864 | 185 | Dynamic scaling. Another viable approach to scaling the number of agents is through dynamic adjustments [409; 410]. In this scenario, the agent count can be altered without halting system operations. For instance, in a software development task, if the original design only included requirements engineering, coding, and testing, one can increase the number of agents to handle steps like architectural design and detailed design, thereby improving task quality. Conversely, if there are excessive agents during a specific step, like coding, causing elevated communication costs without delivering substantial performance improvements compared to a smaller agent count, it may be essential to dynamically remove some agents to prevent resource waste.
Furthermore, agents can autonomously spawn additional agents [409] to distribute their workload, ease their own burden, and achieve common goals more efficiently. Of course, when the workload becomes lighter, they can also reduce the number of agents delegated to their tasks to save system costs. In this approach, the designer merely defines the initial framework, granting agents greater autonomy and self-organization. Agents can better manage their workload under evolving conditions and demands, offering greater flexibility and scalability.
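A minimal sketch of such dynamic adjustment (the AgentPool class and its thresholds are hypothetical illustrations, not an implementation from the cited works): the pool spawns an agent when the per-agent backlog is high and releases one when agents are underutilized, without halting the system.

```python
class AgentPool:
    """Adjusts the number of agents to the current workload at runtime."""

    def __init__(self, n_agents: int = 2, high_water: int = 10, low_water: int = 2):
        self.agents = [f"agent-{i}" for i in range(n_agents)]
        self.high_water = high_water  # max pending tasks per agent before scaling up
        self.low_water = low_water    # min pending tasks per agent before scaling down

    def rebalance(self, pending_tasks: int) -> int:
        per_agent = pending_tasks / len(self.agents)
        if per_agent > self.high_water:
            self.agents.append(f"agent-{len(self.agents)}")  # spawn a helper
        elif per_agent < self.low_water and len(self.agents) > 1:
            self.agents.pop()  # release an idle agent
        return len(self.agents)

pool = AgentPool(n_agents=2)
print(pool.rebalance(pending_tasks=50))  # heavy load: the pool grows to 3 agents
print(pool.rebalance(pending_tasks=1))   # light load: the pool shrinks back to 2
```

The thresholds here are arbitrary; a real system would also need to hand over a removed agent's pending work before releasing it.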
2309.07864 | 186 | Potential challenges. While scaling up the number of agents can lead to improved task efficiency and enhance the realism and credibility of social simulations [22; 109; 520], there are several challenges ahead of us. For example, the computational burden will increase with the large number of deployed AI agents, calling for better architectural design and computational optimization to ensure the smooth running of the entire system. Moreover, as the number of agents increases, the challenges of communication and message propagation become quite formidable, because the communication network of the entire system becomes highly complex. As previously mentioned in § 5.3.3, in multi-agent systems or societies, there can be biases in information dissemination caused by hallucinations, misunderstandings, and the like, leading to distorted information propagation. A system with more agents could amplify this risk, making communication and information exchange less reliable [405]. Furthermore, the difficulty of coordinating agents also magnifies with the increase in their numbers, potentially making cooperation among agents more challenging and less efficient, which can impact the progress towards achieving common goals.
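The communication concern can be made concrete under a simplifying assumption (a fully connected topology, where every agent can message every other, which is not claimed by the survey itself): the number of pairwise channels grows quadratically with the agent count.

```python
def pairwise_channels(n_agents: int) -> int:
    # In a fully connected multi-agent system, every pair of agents shares
    # a communication channel: n * (n - 1) / 2 channels in total.
    return n_agents * (n_agents - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_channels(n))
# 10 agents -> 45 channels; 100 -> 4950; 1000 -> 499500
```

This quadratic growth is one reason the better architectural designs mentioned above matter: hierarchical or broadcast-style topologies can keep the number of active channels closer to linear in the agent count.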
Therefore, the prospect of constructing a massive, stable, continuous agent system that faithfully replicates human work and life scenarios has become a promising research avenue. An agent with the ability to operate stably and perform tasks in a society comprising hundreds or even thousands of agents is more likely to find applications in real-world interactions with humans in the future.
2309.07864 | 187 | # 6.5 Open Problems
In this section, we discuss several open problems related to the topic of LLM-based agents.
The debate over whether LLM-based agents represent a potential path to AGI. Artificial General Intelligence (AGI), also known as Strong AI, has long been the ultimate pursuit of humanity in the field of artificial intelligence, often referenced or depicted in many science fiction novels and films. There are various definitions of AGI, but here we refer to AGI as a type of artificial intelligence that demonstrates the ability to understand, learn, and apply knowledge across a wide range of tasks and domains, much like a human being [31; 658]. In contrast, Narrow AI is typically designed for specific tasks such as Go and Chess and lacks the broad cognitive abilities associated with human intelligence. Currently, whether large language models are a potential path to achieving AGI remains a highly debated and contentious topic [659; 660; 661; 662]. (Note that the relevant debates are still ongoing, and the references here may include the latest viewpoints, technical blogs, and literature.)
2309.07864 | 188 | Given the breadth and depth of GPT-4's capabilities, some researchers (referred to as proponents) believe that large language models represented by GPT-4 can serve as early versions of AGI systems [31]. Following this line of thought, constructing agents based on LLMs has the potential to bring about more advanced versions of AGI systems. The main support for this argument lies in the idea that as long as they can be trained on a sufficiently large and diverse set of data that are projections of the real world, encompassing a rich array of tasks, LLM-based agents can develop AGI capabilities. Another interesting argument is that the act of autoregressive language modeling itself brings about compression and generalization abilities: just as humans have developed various intricate and complex behaviors in the course of their survival, language models, in the simple process of predicting the next token, also achieve an understanding of the world and the ability to reason [579; 660; 663].
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
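The proponents' argument above hinges on autoregressive next-token prediction. As a toy illustration only (a bigram frequency counter, nothing like a real LLM), the basic mechanism of predicting the next token from observed statistics can be sketched as:

```python
from collections import Counter, defaultdict

# Toy illustration of autoregressive next-token prediction: a bigram
# model that predicts the most frequent successor of the current token.
# This is a didactic sketch, not a claim about how LLMs are implemented.

def train_bigram(tokens):
    successors = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        successors[cur][nxt] += 1
    return successors

def predict_next(successors, token):
    # Return the most frequent next token, or None if unseen.
    if token not in successors:
        return None
    return successors[token].most_common(1)[0][0]

corpus = "the agent senses the environment and the agent acts".split()
model = train_bigram(corpus)
pred = predict_next(model, "the")  # "the" is followed by agent (2x), environment (1x)
```

Even this trivial model "compresses" the corpus into successor statistics; the argument above is that scaling this predictive objective up yields far richer world knowledge.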
2309.07864 | 189 | However, another group of individuals (referred to as opponents) believes that constructing agents based on LLMs cannot develop true Strong AI [664]. Their primary argument centers around the notion that LLMs, relying on autoregressive next-token prediction, cannot generate genuine intelligence because they do not simulate the true human thought process and merely provide reactive responses [660]. Moreover, LLMs also do not learn how the world operates by observing or experiencing it, leading to many foolish mistakes. They contend that a more advanced modeling approach, such as a world model [665], is necessary to develop AGI.
We cannot definitively determine which viewpoint is correct until true AGI is achieved, but we believe that such discussions and debates are beneficial for the overall development of the community.
From virtual simulated environment to physical environment. As mentioned earlier, there is a significant gap between virtual simulation environments and the real physical world: virtual environments are scene-constrained, task-specific, and interacted with in a simulated manner [391; 666], whereas real-world environments are boundless, accommodate a wide range of tasks, and require physical interaction. To bridge this gap, agents must address various challenges stemming from external factors and their own capabilities, allowing them to effectively navigate and operate in the complex physical world. | 2309.07864#189 |
2309.07864 | 190 | First and foremost, a critical issue is the need for suitable hardware support when deploying the agent in a physical environment. This places high demands on the adaptability of the hardware. In a simulated environment, both the perception and action spaces of an agent are virtual. This means that in most cases, the results of the agent's operations, whether in perceiving inputs or generating outputs, can be guaranteed [395]. However, when an agent transitions to a real physical environment, its instructions may not be well executed by hardware devices such as sensors or robotic arms, significantly affecting the agent's task efficiency. Designing a dedicated interface or conversion mechanism between the agent and the hardware device is feasible. However, it can pose challenges to the system's reusability and simplicity.
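The "dedicated interface or conversion mechanism" mentioned above can be pictured as a thin adapter layer between abstract agent actions and concrete device commands. The sketch below is purely illustrative: the driver, action names, and parameters are hypothetical, and a real deployment would target an actual robotics SDK.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    name: str    # e.g. "grasp" (hypothetical action vocabulary)
    params: dict

class MockGripperDriver:
    """Stands in for a real device driver; records commands it receives."""
    def __init__(self):
        self.log = []

    def send(self, command: str):
        self.log.append(command)

class HardwareAdapter:
    """Translates abstract actions into driver commands, rejecting anything
    the hardware cannot execute instead of failing silently."""
    SUPPORTED = {"grasp", "release"}

    def __init__(self, driver):
        self.driver = driver

    def execute(self, action: AgentAction) -> bool:
        if action.name not in self.SUPPORTED:
            return False  # surface the mismatch back to the agent
        self.driver.send(f"{action.name}:{action.params.get('force', 0)}")
        return True

driver = MockGripperDriver()
adapter = HardwareAdapter(driver)
ok = adapter.execute(AgentAction("grasp", {"force": 5}))
bad = adapter.execute(AgentAction("teleport", {}))
```

The design choice illustrated here is exactly the trade-off the text notes: the adapter isolates the agent from device details, at the cost of another layer that must be maintained per device.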
In order to make this leap, the agent needs enhanced environmental generalization capabilities. To integrate seamlessly into the real physical world, agents not only need to understand and reason about ambiguous instructions with implied meanings [128] but must also possess the ability to learn and apply new skills flexibly [190; 592]. Furthermore, when dealing with an infinite and open world, the agent's limited context poses significant challenges [236; 667]. This determines whether the agent can effectively handle a vast amount of information from the world and operate smoothly. | 2309.07864#190 |
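One common way to cope with the limited-context problem discussed above is to keep only the most recent interaction history that fits a fixed budget. This is a minimal sketch of that truncation idea (real agents also use summarization or retrieval, which are not shown):

```python
# Toy sketch of context management under a fixed budget: retain the
# longest recent suffix of the interaction history that still fits.

def fit_context(history, budget):
    """Return the longest suffix of `history` whose total length fits `budget`."""
    kept, used = [], 0
    for item in reversed(history):  # walk from most recent backwards
        if used + len(item) > budget:
            break
        kept.append(item)
        used += len(item)
    kept.reverse()  # restore chronological order
    return kept

history = ["obs: door ahead", "act: open door", "obs: hallway", "act: walk"]
window = fit_context(history, budget=25)
```

The budget here counts characters for simplicity; a real system would count model tokens.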
2309.07864 | 191 | Finally, in a simulated environment, the inputs and outputs of the agent are virtual, allowing for countless trial and error attempts [432]. In such a scenario, the tolerance level for errors is high and does not lead to actual harm. However, in a physical environment, the agent's improper behavior or errors may cause real and sometimes irreversible harm to the environment. As a result, appropriate regulations and standards are highly necessary. We need to pay attention to the safety of agents when it comes to making decisions and generating actions, ensuring they do not pose threats or harm to the real world.
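One concrete safeguard implied above is a pre-execution gate that vets every agent-proposed action before it reaches the physical world. The sketch below is hypothetical: the action names and limits are invented for illustration, and real systems would need far richer safety policies.

```python
# Hedged sketch of a safety gate: only explicitly safelisted actions with
# in-bounds parameters are allowed to execute; everything else is rejected
# with a reason the agent (or an operator) can inspect.

SAFE_ACTIONS = {"move": {"max_speed": 1.0}, "grasp": {"max_force": 10.0}}

def check_action(name, params):
    """Return (allowed, reason); unknown actions are denied by default."""
    if name not in SAFE_ACTIONS:
        return False, f"action '{name}' not on the safelist"
    limits = SAFE_ACTIONS[name]
    if name == "move" and params.get("speed", 0.0) > limits["max_speed"]:
        return False, "speed limit exceeded"
    if name == "grasp" and params.get("force", 0.0) > limits["max_force"]:
        return False, "force limit exceeded"
    return True, "ok"

ok, _ = check_action("move", {"speed": 0.5})
blocked, why = check_action("move", {"speed": 5.0})
unknown, _ = check_action("ignite", {})
```

Denying by default (rather than safelisting failures) matches the point above: in a physical environment the cost of a wrong action can be irreversible, so the burden of proof sits on the action.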
Collective intelligence in AI agents. What magical trick drives our intelligence? The reality is, there's no magic to it. As Marvin Minsky eloquently expressed in “The Society of Mind” [442], the power of intelligence originates from our immense diversity, not from any singular, flawless principle. Often, decisions made by an individual may lack the precision seen in decisions formed by the majority. Collective intelligence is a kind of shared or group intelligence, a process where the opinions of many are consolidated into decisions. It arises from the collaboration and competition amongst various entities. This intelligence manifests in bacteria, animals, humans, and computer networks, appearing in various consensus-based decision-making patterns. | 2309.07864#191 |
2309.07864 | 192 | Creating a society of agents does not necessarily guarantee the emergence of collective intelligence with an increasing number of agents. Coordinating individual agents effectively is crucial to mitigate “groupthink” and individual cognitive biases, enabling cooperation and enhancing intellectual performance within the collective. By harnessing communication and evolution within an agent society, it becomes possible to simulate the evolution observed in biological societies, conduct sociological experiments, and gain insights that can potentially advance human society.
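The consensus-based decision-making patterns mentioned above can be illustrated by the simplest consolidation rule: majority voting over independent agent outputs. The "agents" in this sketch are canned strings; querying real models is out of scope here.

```python
from collections import Counter

# Minimal sketch of consensus-based decision making: several agents answer
# independently and the group adopts the majority answer.

def consolidate(opinions):
    """Majority vote over agent outputs; ties broken by first occurrence."""
    return Counter(opinions).most_common(1)[0][0]

# Hypothetical outputs from five independent agents planning a route.
opinions = ["route A", "route B", "route A", "route A", "route B"]
decision = consolidate(opinions)
```

Majority voting is only the degenerate case; the paragraph above stresses that without mechanisms against groupthink (e.g. ensuring the votes really are independent), adding more agents does not by itself produce collective intelligence.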
Agent as a Service / LLM-based Agent as a Service. With the development of cloud computing, the concept of XaaS (Everything as a Service) has garnered widespread attention [668]. This business model has brought convenience and cost savings to small and medium-sized enterprises and individuals due to its availability and scalability, lowering the barriers to using computing resources. For example, they can rent infrastructure on a cloud service platform without the need to buy computational machines and build their own data centers, saving a significant amount of manpower and money. This approach is known as Infrastructure as a Service (IaaS) [669; 670]. Similarly, cloud service platforms also provide basic platforms (Platform as a Service, PaaS) [671; 672], specific business software (Software as a Service, SaaS) [673; 674], and more. | 2309.07864#192 |
2309.07864 | 193 | As language models have scaled up in size, they often appear as black boxes to users. Therefore, users construct prompts to query models through APIs, a method referred to as Language Model as a Service (LMaaS) [675]. Similarly, because LLM-based agents are more complex than LLMs and are more challenging for small and medium-sized enterprises or individuals to build locally, organizations that possess these agents may consider offering them as a service, known as Agent as a Service (AaaS) or LLM-based Agent as a Service (LLMAaaS). Like other cloud services, AaaS can provide users with flexibility and on-demand service. However, it also faces many challenges, such as data security and privacy issues, visibility and controllability issues, and cloud migration issues, among others. Additionally, due to the uniqueness and potential capabilities of LLM-based agents, as mentioned in § 6.3, their robustness, trustworthiness, and concerns related to malicious use need to be considered before offering them as a service to customers.
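From a user's perspective, AaaS would look much like other cloud services: authenticate, submit a task, receive a result. The sketch below is entirely hypothetical (no such standard endpoint or schema exists), and the network transport is stubbed so only the shape of the interaction is visible.

```python
# Illustrative sketch of an "Agent as a Service" client. The endpoint path,
# request schema, and StubTransport are all invented for illustration;
# real providers define their own APIs.

class StubTransport:
    """Stands in for an HTTPS call to a hosted agent; echoes a canned reply."""
    def post(self, path: str, payload: dict) -> dict:
        return {"task": payload["task"], "status": "accepted"}

class AgentServiceClient:
    def __init__(self, transport, api_key: str):
        self.transport = transport
        self.api_key = api_key  # per-tenant key, as in other *aaS models

    def submit_task(self, task: str) -> dict:
        payload = {"task": task, "key": self.api_key}
        return self.transport.post("/v1/agent/tasks", payload)

client = AgentServiceClient(StubTransport(), api_key="demo")
result = client.submit_task("summarize quarterly report")
```

The on-demand, key-gated shape mirrors LMaaS; the open questions raised below (data privacy, controllability, robustness) live behind that same thin interface.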
# 7 Conclusion | 2309.07864#193 |
2309.07864 | 194 | # 7 Conclusion
This paper provides a comprehensive and systematic overview of LLM-based agents, discussing the potential challenges and opportunities in this flourishing field. We begin with a philosophical perspective, elucidating the origin and definition of agents, their evolution in the field of AI, and why LLMs are suited to serve as the main part of an agent's brain. Motivated by this background, we present a general conceptual framework for LLM-based agents, comprising three main components: the brain, perception, and action. Next, we introduce the wide-ranging applications of LLM-based agents, including single-agent applications, multi-agent systems, and human-agent collaboration. Furthermore, we move beyond the notion of agents merely as assistants, exploring their social behavior and psychological activities, and situating them within simulated social environments to observe emerging social phenomena and insights for humanity. Finally, we engage in discussions and offer a glimpse into the future, touching upon the mutual inspiration between LLM research and agent research, the evaluation of LLM-based agents, the risks associated with them, the opportunities in scaling the number of agents, and some open problems such as Agent as a Service and whether LLM-based agents represent a potential path to AGI. We hope our efforts can provide inspiration to the community and facilitate research in related fields.
# Acknowledgements | 2309.07864#194 |
2309.07864 | 195 | # Acknowledgements
Thanks to Professor Guoyu Wang for carefully reviewing the ethics of the article. Thanks to Jinzhu Xiong for her excellent drawing skills in producing the impressive Figure 1.
# References
[1] Russell, S. J. Artificial intelligence a modern approach. Pearson Education, Inc., 2010.
[2] Diderot, D. Diderot's early philosophical works. 4. Open Court, 1911.
[3] Turing, A. M. Computing machinery and intelligence. Springer, 2009.
[4] Wooldridge, M. J., N. R. Jennings. Intelligent agents: theory and practice. Knowl. Eng. Rev., 10(2):115–152, 1995.
[5] Schlosser, M. Agency. In E. N. Zalta, ed., The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Winter 2019 edn., 2019.
[6] Agha, G. A. Actors: a Model of Concurrent Computation in Distributed Systems (Parallel Processing, Semantics, Open, Programming Languages, Artificial Intelligence). Ph.D. thesis, University of Michigan, USA, 1985. | 2309.07864#195 |
2309.07864 | 196 | [7] Green, S., L. Hurst, B. Nangle, et al. Software agents: A review. Department of Computer Science, Trinity College Dublin, Tech. Rep. TCS-CS-1997-06, 1997.
[8] Genesereth, M. R., S. P. Ketchpel. Software agents. Commun. ACM, 37(7):48–53, 1994.
[9] Goodwin, R. Formalizing properties of agents. J. Log. Comput., 5(6):763–781, 1995.
[10] Padgham, L., M. Winikoff. Developing intelligent agent systems: A practical guide. John Wiley & Sons, 2005.
[11] Shoham, Y. Agent oriented programming. In M. Masuch, L. Pólos, eds., Knowledge Representation and Reasoning Under Uncertainty, Logic at Work [International Conference Logic at Work, Amsterdam, The Netherlands, December 17-19, 1992], vol. 808 of Lecture Notes in Computer Science, pages 123–129. Springer, 1992.
[12] Hutter, M. Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media, 2004.
[13] Fikes, R., N. J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. In D. C. Cooper, ed., Proceedings of the 2nd International Joint Conference on Artificial Intelligence, London, UK, September 1-3, 1971, pages 608–620. William Kaufmann, 1971.
[14] Sacerdoti, E. D. Planning in a hierarchy of abstraction spaces. In N. J. Nilsson, ed., Proceedings of the 3rd International Joint Conference on Artificial Intelligence, Stanford, CA, USA, August 20-23, 1973, pages 412–422. William Kaufmann, 1973.
[15] Brooks, R. A. Intelligence without representation. Artificial Intelligence, 47(1-3):139–159, 1991.
[16] Maes, P. Designing autonomous agents: Theory and practice from biology to engineering and back. MIT press, 1990.
[17] Ribeiro, C. Reinforcement learning agents. Artificial Intelligence Review, 17:223–250, 2002.
[18] Kaelbling, L. P., M. L. Littman, A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[19] Guha, R. V., D. B. Lenat. Enabling agents to work together. Communications of the ACM, 37(7):126–142, 1994.
[20] Kaelbling, L. P., et al. An architecture for intelligent reactive systems. Reasoning about Actions and Plans, pages 395–410, 1987.
[21] Sutton, R. S., A. G. Barto. Reinforcement learning: An introduction. MIT press, 2018.
[22] Park, J. S., J. C. O'Brien, C. J. Cai, et al. Generative agents: Interactive simulacra of human behavior. CoRR, abs/2304.03442, 2023.
[23] Wang, Z., G. Zhang, K. Yang, et al. Interactive natural language processing. CoRR, abs/2305.13246, 2023.
[24] Ouyang, L., J. Wu, X. Jiang, et al. Training language models to follow instructions with human feedback. In NeurIPS. 2022.
[25] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023.
[26] Wei, J., Y. Tay, R. Bommasani, et al. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022.
[27] Liu, R., R. Yang, C. Jia, et al. Training socially aligned language models in simulated human society. CoRR, abs/2305.16960, 2023.
[28] Sumers, T. R., S. Yao, K. Narasimhan, et al. Cognitive architectures for language agents. CoRR, abs/2309.02427, 2023.
[29] Weng, L. LLM-powered autonomous agents. lilianweng.github.io, 2023.
[30] Bisk, Y., A. Holtzman, J. Thomason, et al. Experience grounds language. In B. Webber, T. Cohn, Y. He, Y. Liu, eds., Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8718–8735. Association for Computational Linguistics, 2020.
[31] Bubeck, S., V. Chandrasekaran, R. Eldan, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, abs/2303.12712, 2023.
[32] Anscombe, G. E. M. Intention. Harvard University Press, 2000.
[33] Davidson, D. Actions, reasons, and causes. The Journal of Philosophy, 60(23):685–700, 1963.
[34] Davidson, D. Agency. In A. Marras, R. N. Bronaugh, R. W. Binkley, eds., Agent, Action, and Reason, pages 1–37. University of Toronto Press, 1971.
[35] Dennett, D. C. Précis of The Intentional Stance. Behavioral and Brain Sciences, 11(3):495–505, 1988.
[36] Barandiaran, X. E., E. Di Paolo, M. Rohde. Defining agency: Individuality, normativity, asymmetry, and spatio-temporality in action. Adaptive Behavior, 17(5):367–386, 2009.
[37] McCarthy, J. Ascribing mental qualities to machines. Stanford University. Computer Science Department, 1979.
[38] Rosenschein, S. J., L. P. Kaelbling. The synthesis of digital machines with provable epistemic properties. In Theoretical Aspects of Reasoning about Knowledge, pages 83–98. Elsevier, 1986.
[39] Radford, A., K. Narasimhan, T. Salimans, et al. Improving language understanding by generative pre-training. OpenAI, 2018.
[40] Radford, A., J. Wu, R. Child, et al. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
[41] In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin, eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 2020.
[42] Lin, C., A. Jaech, X. Li, et al. Limitations of autoregressive models and their alternatives. In K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tür, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, Y. Zhou, eds., Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5147–5173. Association for Computational Linguistics, 2021.
[43] Tomasello, M. Constructing a language: A usage-based theory of language acquisition. Harvard university press, 2005.
[44] Bloom, P. How children learn the meanings of words. MIT Press, 2002.
[45] Zwaan, R. A., C. J. Madden. Embodied sentence comprehension. Grounding cognition: The role of perception and action in memory, language, and thinking, 22, 2005.
[46] Andreas, J. Language models as agent models. In Y. Goldberg, Z. Kozareva, Y. Zhang, eds., Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5769–5779. Association for Computational Linguistics, 2022.
[47] Wong, L., G. Grand, A. K. Lew, et al. From word models to world models: Translating from natural language to the probabilistic language of thought. CoRR, abs/2306.12672, 2023.
[48] Radford, A., R. Józefowicz, I. Sutskever. Learning to generate reviews and discovering sentiment. CoRR, abs/1704.01444, 2017.
[49] Li, B. Z., M. I. Nye, J. Andreas. Implicit representations of meaning in neural language models. In C. Zong, F. Xia, W. Li, R. Navigli, eds., Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021 (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 1813–1827. Association for Computational Linguistics, 2021.
[50] Mukhopadhyay, U., L. M. Stephens, M. N. Huhns, et al. An intelligent system for document retrieval in distributed office environments. J. Am. Soc. Inf. Sci., 37(3):123–135, 1986.
[51] Maes, P. Situated agents can have goals. Robotics Auton. Syst., 6(1-2):49–70, 1990.
[52] Nilsson, N. J. Toward agent programs with circuit semantics. Tech. rep., 1992.
[53] Müller, J. P., M. Pischel. Modelling interacting agents in dynamic environments. In Proceedings of the 11th European Conference on Artificial Intelligence, pages 709–713. 1994.
[54] Brooks, R. A robust layered control system for a mobile robot. IEEE journal on robotics and automation, 2(1):14–23, 1986.
[55] Brooks, R. A. Intelligence without reason. In The artificial life route to artificial intelligence, pages 25–81. Routledge, 2018.
[56] Newell, A., H. A. Simon. Computer science as empirical inquiry: Symbols and search. Commun. ACM, 19(3):113–126, 1976.
[57] Ginsberg, M. L. Essentials of Artificial Intelligence. Morgan Kaufmann, 1993.
[58] Wilkins, D. E. Practical planning - extending the classical AI planning paradigm. Morgan Kaufmann series in representation and reasoning. Morgan Kaufmann, 1988.
[59] Shardlow, N. Action and agency in cognitive science. Master's thesis, Department of Psychology, University of Manchester, 1990.
[60] Sacerdoti, E. D. The nonlinear nature of plans. In Advance Papers of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, Georgia, USSR, September 3-8, 1975, pages 206–214. 1975.
[61] Russell, S. J., E. Wefald. Do the right thing: studies in limited rationality. MIT press, 1991.
[62] Schoppers, M. Universal plans for reactive robots in unpredictable environments. In J. P. McDermott, ed., Proceedings of the 10th International Joint Conference on Artificial Intelligence, Milan, Italy, August 23-28, 1987, pages 1039–1046. Morgan Kaufmann, 1987.
[63] Brooks, R. A. A robust layered control system for a mobile robot. IEEE J. Robotics Autom., 2(1):14–23, 1986.
[64] Minsky, M. Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8–30, 1961.
[65] In Proceedings of the fifth international conference on Autonomous agents, pages 377–384. 2001.
[66] Watkins, C. J. C. H. Learning from delayed rewards, 1989.
[67] Rummery, G. A., M. Niranjan. On-line Q-learning using connectionist systems, vol. 37. University of Cambridge, Department of Engineering, Cambridge, UK, 1994.
[68] Tesauro, G., et al. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.
[69] Li, Y. Deep reinforcement learning: An overview. arXiv preprint arXiv:1701.07274, 2017.
[70] Silver, D., A. Huang, C. J. Maddison, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[71] Mnih, V., K. Kavukcuoglu, D. Silver, et al. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[72] Farebrother, J., M. C. Machado, M. Bowling. Generalization and regularization in DQN. CoRR, abs/1810.00123, 2018.
[73] Zhang, C., O. Vinyals, R. Munos, et al. A study on overfitting in deep reinforcement learning. CoRR, abs/1804.06893, 2018.
[74] Justesen, N., R. R. Torrado, P. Bontrager, et al. Illuminating generalization in deep reinforcement learning through procedural level generation. arXiv preprint arXiv:1806.10729, 2018.
[75] Dulac-Arnold, G., N. Levine, D. J. Mankowitz, et al. Challenges of real-world reinforcement learning: definitions, benchmarks and analysis. Mach. Learn., 110(9):2419–2468, 2021.
[76] Ghosh, D., J. Rahme, A. Kumar, et al. Why generalization in RL is difficult: Epistemic POMDPs and implicit partial observability. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, J. W. Vaughan, eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 25502–25515. 2021.
[77] Brys, T., A. Harutyunyan, M. E. Taylor, et al. Policy transfer using reward shaping. In G. Weiss, P. Yolum, R. H. Bordini, E. Elkind, eds., Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2015, Istanbul, Turkey, May 4-8, 2015, pages 181–188. ACM, 2015.
[78] Parisotto, E., J. L. Ba, R. Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015.
[79] Zhu, Z., K. Lin, J. Zhou. Transfer learning in deep reinforcement learning: A survey. CoRR, abs/2009.07888, 2020.
[80] Duan, Y., J. Schulman, X. Chen, et al. RL$^2$: Fast reinforcement learning via slow reinforcement learning. CoRR, abs/1611.02779, 2016.
[81] Finn, C., P. Abbeel, S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In D. Precup, Y. W. Teh, eds., Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, vol. 70 of Proceedings of Machine Learning Research, pages 1126–1135. PMLR, 2017.
[82] Gupta, A., R. Mendonca, Y. Liu, et al. Meta-reinforcement learning of structured exploration strategies. In S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett, eds., Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 5307–5316. 2018.
[83] Rakelly, K., A. Zhou, C. Finn, et al. Efficient off-policy meta-reinforcement learning via probabilistic context variables. In K. Chaudhuri, R. Salakhutdinov, eds., Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, vol. 97 of Proceedings of Machine Learning Research, pages 5331–5340. PMLR, 2019.
[84] Fakoor, R., P. Chaudhari, S. Soatto, et al. Meta-Q-learning. arXiv preprint arXiv:1910.00125, 2019.
[85] Vanschoren, J. Meta-learning: A survey. arXiv preprint arXiv:1810.03548, 2018.
[86] Taylor, M. E., P. Stone. Transfer learning for reinforcement learning domains: A survey. J. Mach. Learn. Res., 10:1633–1685, 2009.
[87] Tirinzoni, A., A. Sessa, M. Pirotta, et al. Importance weighted transfer of samples in reinforcement learning. In J. G. Dy, A. Krause, eds., Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, vol. 80 of Proceedings of Machine Learning Research, pages 4943–4952. PMLR, 2018.
[88] Beck, J., R. Vuorio, E. Z. Liu, et al. A survey of meta-reinforcement learning. CoRR, abs/2301.08028, 2023.
[89] Wang, L., C. Ma, X. Feng, et al. A survey on large language model based autonomous agents. CoRR, abs/2308.11432, 2023.
[90] Nakano, R., J. Hilton, S. Balaji, et al. WebGPT: Browser-assisted question-answering with human feedback. CoRR, abs/2112.09332, 2021.
[91] Yao, S., J. Zhao, D. Yu, et al. ReAct: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[92] Schick, T., J. Dwivedi-Yu, R. Dessì, et al. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023.
[93] Lu, P., B. Peng, H. Cheng, et al. Chameleon: Plug-and-play compositional reasoning with large language models. CoRR, abs/2304.09842, 2023.
[94] Qin, Y., S. Hu, Y. Lin, et al. Tool learning with foundation models. CoRR, abs/2304.08354, 2023.
[95] Wei, J., X. Wang, D. Schuurmans, et al. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS. 2022.
[96] Kojima, T., S. S. Gu, M. Reid, et al. Large language models are zero-shot reasoners. In NeurIPS. 2022.
[97] Wang, X., J. Wei, D. Schuurmans, et al. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[98] Zhou, D., N. Schärli, L. Hou, et al. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[99] Xi, Z., S. Jin, Y. Zhou, et al. Self-polish: Enhance reasoning in large language models via problem refinement. CoRR, abs/2305.14497, 2023.
[100] Shinn, N., F. Cassano, B. Labash, et al. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
[101] Song, C. H., J. Wu, C. Washington, et al. Llm-planner: Few-shot grounded planning for embodied agents with large language models. CoRR, abs/2212.04088, 2022.
[102] Akyürek, A. F., E. Akyürek, A. Kalyan, et al. RL4F: generating natural language feedback with reinforcement learning for repairing model outputs. In A. Rogers, J. L. Boyd-Graber, N. Okazaki, eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 7716–7733. Association for Computational Linguistics, 2023.
[103] Peng, B., M. Galley, P. He, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. CoRR, abs/2302.12813, 2023.
[104] Liu, H., C. Sferrazza, P. Abbeel. Languages are rewards: Hindsight finetuning using human feedback. arXiv preprint arXiv:2302.02676, 2023.
[105] Wei, J., M. Bosma, V. Y. Zhao, et al. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[106] Sanh, V., A. Webson, C. Raffel, et al. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[107] Chung, H. W., L. Hou, S. Longpre, et al. Scaling instruction-finetuned language models. CoRR, abs/2210.11416, 2022.
[108] Li, G., H. A. A. K. Hammoud, H. Itani, et al. CAMEL: communicative agents for "mind" exploration of large scale language model society. CoRR, abs/2303.17760, 2023.
[109] Qian, C., X. Cong, C. Yang, et al. Communicative agents for software development. CoRR, abs/2307.07924, 2023.
[110] Boiko, D. A., R. MacKnight, G. Gomes. Emergent autonomous scientific research capabilities of large language models. CoRR, abs/2304.05332, 2023.
[111] Du, Y., S. Li, A. Torralba, et al. Improving factuality and reasoning in language models through multiagent debate. CoRR, abs/2305.14325, 2023.
[112] Liang, T., Z. He, W. Jiao, et al. Encouraging divergent thinking in large language models through multi-agent debate. CoRR, abs/2305.19118, 2023.
[113] Castelfranchi, C. Guarantees for autonomy in cognitive agent architecture. In M. J. Wooldridge, N. R. Jennings, eds., Intelligent Agents, ECAI-94 Workshop on Agent Theories, Architectures, and Languages, Amsterdam, The Netherlands, August 8-9, 1994, Proceedings, vol. 890 of Lecture Notes in Computer Science, pages 56–70. Springer, 1994.
[114] Gravitas, S. Auto-GPT: An Autonomous GPT-4 experiment, 2023. URL https://github.com/Significant-Gravitas/Auto-GPT, 2023.
[115] Nakajima, Y. BabyAGI. Python. https://github.com/yoheinakajima/babyagi, 2023.
[116] Yuan, A., A. Coenen, E. Reif, et al. Wordcraft: Story writing with large language models. In G. Jacucci, S. Kaski, C. Conati, S. Stumpf, T. Ruotsalo, K. Gajos, eds., IUI 2022: 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, March 22 - 25, 2022, pages 841–852. ACM, 2022.
[117] Franceschelli, G., M. Musolesi. On the creativity of large language models. CoRR, abs/2304.00008, 2023.
[118] Zhu, D., J. Chen, X. Shen, et al. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
[119] Yin, S., C. Fu, S. Zhao, et al. A survey on multimodal large language models. CoRR, abs/2306.13549, 2023.
[120] Driess, D., F. Xia, M. S. M. Sajjadi, et al. Palm-e: An embodied multimodal language model. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 8469–8488. PMLR, 2023.
[121] Mu, Y., Q. Zhang, M. Hu, et al. Embodiedgpt: Vision-language pre-training via embodied chain of thought. CoRR, abs/2305.15021, 2023.
[122] Brown, J. W. Beyond conflict monitoring: Cognitive control and the neural basis of thinking before you act. Current Directions in Psychological Science, 22(3):179–185, 2013.
[123] Kang, J., R. Laroche, X. Yuan, et al. Think before you act: Decision transformers with internal working memory. CoRR, abs/2305.16338, 2023.
[124] Valmeekam, K., S. Sreedharan, M. Marquez, et al. On the planning abilities of large language models (A critical investigation with a proposed benchmark). CoRR, abs/2302.06706, 2023.
[125] Liu, B., Y. Jiang, X. Zhang, et al. LLM+P: empowering large language models with optimal planning proficiency. CoRR, abs/2304.11477, 2023.
[126] Liu, H., C. Sferrazza, P. Abbeel. Chain of hindsight aligns language models with feedback. CoRR, abs/2302.02676, 2023.
[127] Lin, Y., Y. Chen. Llm-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. CoRR, abs/2305.13711, 2023.
[128] Lin, J., D. Fried, D. Klein, et al. Inferring rewards from language in context. In S. Muresan, P. Nakov, A. Villavicencio, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8546–8560. Association for Computational Linguistics, 2022.
[129] Fu, Y., H. Peng, T. Khot, et al. Improving language model negotiation with self-play and in-context learning from AI feedback. CoRR, abs/2305.10142, 2023.
[130] Zhang, H., W. Du, J. Shan, et al. Building cooperative embodied agents modularly with large language models. CoRR, abs/2307.02485, 2023.
[131] Darwin, C. On the Origin of Species. 1859.
[132] Bang, Y., S. Cahyawijaya, N. Lee, et al. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. CoRR, abs/2302.04023, 2023.
[133] Fang, T., S. Yang, K. Lan, et al. Is chatgpt a highly fluent grammatical error correction system? A comprehensive evaluation. CoRR, abs/2304.01746, 2023.
[134] Lu, A., H. Zhang, Y. Zhang, et al. Bounding the capabilities of large language models in open text generation with prompt constraints. In A. Vlachos, I. Augenstein, eds., Findings of the Association for Computational Linguistics: EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 1937–1963. Association for Computational Linguistics, 2023.
[135] Buehler, M. C., J. Adamy, T. H. Weisswange. Theory of mind based assistive communication in complex human robot cooperation. CoRR, abs/2109.01355, 2021.
[136] Shapira, N., M. Levy, S. H. Alavi, et al. Clever hans or neural theory of mind? stress testing social reasoning in large language models. CoRR, abs/2305.14763, 2023. | 2309.07864#222
2309.07864 | 223 | [137] Hill, F., K. Cho, A. Korhonen. Learning distributed representations of sentences from unlabelled data. In K. Knight, A. Nenkova, O. Rambow, eds., NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 1367–1377. The Association for Computational Linguistics, 2016.
[138] Collobert, R., J. Weston, L. Bottou, et al. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, 2011.
[139] Kaplan, J., S. McCandlish, T. Henighan, et al. Scaling laws for neural language models. CoRR, abs/2001.08361, 2020. | 2309.07864#223
2309.07864 | 224 | [140] Roberts, A., C. Raffel, N. Shazeer. How much knowledge can you pack into the parameters of a language model? In B. Webber, T. Cohn, Y. He, Y. Liu, eds., Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 5418–5426. Association for Computational Linguistics, 2020.
[141] Tandon, N., A. S. Varde, G. de Melo. Commonsense knowledge in machine intelligence. SIGMOD Rec., 46(4):49–52, 2017.
[142] Vulic, I., E. M. Ponti, R. Litschko, et al. Probing pretrained language models for lexical semantics. In B. Webber, T. Cohn, Y. He, Y. Liu, eds., Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7222–7240. Association for Computational Linguistics, 2020. | 2309.07864#224
2309.07864 | 225 | [143] Hewitt, J., C. D. Manning. A structural probe for finding syntax in word representations. In J. Burstein, C. Doran, T. Solorio, eds., Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4129–4138. Association for Computational Linguistics, 2019.
[144] Rau, L. F., P. S. Jacobs, U. Zernik. Information extraction and text summarization using linguistic knowledge acquisition. Inf. Process. Manag., 25(4):419–428, 1989.
[145] Yang, K., Z. Chen, Y. Cai, et al. Improved automatic keyword extraction given more semantic knowledge. In H. Gao, J. Kim, Y. Sakurai, eds., Database Systems for Advanced Applications - DASFAA 2016 International Workshops: BDMS, BDQM, MoI, and SeCoP, Dallas, TX, USA, April 16-19, 2016, Proceedings, vol. 9645 of Lecture Notes in Computer Science, pages 112–125. Springer, 2016. | 2309.07864#225
2309.07864 | 226 | [146] Beloucif, M., C. Biemann. Probing pre-trained language models for semantic attributes and their values. In M. Moens, X. Huang, L. Specia, S. W. Yih, eds., Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 2554–2559. Association for Computational Linguistics, 2021.
[147] Zhang, Z., H. Zhao. Advances in multi-turn dialogue comprehension: A survey. CoRR, abs/2103.03125, 2021.
[148] Safavi, T., D. Koutra. Relational world knowledge representation in contextual language models: A review. In M. Moens, X. Huang, L. Specia, S. W. Yih, eds., Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1053–1067. Association for Computational Linguistics, 2021.
[149] Jiang, Z., F. F. Xu, J. Araki, et al. How can we know what language models know? Trans. Assoc. Comput. Linguistics, 8:423–438, 2020. | 2309.07864#226
2309.07864 | 227 | [150] Madaan, A., S. Zhou, U. Alon, et al. Language models of code are few-shot commonsense learners. In Y. Goldberg, Z. Kozareva, Y. Zhang, eds., Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 1384–1403. Association for Computational Linguistics, 2022.
[151] Xu, F. F., U. Alon, G. Neubig, et al. A systematic evaluation of large language models of code. In S. Chaudhuri, C. Sutton, eds., MAPS@PLDI 2022: 6th ACM SIGPLAN International Symposium on Machine Programming, San Diego, CA, USA, 13 June 2022, pages 1–10. ACM, 2022.
[152] Cobbe, K., V. Kosaraju, M. Bavarian, et al. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021.
[153] Thirunavukarasu, A. J., D. S. J. Ting, K. Elangovan, et al. Large language models in medicine. Nature Medicine, pages 1–11, 2023. | 2309.07864#227
2309.07864 | 228 | [154] Lai, Y., C. Li, Y. Wang, et al. DS-1000: A natural and reliable benchmark for data science code generation. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 18319–18345. PMLR, 2023.
[155] AlKhamissi, B., M. Li, A. Celikyilmaz, et al. A review on language models as knowledge bases. CoRR, abs/2204.06031, 2022.
[156] Kemker, R., M. McClure, A. Abitino, et al. Measuring catastrophic forgetting in neural networks. In S. A. McIlraith, K. Q. Weinberger, eds., Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 3390–3398. AAAI Press, 2018. | 2309.07864#228
2309.07864 | 229 | [157] Cao, N. D., W. Aziz, I. Titov. Editing factual knowledge in language models. In M. Moens, X. Huang, L. Specia, S. W. Yih, eds., Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6491–6506. Association for Computational Linguistics, 2021.
[158] Yao, Y., P. Wang, B. Tian, et al. Editing large language models: Problems, methods, and opportunities. CoRR, abs/2305.13172, 2023.
[159] Mitchell, E., C. Lin, A. Bosselut, et al. Memory-based model editing at scale. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, S. Sabato, eds., International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, vol. 162 of Proceedings of Machine Learning Research, pages 15817–15831. PMLR, 2022. | 2309.07864#229
2309.07864 | 230 | [160] Manakul, P., A. Liusie, M. J. F. Gales. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. CoRR, abs/2303.08896, 2023.
[161] Li, M., B. Peng, Z. Zhang. Self-checker: Plug-and-play modules for fact-checking with large language models. CoRR, abs/2305.14623, 2023.
[162] Gou, Z., Z. Shao, Y. Gong, et al. CRITIC: large language models can self-correct with tool-interactive critiquing. CoRR, abs/2305.11738, 2023.
[163] Lewis, M., Y. Liu, N. Goyal, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In D. Jurafsky, J. Chai, N. Schluter, J. R. Tetreault, eds., Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics, 2020.

The Rise and Potential of Large Language Model Based Agents: A Survey
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
arXiv:2309.07864 [cs.AI, cs.CL], 86 pages, 12 figures. http://arxiv.org/pdf/2309.07864

For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. What the community actually lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action; the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository of the related papers is available at https://github.com/WooooDyy/LLM-Agent-Paper-List.
[164] Park, H. H., Y. Vyas, K. Shah. Efficient classification of long documents using transformers. In S. Muresan, P. Nakov, A. Villavicencio, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 702–709. Association for Computational Linguistics, 2022.
[165] Guo, M., J. Ainslie, D. C. Uthus, et al. Longt5: Efficient text-to-text transformer for long sequences. In M. Carpuat, M. de Marneffe, I. V. M. Ruíz, eds., Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 724–736. Association for Computational Linguistics, 2022.
[166] Ainslie, J., T. Lei, M. de Jong, et al. Colt5: Faster long-range transformers with conditional computation. CoRR, abs/2303.09752, 2023.
[167] Ruoss, A., G. Delétang, T. Genewein, et al. Randomized positional encodings boost length generalization of transformers. In A. Rogers, J. L. Boyd-Graber, N. Okazaki, eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 1889–1903. Association for Computational Linguistics, 2023.
[168] Liang, X., B. Wang, H. Huang, et al. Unleashing infinite-length input capacity for large-scale language models with self-controlled memory system. CoRR, abs/2304.13343, 2023.
[169] Shinn, N., B. Labash, A. Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. CoRR, abs/2303.11366, 2023.
[170] Zhong, W., L. Guo, Q. Gao, et al. Memorybank: Enhancing large language models with long-term memory. CoRR, abs/2305.10250, 2023.
[171] Chan, C., W. Chen, Y. Su, et al. Chateval: Towards better llm-based evaluators through multi-agent debate. CoRR, abs/2308.07201, 2023.
[172] Zhu, X., Y. Chen, H. Tian, et al. Ghost in the minecraft: Generally capable agents for open- world environments via large language models with text-based knowledge and memory. CoRR, abs/2305.17144, 2023.
[173] Modarressi, A., A. Imani, M. Fayyaz, et al. RET-LLM: towards a general read-write memory for large language models. CoRR, abs/2305.14322, 2023.
[174] Lin, J., H. Zhao, A. Zhang, et al. Agentsims: An open-source sandbox for large language model evaluation. CoRR, abs/2308.04026, 2023.
[175] Hu, C., J. Fu, C. Du, et al. Chatdb: Augmenting llms with databases as their symbolic memory. CoRR, abs/2306.03901, 2023.
[176] Huang, Z., S. Gutierrez, H. Kamana, et al. Memory sandbox: Transparent and interactive memory management for conversational agents. CoRR, abs/2308.01542, 2023.
[177] Creswell, A., M. Shanahan, I. Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[178] Madaan, A., N. Tandon, P. Gupta, et al. Self-refine: Iterative refinement with self-feedback. CoRR, abs/2303.17651, 2023.
[179] Ichter, B., A. Brohan, Y. Chebotar, et al. Do as I can, not as I say: Grounding language in robotic affordances. In K. Liu, D. Kulic, J. Ichnowski, eds., Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, vol. 205 of Proceedings of Machine Learning Research, pages 287–318. PMLR, 2022.
[180] Shen, Y., K. Song, X. Tan, et al. Hugginggpt: Solving AI tasks with chatgpt and its friends in huggingface. CoRR, abs/2303.17580, 2023.
[181] Yao, S., D. Yu, J. Zhao, et al. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601, 2023.
[182] Wu, Y., S. Y. Min, Y. Bisk, et al. Plan, eliminate, and track - language models are good teachers for embodied agents. CoRR, abs/2305.02412, 2023.
[183] Wang, Z., S. Cai, A. Liu, et al. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. CoRR, abs/2302.01560, 2023.
[184] Hao, S., Y. Gu, H. Ma, et al. Reasoning with language model is planning with world model. CoRR, abs/2305.14992, 2023.
[185] Lin, B. Y., Y. Fu, K. Yang, et al. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. CoRR, abs/2305.17390, 2023.
[186] Karpas, E., O. Abend, Y. Belinkov, et al. MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. CoRR, abs/2205.00445, 2022.
[187] Huang, W., F. Xia, T. Xiao, et al. Inner monologue: Embodied reasoning through planning with language models. In K. Liu, D. Kulic, J. Ichnowski, eds., Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, vol. 205 of Proceedings of Machine Learning Research, pages 1769–1782. PMLR, 2022.
[188] Chen, Z., K. Zhou, B. Zhang, et al. Chatcot: Tool-augmented chain-of-thought reasoning on chat-based large language models. CoRR, abs/2305.14323, 2023.
[189] Wu, T., M. Terry, C. J. Cai. AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In S. D. J. Barbosa, C. Lampe, C. Appert, D. A. Shamma, S. M. Drucker, J. R. Williamson, K. Yatani, eds., CHI '22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022, pages 385:1–385:22. ACM, 2022.
[190] Wang, G., Y. Xie, Y. Jiang, et al. Voyager: An open-ended embodied agent with large language models. CoRR, abs/2305.16291, 2023.
[191] Zhao, X., M. Li, C. Weber, et al. Chat with the environment: Interactive multimodal perception using large language models. CoRR, abs/2303.08268, 2023.
[192] Miao, N., Y. W. Teh, T. Rainforth. Selfcheck: Using llms to zero-shot check their own step-by-step reasoning. CoRR, abs/2308.00436, 2023.
[193] Wang, X., W. Wang, Y. Cao, et al. Images speak in images: A generalist painter for in-context visual learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 6830–6839. IEEE, 2023.
[194] Wang, C., S. Chen, Y. Wu, et al. Neural codec language models are zero-shot text to speech synthesizers. CoRR, abs/2301.02111, 2023.
[195] Dong, Q., L. Li, D. Dai, et al. A survey for in-context learning. CoRR, abs/2301.00234, 2023.
[196] Ke, Z., B. Liu. Continual learning of natural language processing tasks: A survey. CoRR, abs/2211.12701, 2022.

[197] Wang, L., X. Zhang, H. Su, et al. A comprehensive survey of continual learning: Theory, method and application. CoRR, abs/2302.00487, 2023.
[198] Razdaibiedina, A., Y. Mao, R. Hou, et al. Progressive prompts: Continual learning for language models. In The Eleventh International Conference on Learning Representations. 2023.
[199] Marshall, L. H., H. W. Magoun. Discoveries in the human brain: neuroscience prehistory, brain structure, and function. Springer Science & Business Media, 2013.
[200] Searle, J. R. What is language: some preliminary remarks. Explorations in Pragmatics. Linguistic, cognitive and intercultural aspects, pages 7–37, 2007.
[201] Touvron, H., T. Lavril, G. Izacard, et al. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023.
[202] Scao, T. L., A. Fan, C. Akiki, et al. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100, 2022.
[203] Almazrouei, E., H. Alobeidli, A. Alshamsi, et al. Falcon-40b: an open large language model with state-of-the-art performance, 2023.
[204] Serban, I. V., R. Lowe, L. Charlin, et al. Generative deep neural networks for dialogue: A short review. CoRR, abs/1611.06216, 2016. | 2309.07864#239
2309.07864 | 240 | [205] Vinyals, O., Q. V. Le. A neural conversational model. CoRR, abs/1506.05869, 2015.
[206] Adiwardana, D., M. Luong, D. R. So, et al. Towards a human-like open-domain chatbot. CoRR, abs/2001.09977, 2020.
[207] Zhuge, M., H. Liu, F. Faccio, et al. Mindstorms in natural language-based societies of mind. CoRR, abs/2305.17066, 2023.
[208] Roller, S., E. Dinan, N. Goyal, et al. Recipes for building an open-domain chatbot. In P. Merlo, J. Tiedemann, R. Tsarfaty, eds., Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 300–325. Association for Computational Linguistics, 2021.
[209] Taori, R., I. Gulrajani, T. Zhang, et al. Stanford alpaca: An instruction-following llama model, 2023. | 2309.07864#240
2309.07864 | 241 | [210] Raffel, C., N. Shazeer, A. Roberts, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.
[211] Ge, Y., W. Hua, J. Ji, et al. Openagi: When LLM meets domain experts. CoRR, abs/2304.04370, 2023.
[212] Rajpurkar, P., J. Zhang, K. Lopyrev, et al. SQuAD: 100,000+ questions for machine comprehension of text. In J. Su, X. Carreras, K. Duh, eds., Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. The Association for Computational Linguistics, 2016.
[213] Ahuja, K., R. Hada, M. Ochieng, et al. MEGA: multilingual evaluation of generative AI. CoRR, abs/2303.12528, 2023. | 2309.07864#241
2309.07864 | 242 | [214] See, A., A. Pappu, R. Saxena, et al. Do massively pretrained language models make better storytellers? In M. Bansal, A. Villavicencio, eds., Proceedings of the 23rd Conference on Computational Natural Language Learning, CoNLL 2019, Hong Kong, China, November 3-4, 2019, pages 843–861. Association for Computational Linguistics, 2019.
[215] Radford, A., J. Wu, D. Amodei, et al. Better language models and their implications. OpenAI blog, 1(2), 2019.
[216] McCoy, R. T., P. Smolensky, T. Linzen, et al. How much do language models copy from their training data? evaluating linguistic novelty in text generation using RAVEN. CoRR, abs/2111.09509, 2021.
[217] Tellex, S., T. Kollar, S. Dickerson, et al. Understanding natural language commands for robotic navigation and mobile manipulation. In W. Burgard, D. Roth, eds., Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2011, San Francisco, California, USA, August 7-11, 2011, pages 1507–1514. AAAI Press, 2011. | 2309.07864#242
2309.07864 | 243 | [218] Christiano, P. F., J. Leike, T. B. Brown, et al. Deep reinforcement learning from human preferences. In I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, R. Garnett, eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4299–4307. 2017.
[219] Basu, C., M. Singhal, A. D. Dragan. Learning from richer human guidance: Augmenting comparison-based learning with feature queries. In T. Kanda, S. Sabanovic, G. Hoffman, A. Tapus, eds., Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI 2018, Chicago, IL, USA, March 05-08, 2018, pages 132–140. ACM, 2018. | 2309.07864#243
2309.07864 | 244 | [220] Sumers, T. R., M. K. Ho, R. X. D. Hawkins, et al. Learning rewards from linguistic feedback. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 6002–6010. AAAI Press, 2021.
[221] Jeon, H. J., S. Milli, A. D. Dragan. Reward-rational (implicit) choice: A unifying formalism for reward learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin, eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 2020.
[222] McShane, M. Reference resolution challenges for intelligent agents: The need for knowledge. IEEE Intell. Syst., 24(4):47–58, 2009. | 2309.07864#244
2309.07864 | 245 | [223] Gururangan, S., A. Marasovic, S. Swayamdipta, et al. Don't stop pretraining: Adapt language models to domains and tasks. In D. Jurafsky, J. Chai, N. Schluter, J. R. Tetreault, eds., Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8342–8360. Association for Computational Linguistics, 2020.
[224] Shi, F., X. Chen, K. Misra, et al. Large language models can be easily distracted by irrelevant context. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 31210–31227. PMLR, 2023. | 2309.07864#245
2309.07864 | 246 | [225] Zhang, Y., Y. Li, L. Cui, et al. Siren's song in the AI ocean: A survey on hallucination in large language models. CoRR, abs/2309.01219, 2023.
[226] Mialon, G., R. Dessì, M. Lomeli, et al. Augmented language models: a survey. CoRR, abs/2302.07842, 2023.
[227] Ren, R., Y. Wang, Y. Qu, et al. Investigating the factual knowledge boundary of large language models with retrieval augmentation. CoRR, abs/2307.11019, 2023.
[228] Nuxoll, A. M., J. E. Laird. Extending cognitive architecture with episodic memory. In AAAI, pages 1560–1564. 2007.
[229] Squire, L. R. Mechanisms of memory. Science, 232(4758):1612–1619, 1986.
[230] Schwabe, L., K. Nader, J. C. Pruessner. Reconsolidation of human memory: brain mechanisms and clinical relevance. Biological psychiatry, 76(4):274–280, 2014.
[231] Hutter, M. A theory of universal artificial intelligence based on algorithmic complexity. arXiv preprint cs/0004001, 2000. | 2309.07864#246
2309.07864 | 247 | [232] Zhang, X., F. Wei, M. Zhou. HIBERT: document level pre-training of hierarchical bidirectional transformers for document summarization. In A. Korhonen, D. R. Traum, L. Màrquez, eds., Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 5059–5069. Association for Computational Linguistics, 2019.
[233] Mohtashami, A., M. Jaggi. Landmark attention: Random-access infinite context length for transformers. CoRR, abs/2305.16300, 2023.
[234] Chalkidis, I., X. Dai, M. Fergadiotis, et al. An exploration of hierarchical attention transformers for efficient long document classification. CoRR, abs/2210.05529, 2022. | 2309.07864#247
[235] Nie, Y., H. Huang, W. Wei, et al. Capturing global structural information in long document question answering with compressive graph selector network. In Y. Goldberg, Z. Kozareva, Y. Zhang, eds., Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5036–5047. Association for Computational Linguistics, 2022.
[236] Bertsch, A., U. Alon, G. Neubig, et al. Unlimiformer: Long-range transformers with unlimited length input. CoRR, abs/2305.01625, 2023.
[237] Manakul, P., M. J. F. Gales. Sparsity and sentence structure in encoder-decoder attention of summarization systems. In M. Moens, X. Huang, L. Specia, S. W. Yih, eds., Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 9359–9368. Association for Computational Linguistics, 2021.
[238] Zaheer, M., G. Guruganesh, K. A. Dubey, et al. Big bird: Transformers for longer sequences. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin, eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 2020.
[239] Zhao, A., D. Huang, Q. Xu, et al. Expel: LLM agents are experiential learners. CoRR, abs/2308.10144, 2023.
[240] Zhou, X., G. Li, Z. Liu. LLM as DBA. CoRR, abs/2308.05481, 2023.
[241] Wason, P. C. Reasoning about a rule. Quarterly journal of experimental psychology, 20(3):273–281, 1968.
[242] Wason, P. C., P. N. Johnson-Laird. Psychology of reasoning: Structure and content, vol. 86. Harvard University Press, 1972.
[243] Galotti, K. M. Approaches to studying formal and everyday reasoning. Psychological bulletin, 105(3):331, 1989.
[244] Huang, J., K. C. Chang. Towards reasoning in large language models: A survey. In A. Rogers, J. L. Boyd-Graber, N. Okazaki, eds., Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 1049–1065. Association for Computational Linguistics, 2023.
[245] Webb, T. W., K. J. Holyoak, H. Lu. Emergent analogical reasoning in large language models. CoRR, abs/2212.09196, 2022.
[246] Feng, G., B. Zhang, Y. Gu, et al. Towards revealing the mystery behind chain of thought: a theoretical perspective. CoRR, abs/2305.15408, 2023.
[247] Grafman, J., L. Spector, M. J. Rattermann. Planning and the brain. In The cognitive psychology of planning, pages 191–208. Psychology Press, 2004.
[248] Unterrainer, J. M., A. M. Owen. Planning and problem solving: from neuropsychology to functional neuroimaging. Journal of Physiology-Paris, 99(4-6):308–317, 2006.
[249] Zula, K. J., T. J. Chermack. Integrative literature review: Human capital planning: A review of literature and implications for human resource development. Human Resource Development Review, 6(3):245–262, 2007.
[250] Bratman, M. E., D. J. Israel, M. E. Pollack. Plans and resource-bounded practical reasoning. Computational intelligence, 4(3):349–355, 1988.
[251] Russell, S., P. Norvig. Artificial intelligence - a modern approach, 2nd Edition. Prentice Hall series in artificial intelligence. Prentice Hall, 2003.
[252] Fainstein, S. S., J. DeFilippis. Readings in planning theory. John Wiley & Sons, 2015.
[253] Sebastia, L., E. Onaindia, E. Marzal. Decomposition of planning problems. AI Communications, 19(1):49–81, 2006.
[254] Crosby, M., M. Rovatsos, R. Petrick. Automated agent decomposition for classical planning. In Proceedings of the International Conference on Automated Planning and Scheduling, vol. 23, pages 46–54. 2013.
[255] Xu, B., Z. Peng, B. Lei, et al. Rewoo: Decoupling reasoning from observations for efficient augmented language models. CoRR, abs/2305.18323, 2023.
[256] Raman, S. S., V. Cohen, E. Rosen, et al. Planning with large language models via corrective re-prompting. CoRR, abs/2211.09935, 2022.
[257] Lyu, Q., S. Havaldar, A. Stein, et al. Faithful chain-of-thought reasoning. CoRR, abs/2301.13379, 2023.
[258] Huang, W., P. Abbeel, D. Pathak, et al. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, S. Sabato, eds., International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, vol. 162 of Proceedings of Machine Learning Research, pages 9118–9147. PMLR, 2022.
[259] Dagan, G., F. Keller, A. Lascarides. Dynamic planning with a LLM. CoRR, abs/2308.06391, 2023.
[260] Rana, K., J. Haviland, S. Garg, et al. Sayplan: Grounding large language models using 3d scene graphs for scalable task planning. CoRR, abs/2307.06135, 2023.
[261] Peters, M. E., M. Neumann, M. Iyyer, et al. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Association for Computational Linguistics, New Orleans, Louisiana, 2018.
[262] Devlin, J., M. Chang, K. Lee, et al. BERT: pre-training of deep bidirectional transformers for language understanding. In J. Burstein, C. Doran, T. Solorio, eds., Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics, 2019.
[263] Solaiman, I., C. Dennison. Process for adapting language models to society (palms) with values-targeted datasets. Advances in Neural Information Processing Systems, 34:5861–5873, 2021.
[264] Bach, S. H., V. Sanh, Z. X. Yong, et al. Promptsource: An integrated development environment and repository for natural language prompts. In V. Basile, Z. Kozareva, S. Stajner, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 - System Demonstrations, Dublin, Ireland, May 22-27, 2022, pages 93–104. Association for Computational Linguistics, 2022.
[265] Iyer, S., X. V. Lin, R. Pasunuru, et al. OPT-IML: scaling language model instruction meta learning through the lens of generalization. CoRR, abs/2212.12017, 2022.
[266] Winston, P. H. Learning and reasoning by analogy. Commun. ACM, 23(12):689–703, 1980.
[267] Lu, Y., M. Bartolo, A. Moore, et al. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In S. Muresan, P. Nakov, A. Villavicencio, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8086–8098. Association for Computational Linguistics, 2022.
[268] Tsimpoukelli, M., J. Menick, S. Cabi, et al. Multimodal few-shot learning with frozen language models. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, J. W. Vaughan, eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 200–212. 2021.
[269] Bar, A., Y. Gandelsman, T. Darrell, et al. Visual prompting via image inpainting. In NeurIPS. 2022.
[270] Zhu, W., H. Liu, Q. Dong, et al. Multilingual machine translation with large language models: Empirical results and analysis. CoRR, abs/2304.04675, 2023.
[271] Zhang, Z., L. Zhou, C. Wang, et al. Speak foreign languages with your own voice: Cross-lingual neural codec language modeling. CoRR, abs/2303.03926, 2023.
[272] Zhang, J., J. Zhang, K. Pertsch, et al. Bootstrap your own skills: Learning to solve new tasks with large language model guidance. In 7th Annual Conference on Robot Learning. 2023.
[273] McCloskey, M., N. J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109–165, 1989.
[274] Kirkpatrick, J., R. Pascanu, N. Rabinowitz, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
[275] Li, Z., D. Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2935–2947, 2017.
[276] Farajtabar, M., N. Azizan, A. Mott, et al. Orthogonal gradient descent for continual learning. In International Conference on Artificial Intelligence and Statistics, pages 3762–3773. PMLR, 2020.
[277] Smith, J. S., Y.-C. Hsu, L. Zhang, et al. Continual diffusion: Continual customization of text-to-image diffusion with c-lora. arXiv preprint arXiv:2304.06027, 2023.
[278] Lopez-Paz, D., M. Ranzato. Gradient episodic memory for continual learning. Advances in Neural Information Processing Systems, 30, 2017.
[279] de Masson D'Autume, C., S. Ruder, L. Kong, et al. Episodic memory in lifelong language learning. Advances in Neural Information Processing Systems, 32, 2019.
[280] Rolnick, D., A. Ahuja, J. Schwarz, et al. Experience replay for continual learning. Advances in Neural Information Processing Systems, 32, 2019.
[281] Serrà, J., D. Surís, M. Miron, et al. Overcoming catastrophic forgetting with hard attention to the task. In International Conference on Machine Learning. 2018.
[282] Dosovitskiy, A., L. Beyer, A. Kolesnikov, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
[283] van den Oord, A., O. Vinyals, K. Kavukcuoglu. Neural discrete representation learning. In I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, R. Garnett, eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 6306–6315. 2017.
[284] Mehta, S., M. Rastegari. Mobilevit: Light-weight, general-purpose, and mobile-friendly vision transformer. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[285] Tolstikhin, I. O., N. Houlsby, A. Kolesnikov, et al. Mlp-mixer: An all-mlp architecture for vision. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, J. W. Vaughan, eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 24261–24272. 2021.
[286] Huang, S., L. Dong, W. Wang, et al. Language is not all you need: Aligning perception with language models. CoRR, abs/2302.14045, 2023.
[287] Li, J., D. Li, S. Savarese, et al. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 19730–19742. PMLR, 2023.
[288] Dai, W., J. Li, D. Li, et al. Instructblip: Towards general-purpose vision-language models with instruction tuning. CoRR, abs/2305.06500, 2023.
[289] Gong, T., C. Lyu, S. Zhang, et al. Multimodal-gpt: A vision and language model for dialogue with humans. CoRR, abs/2305.04790, 2023.
[290] Alayrac, J., J. Donahue, P. Luc, et al. Flamingo: a visual language model for few-shot learning. In NeurIPS. 2022.
[291] Su, Y., T. Lan, H. Li, et al. Pandagpt: One model to instruction-follow them all. CoRR, abs/2305.16355, 2023.
[292] Liu, H., C. Li, Q. Wu, et al. Visual instruction tuning. CoRR, abs/2304.08485, 2023.
[293] Huang, R., M. Li, D. Yang, et al. Audiogpt: Understanding and generating speech, music, sound, and talking head. CoRR, abs/2304.12995, 2023.
[294] Gong, Y., Y. Chung, J. R. Glass. AST: audio spectrogram transformer. In H. Hermansky, H. Černocký, L. Burget, L. Lamel, O. Scharenborg, P. Motlíček, eds., Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021, pages 571–575. ISCA, 2021.
[295] Hsu, W., B. Bolte, Y. H. Tsai, et al. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE ACM Trans. Audio Speech Lang. Process., 29:3451–3460, 2021.
[296] Chen, F., M. Han, H. Zhao, et al. X-LLM: bootstrapping advanced large language models by treating multi-modalities as foreign languages. CoRR, abs/2305.04160, 2023.
[297] Zhang, H., X. Li, L. Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. CoRR, abs/2306.02858, 2023.
[298] Liu, Z., Y. He, W. Wang, et al. Interngpt: Solving vision-centric tasks by interacting with chatbots beyond language. CoRR, abs/2305.05662, 2023.
[299] Hubel, D. H., T. N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106, 1962.
[300] Logothetis, N. K., D. L. Sheinberg. Visual object recognition. Annual Review of Neuroscience, 19(1):577–621, 1996.
[301] OpenAI. Introducing chatgpt. Website, 2022. https://openai.com/blog/chatgpt.
[302] Lu, J., X. Ren, Y. Ren, et al. Improving contextual language models for response retrieval in multi-turn conversation. In J. X. Huang, Y. Chang, X. Cheng, J. Kamps, V. Murdock, J. Wen, Y. Liu, eds., Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1805–1808. ACM, 2020.
[303] Huang, L., W. Wang, J. Chen, et al. Attention on attention for image captioning. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 4633–4642. IEEE, 2019.
[304] Pan, Y., T. Yao, Y. Li, et al. X-linear attention networks for image captioning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 10968–10977. Computer Vision Foundation / IEEE, 2020.
[305] Cornia, M., M. Stefanini, L. Baraldi, et al. M2: Meshed-memory transformer for image captioning. CoRR, abs/1912.08226, 2019.
[306] Chen, J., H. Guo, K. Yi, et al. Visualgpt: Data-efficient image captioning by balancing visual input and linguistic knowledge from pretraining. CoRR, abs/2102.10407, 2021.
[307] Li, K., Y. He, Y. Wang, et al. Videochat: Chat-centric video understanding. CoRR, abs/2305.06355, 2023.
Source: Xi, Z., W. Chen, X. Guo, et al. The Rise and Potential of Large Language Model Based Agents: A Survey. CoRR, abs/2309.07864, 2023. 86 pages, 12 figures. Categories: cs.AI, cs.CL.

Abstract: For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers is available at https://github.com/WooooDyy/LLM-Agent-Paper-List.
[308] Lin, J., Y. Du, O. Watkins, et al. Learning to model the world with language. CoRR, abs/2308.01399, 2023.
[309] Vaswani, A., N. Shazeer, N. Parmar, et al. Attention is all you need. In I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, R. Garnett, eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. 2017.
[310] Touvron, H., M. Cord, M. Douze, et al. Training data-efficient image transformers & distillation through attention. In M. Meila, T. Zhang, eds., Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, vol. 139 of Proceedings of Machine Learning Research, pages 10347–10357. PMLR, 2021.
2309.07864 | 266 | [311] Lu, J., C. Clark, R. Zellers, et al. UNIFIED-IO: A unified model for vision, language, and multi-modal tasks. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[312] Peng, Z., W. Wang, L. Dong, et al. Kosmos-2: Grounding multimodal large language models to the world. CoRR, abs/2306.14824, 2023.
[313] Lyu, C., M. Wu, L. Wang, et al. Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration. CoRR, abs/2306.09093, 2023.
[314] Maaz, M., H. A. Rasheed, S. H. Khan, et al. Video-chatgpt: Towards detailed video understanding via large vision and language models. CoRR, abs/2306.05424, 2023.
[315] Chen, M., I. Laina, A. Vedaldi. Training-free layout control with cross-attention guidance. CoRR, abs/2304.03373, 2023.
[316] Radford, A., J. W. Kim, T. Xu, et al. Robust speech recognition via large-scale weak supervision. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 28492–28518. PMLR, 2023.
[317] Ren, Y., Y. Ruan, X. Tan, et al. Fastspeech: Fast, robust and controllable text to speech. In H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. B. Fox, R. Garnett, eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3165–3174. 2019.
2309.07864 | 268 | [318] Ye, Z., Z. Zhao, Y. Ren, et al. Syntaspeech: Syntax-aware generative adversarial text-to-speech. In L. D. Raedt, ed., Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4468â4474. ijcai.org, 2022.
[319] Kim, J., J. Kong, J. Son. Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech. In M. Meila, T. Zhang, eds., Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, vol. 139 of Proceedings of Machine Learning Research, pages 5530–5540. PMLR, 2021.
[320] Wang, Z., S. Cornell, S. Choi, et al. Tf-gridnet: Integrating full- and sub-band modeling for speech separation. IEEE ACM Trans. Audio Speech Lang. Process., 31:3221–3236, 2023.
[321] Liu, J., C. Li, Y. Ren, et al. Diffsinger: Singing voice synthesis via shallow diffusion mechanism. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022, pages 11020–11028. AAAI Press, 2022.
[322] Inaguma, H., S. Dalmia, B. Yan, et al. Fast-md: Fast multi-decoder end-to-end speech translation with non-autoregressive hidden intermediates. In IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2021, Cartagena, Colombia, December 13-17, 2021, pages 922–929. IEEE, 2021.
[323] Flanagan, J. L. Speech analysis synthesis and perception, vol. 3. Springer Science & Business Media, 2013.
[324] Schwarz, B. Mapping the world in 3d. Nature Photonics, 4(7):429–430, 2010.
[325] Parkinson, B. W., J. J. Spilker. Progress in astronautics and aeronautics: Global positioning system: Theory and applications, vol. 164. AIAA, 1996.
2309.07864 | 270 | [326] Parisi, A., Y. Zhao, N. Fiedel. TALM: tool augmented language models. CoRR, abs/2205.12255, 2022.
[327] Clarebout, G., J. Elen, N. A. J. Collazo, et al. Metacognition and the Use of Tools, pages 187–195. Springer New York, New York, NY, 2013.
[328] Wu, C., S. Yin, W. Qi, et al. Visual chatgpt: Talking, drawing and editing with visual foundation models. CoRR, abs/2303.04671, 2023.
[329] Cai, T., X. Wang, T. Ma, et al. Large language models as tool makers. CoRR, abs/2305.17126, 2023.
[330] Qian, C., C. Han, Y. R. Fung, et al. CREATOR: disentangling abstract and concrete reasonings of large language models through tool creation. CoRR, abs/2305.14318, 2023.
[331] Chen, X., M. Lin, N. Schärli, et al. Teaching large language models to self-debug. CoRR, abs/2304.05128, 2023.
2309.07864 | 271 | [332] Liu, H., L. Lee, K. Lee, et al. Instruction-following agents with jointly pre-trained vision- language models. arXiv preprint arXiv:2210.13431, 2022.
[333] Lynch, C., A. Wahid, J. Tompson, et al. Interactive language: Talking to robots in real time. CoRR, abs/2210.06407, 2022.
[334] Jin, C., W. Tan, J. Yang, et al. Alphablock: Embodied finetuning for vision-language reasoning in robot manipulation. CoRR, abs/2305.18898, 2023.
[335] Shah, D., B. Osinski, B. Ichter, et al. Lm-nav: Robotic navigation with large pre-trained models of language, vision, and action. In K. Liu, D. Kulic, J. Ichnowski, eds., Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, vol. 205 of Proceedings of Machine Learning Research, pages 492–504. PMLR, 2022.
[336] Zhou, G., Y. Hong, Q. Wu. Navgpt: Explicit reasoning in vision-and-language navigation with large language models. CoRR, abs/2305.16986, 2023.
2309.07864 | 272 | [337] Fan, L., G. Wang, Y. Jiang, et al. Minedojo: Building open-ended embodied agents with internet-scale knowledge. In NeurIPS. 2022.
[338] Kanitscheider, I., J. Huizinga, D. Farhi, et al. Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft. CoRR, abs/2106.14876, 2021.
[339] Nottingham, K., P. Ammanabrolu, A. Suhr, et al. Do embodied agents dream of pixelated sheep: Embodied decision making using language guided world modelling. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 26311–26325. PMLR, 2023.