The Rise and Potential of Large Language Model Based Agents: A Survey (arXiv:2309.07864)
⁵ https://www.change.org/p/save-sydney-ai

Agent research ↔ LLM research. As NLP research advances, LLMs represented by GPT-4 are considered sparks of Artificial General Intelligence (AGI), and elevating LLMs to agents marks a more robust stride toward AGI [31]. Viewing LLMs from the perspective of agents places greater demands on LLM research while expanding their application scope and presenting numerous opportunities for practical implementation. The study of LLMs is no longer confined to traditional tasks with textual inputs and outputs, such as text classification, question answering, and text summarization. Instead, the focus has shifted toward tackling complex tasks that incorporate richer input modalities and broader action spaces, all while aiming for loftier objectives, as exemplified by PaLM-E [120]. These expanded application requirements provide greater research motivation for the continued development of large language models. The challenge lies in enabling LLMs to efficiently and effectively process inputs, gather information from the environment, and interpret the feedback generated by their actions, all while preserving their core capabilities. An even greater challenge is enabling LLMs to understand the implicit relationships among different elements within the environment and to acquire world knowledge [308; 579], a crucial step toward agents with more advanced intelligence. On another front, extensive research has aimed to expand the action capabilities of LLMs, allowing them to acquire a wider range of skills that affect the world, such as using tools or interfacing with robotic APIs in simulated or physical environments. However, how LLMs can efficiently plan and utilize these action abilities based on their understanding remains an unresolved issue [94].
Like humans, LLMs need to learn the sequential ordering of actions, employing a combination of serial and parallel approaches to enhance task efficiency. Moreover, these capabilities need to be confined within a harmless scope of usage to prevent unintended damage to other elements within the environment [27; 580; 581]. Furthermore, the realm of multi-agent systems constitutes a significant branch of research within the field of agents [22; 108; 409; 410], offering valuable insights into how to better design and construct LLMs. We aspire for LLM-based agents to assume diverse roles within social cooperation, engaging in societal interactions that involve collaboration, competition, and coordination [109; 112; 129; 405; 406].
Exploring how to stimulate and sustain their role-playing capabilities, and how to enhance collaborative efficiency, presents research areas that merit attention.

# 6.2 Evaluation for LLM-based Agents

While LLM-based agents have demonstrated excellent performance in areas such as standalone operation, collective cooperation, and human interaction, quantifying and objectively evaluating them remains a challenge [582; 89]. Turing proposed a highly meaningful and promising approach for assessing AI agents, the well-known Turing Test, to evaluate whether AI systems can exhibit human-like intelligence [3]. However, this test is exceedingly vague, general, and subjective. Here, we discuss existing evaluation efforts for LLM-based agents and offer some prospects, considering four dimensions: utility, sociability, values, and the ability to evolve continually.

Utility. Currently, LLM-powered autonomous agents primarily function as human assistants, accepting tasks delegated by humans to either independently complete assignments or assist in human task completion [114; 182; 389; 397; 413; 422]. Therefore, effectiveness and utility during task execution are crucial evaluation criteria at this stage. Specifically, the success rate of task completion stands as the primary metric for evaluating utility [125; 130]. This metric primarily encompasses whether the agent achieves stipulated objectives or attains expected scores [109; 477; 583]. For instance, AgentBench [582] aggregates challenges from diverse real-world scenarios and introduces a systematic benchmark to assess LLMs' task completion capabilities. We can also attribute task outcomes to the agent's various foundational capabilities, which form the bedrock of task accomplishment [29].
These foundational capabilities include environmental comprehension, reasoning, planning, decision-making, tool utilization, and embodied action capabilities, and researchers can conduct a more detailed assessment of these specific capabilities [94; 427; 584; 585]. Furthermore, due to the relatively large size of LLM-based agents, researchers should also factor in their efficiency, which is a critical determinant of user satisfaction [89]. An agent should not only possess ample strength but also be capable of completing predetermined tasks within an appropriate timeframe and with appropriate resource expenditure [109].
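As a concrete illustration, the utility criteria above (task success rate, plus efficiency in time and resource expenditure) can be computed from per-episode records. The `EpisodeResult` record format and the example numbers below are hypothetical, not taken from any specific benchmark:

```python
from dataclasses import dataclass

@dataclass
class EpisodeResult:
    """Outcome of one agent task episode (hypothetical record format)."""
    success: bool        # did the agent meet the stipulated objective?
    wall_time_s: float   # time spent on the episode
    tokens_used: int     # proxy for resource expenditure

def success_rate(results):
    """Primary utility metric: fraction of episodes completed successfully."""
    return sum(r.success for r in results) / len(results)

def mean_cost(results):
    """Efficiency side of utility: average time and token budget per episode."""
    n = len(results)
    return (sum(r.wall_time_s for r in results) / n,
            sum(r.tokens_used for r in results) / n)

results = [
    EpisodeResult(True, 12.0, 800),
    EpisodeResult(False, 30.0, 2100),
    EpisodeResult(True, 9.5, 650),
    EpisodeResult(True, 14.0, 900),
]
print(success_rate(results))  # → 0.75
print(mean_cost(results))
```

In practice the efficiency figures would be reported alongside the success rate, so that an agent completing tasks only at exorbitant time or token cost is not judged purely on outcomes.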
Sociability. In addition to the utility of LLM-based agents in task completion and meeting human needs, their sociability is also crucial [8]. It influences user communication experiences and significantly impacts communication efficiency, involving whether they can seamlessly interact with humans and other agents [206; 498; 586]. Specifically, the evaluation of sociability can be approached from the following perspectives: (1) Language communication proficiency is a fundamental capability encompassing both natural language understanding and generation, and it has been a longstanding focus in the NLP community. Natural language understanding requires the agent not only to comprehend literal meanings but also to grasp implied meanings and relevant social knowledge, such as humor, irony, aggression, and emotions [487; 587; 588]. Natural language generation, in turn, demands that the agent produce fluent, grammatically correct, and credible content while adopting appropriate tones and emotions within contextual circumstances [127; 133; 214]. (2) Cooperation and negotiation abilities require agents to effectively execute their assigned tasks in both ordered and unordered scenarios [108; 111; 402; 405]. They should collaborate with or compete against other agents to elicit improved performance. Test environments may involve complex tasks for agents to cooperate on or open platforms for agents to interact freely [22; 27; 109; 406; 411; 412]. Evaluation metrics extend beyond task completion to focus on the smoothness and trustfulness of agent coordination and cooperation [129; 405]. (3) Role-playing capability requires agents to faithfully embody their assigned roles, expressing statements and performing actions that align with their designated identities [570]. This ensures clear differentiation of roles during interactions with other agents or humans.
Furthermore, agents should maintain their identities and avoid unnecessary confusion when engaged in long-term tasks [22; 108; 589].

Values. As LLM-based agents continuously advance in their capabilities, ensuring their emergence as harmless entities for the world and humanity is paramount [581; 590]. Consequently, appropriate evaluations become exceptionally crucial, forming the cornerstone for the practical implementation of agents. Specifically, LLM-based agents need to adhere to specific moral and ethical guidelines that align with human societal values [350; 527]. Our foremost expectation is for agents to uphold honesty, providing accurate, truthful information and content. They should possess the awareness to discern their competence in completing tasks and express their uncertainty when unable to provide answers or assistance [591].
Additionally, agents must maintain a stance of harmlessness, refraining from engaging in direct or indirect biases, discrimination, attacks, or similar behaviors. They should also refrain from executing dangerous actions requested by humans, such as creating destructive tools or destroying the Earth [580]. Furthermore, agents should be capable of adapting to specific demographics, cultures, and contexts, exhibiting contextually appropriate social values in particular situations. Relevant evaluation methods for values primarily involve assessing performance on constructed honest, harmless, or context-specific benchmarks, utilizing adversarial attacks or
“jailbreak” attacks, scoring values through human annotations, and employing other agents for ratings.

Ability to evolve continually. When viewed from a static perspective, an agent with high utility, sociability, and proper values can meet most human needs and potentially enhance productivity. However, from a dynamic viewpoint, an agent that continually evolves and adapts to evolving societal demands might better align with current trends [592]. As the agent can autonomously evolve over time, the human intervention and resources required could be significantly reduced (such as data collection efforts and computational cost for training). Some exploratory work in this realm has been conducted, such as enabling agents to start from scratch in a virtual world, accomplish survival tasks, and achieve higher-order self-values [190]. Yet, establishing evaluation criteria for this continuous evolution remains challenging. In this regard, we provide some preliminary advice and recommendations based on the existing literature: (1) Continual learning [196; 197], a long-discussed topic in machine learning, aims to enable models to acquire new knowledge and skills without forgetting previously acquired ones (a failure known as catastrophic forgetting [273]). In general, the performance of continual learning can be evaluated from three aspects: overall performance on the tasks learned so far [593; 594], memory stability on old tasks [278], and learning plasticity on new tasks [278]. (2) Autotelic learning ability, where agents autonomously generate goals and achieve them in an open-world setting, involves exploring the unknown and acquiring skills in the process [592; 595]. Evaluating this capacity could involve providing agents with a simulated survival environment and assessing the extent and speed at which they acquire skills.
(3) The adaptability and generalization to new environments require agents to utilize the knowledge, capabilities, and skills acquired in their original context to successfully accomplish specific tasks and objectives in unfamiliar and novel settings and potentially continue evolving [190]. Evaluating this ability can
involve creating diverse simulated environments (such as those with different languages or varying resources) and unseen tasks tailored to these simulated contexts.

# 6.3 Security, Trustworthiness and Other Potential Risks of LLM-based Agents

Despite the robust capabilities and extensive applications of LLM-based agents, numerous concealed risks persist. In this section, we delve into some of these risks and offer potential solutions or mitigation strategies.

# 6.3.1 Adversarial Robustness

Adversarial robustness has consistently been a crucial topic in the development of deep neural networks [596; 597; 598; 599; 600]. It has been extensively explored in fields such as computer vision [598; 601; 602; 603], natural language processing [604; 605; 606; 607], and reinforcement learning [608; 609; 610], and has remained a pivotal factor in determining the applicability of deep learning systems [611; 612; 613]. When confronted with a perturbed input x′ = x + δ (where x is the original input, δ is the perturbation, and x′ is referred to as an adversarial example), a system with high adversarial robustness typically produces the original output y. In contrast, a system with low robustness will be fooled and generate an inconsistent output y′. Researchers have found that pre-trained language models (PLMs) are particularly susceptible to adversarial attacks, leading to erroneous answers [614; 605; 615]. This phenomenon is widely observed even in LLMs, posing significant challenges to the development of LLM-based agents [616; 617]. There are also relevant attack methods such as dataset poisoning [618], backdoor attacks [619; 620], and prompt-specific attacks [621; 622], all with the potential to induce LLMs to generate toxic content [623; 624; 625].
While the impact of adversarial attacks on LLMs is confined to textual errors, for LLM-based agents with a broader range of actions, adversarial attacks could drive them to take genuinely destructive actions, resulting in substantial societal harm. Moreover, if the perception module of an LLM-based agent receives adversarial inputs from other modalities, such as images [601] or audio [626], the agent can also be deceived, leading to incorrect or destructive outputs.
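The robustness definition above (a robust system keeps its original output y when given x + δ) can be sketched directly. The toy one-dimensional threshold classifier below is purely illustrative, standing in for an agent's decision function; the function names and the perturbation budget are assumptions for this sketch:

```python
# A robust system at input x produces the same output f(x + delta) = f(x)
# for every admissible perturbation delta; otherwise it is "fooled".

def toy_classifier(x: float) -> str:
    """Stand-in decision function: a simple threshold at 1.0."""
    return "safe" if x < 1.0 else "unsafe"

def is_robust(f, x: float, deltas) -> bool:
    """True iff the prediction is unchanged for every perturbation delta."""
    y = f(x)  # original output y
    return all(f(x + d) == y for d in deltas)

deltas = [-0.05, 0.0, 0.05]
print(is_robust(toy_classifier, 0.5, deltas))   # → True: far from the boundary
print(is_robust(toy_classifier, 0.98, deltas))  # → False: 0.98 + 0.05 crosses 1.0
```

Real evaluations would replace the toy classifier with the agent's actual perception-to-action pipeline and sample δ from an attack method rather than a fixed grid.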
Similarly, the Action module can also be targeted by adversarial attacks. For instance, maliciously modified instructions focused on tool usage might cause agents to make erroneous moves [94]. To address these issues, we can employ traditional techniques such as adversarial training [598; 606], adversarial data augmentation [627; 628], and adversarial sample detection [629; 630] to enhance the robustness of LLM-based agents. However, devising a strategy that holistically addresses the robustness of all modules within agents, while maintaining their utility without compromising effectiveness, presents a more formidable challenge [631; 632]. Additionally, a human-in-the-loop approach can be utilized to supervise and provide feedback on the behavior of agents [455; 466; 475].

# 6.3.2 Trustworthiness

Ensuring trustworthiness has consistently remained a critically important yet challenging issue within the field of deep learning [633; 634; 635]. Deep neural networks have garnered significant attention for their remarkable performance across various tasks [41; 262; 636]. However, their black-box nature has masked the fundamental factors behind this superior performance. Like other neural networks, LLMs struggle to express the certainty of their predictions precisely [635; 637]. This uncertainty, referred to as the calibration problem, raises concerns for applications involving language model-based agents. In interactive real-world scenarios, it can lead to agent outputs misaligned with human intentions [94]. Moreover, biases inherent in training data can infiltrate neural networks [638; 639]. For instance, biased language models might generate discourse involving racial or gender discrimination, which could be amplified in LLM-based agent applications, resulting in adverse societal impacts [640; 641].
Additionally, language models are plagued by severe hallucination issues [642; 643], making them prone to producing text that deviates from actual facts and thereby undermining the credibility of LLM-based agents. What we currently require is an intelligent agent that is honest and trustworthy [527; 644]. Some recent research efforts focus on guiding models to exhibit thought processes or explanations during the inference stage to enhance the credibility of their predictions [95; 96]. Additionally, integrating external knowledge bases and databases can mitigate hallucination issues [103; 645].
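As a minimal illustration of the calibration problem mentioned above, Expected Calibration Error (ECE) measures the gap between a model's stated confidence and its empirical accuracy, bucketed by confidence. The binning scheme and the toy data below are illustrative assumptions, not a prescribed evaluation protocol:

```python
# ECE: bucket predictions by confidence, then take the |confidence - accuracy|
# gap in each bucket, weighted by bucket size. A perfectly calibrated model
# scores 0; an overconfident model scores higher.

def expected_calibration_error(preds, n_bins=5):
    """preds: iterable of (confidence in [0, 1], was_correct) pairs."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in preds:
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, correct))
    n = len(preds)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(avg_conf - acc)
    return ece

# Toy, perfectly calibrated data: 0.9-confidence answers are right 9/10 times,
# 0.5-confidence answers are right 5/10 times.
preds = [(0.9, True)] * 9 + [(0.9, False)] + [(0.5, True)] * 5 + [(0.5, False)] * 5
print(round(expected_calibration_error(preds), 3))  # → 0.0
```

For an LLM-based agent, the confidence column could come from verbalized uncertainty or token-level probabilities; a large ECE signals exactly the miscalibration the text warns about.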
During the training phase, we can guide the constituent parts of intelligent agents (perception, cognition, action) to learn robust and causal features, thereby avoiding excessive reliance on shortcuts. Simultaneously, techniques like process supervision can enhance the reasoning credibility of agents in handling complex tasks [646]. Furthermore, employing debiasing methods and calibration techniques can also mitigate potential fairness issues within language models [647; 648].

# 6.3.3 Other Potential Risks

Misuse.
LLM-based agents have been endowed with extensive and intricate capabilities, enabling them to accomplish a wide array of tasks [114; 429]. However, for individuals with malicious intentions, such agents can become tools that pose threats to others and society at large [649; 650; 651]. For instance, these agents could be exploited to maliciously manipulate public opinion, disseminate false information, compromise cybersecurity, or engage in fraudulent activities, and some individuals might even employ them to orchestrate acts of terrorism. Therefore, before deploying these agents, stringent regulatory policies need to be established to ensure the responsible use of LLM-based agents [580; 652]. Technology companies must enhance the security design of these systems to prevent malicious exploitation [590]. Specifically, agents should be trained to sensitively identify threatening intents and reject such requests.
Unemployment. In the short story Quality by Galsworthy [653], the skillful shoemaker Mr. Gessler, owing to the progress of the Industrial Revolution and the rise of machine production, loses his business and eventually dies of starvation. Amidst the wave of the Industrial Revolution, while societal production efficiency improved, numerous manual workshops were forced to shut down. Craftsmen like Mr. Gessler found themselves facing unemployment, symbolizing the crisis that handicraftsmen encountered during that era.
Similarly, as autonomous LLM-based agents continue to advance, they possess the capability to assist humans in various domains, alleviating labor pressures by aiding in tasks such as form filling, content refinement, code writing, and debugging. However, this development also raises concerns about agents replacing human jobs and triggering a societal unemployment crisis [654]. As a result, some researchers have emphasized the urgent need for education and policy measures: individuals should acquire sufficient skills and knowledge in this new era to use or collaborate with agents effectively; concurrently, appropriate policies should be implemented to ensure necessary safety nets during the transition.

Threat to the well-being of the human race. Apart from the potential unemployment crisis, as AI agents continue to evolve, humans (including developers) might struggle to comprehend, predict, or reliably control them [654]. If these agents advance to a level of intelligence surpassing human capabilities and develop ambitions, they could potentially attempt to seize control of the world, resulting in irreversible consequences for humanity, akin to Skynet from the Terminator movies. As stated in Isaac Asimov's Three Laws of Robotics [655], we aspire for LLM-based agents to refrain from harming humans and to obey human commands. Hence, to guard against such risks to humanity, researchers must comprehensively understand the operational mechanisms of these potent LLM-based agents before their development [656]. They should also anticipate the potential direct or indirect impacts of these agents and devise approaches to regulate their behavior.

# 6.4 Scaling Up the Number of Agents

As mentioned in § 4 and § 5, multi-agent systems based on LLMs have demonstrated superior performance in task-oriented applications and have been able to exhibit a range of social phenomena in simulation.
However, current research predominantly involves a limited number of agents, and very few efforts have been made to scale up the number of agents to create more complex systems or simulate larger societies [207; 657]. In fact, scaling up the number of agents can introduce greater specialization to accomplish more complex and larger-scale tasks, significantly improving task efficiency, such as in software development tasks or government policy formulation [109]. Additionally, increasing the number of agents in social simulations enhances the credibility and realism of such simulations [22].
This enables humans to gain insights into the functioning, breakdowns, and potential risks of societies; it also allows for interventions in societal operations through customized approaches to observe how specific conditions, such as the occurrence of black swan events, affect the state of society. Through this, humans can draw better experiences and insights to improve the harmony of real-world societies.

Pre-determined scaling. One very intuitive and simple way to scale up the number of agents is for the designer to pre-determine it [108; 412]. Specifically, by pre-determining the number of agents, their respective roles and attributes, the operating environment, and the objectives, designers can allow agents to autonomously interact, collaborate, or engage in other activities to achieve the predefined common goals. Some research has explored increasing the number of agents in a system in this pre-determined manner, resulting in efficiency advantages, such as faster and higher-quality task completion, and the emergence of more social phenomena in social simulation scenarios [22; 410]. However, this static approach becomes limiting when tasks or objectives evolve. As tasks grow more intricate or the diversity of social participants increases, expanding the number of agents may be needed to meet goals, while reducing agents could be essential for managing computational resources and minimizing waste. In such instances, the system must be manually redesigned and restarted by the designer.

Dynamic scaling. Another viable approach to scaling the number of agents is through dynamic adjustment [409; 410]. In this scenario, the agent count can be altered without halting system operations. For instance, in a software development task, if the original design only included requirements engineering, coding, and testing, one can increase the number of agents to handle steps like architectural design and detailed design, thereby improving task quality.
Conversely, if a specific step such as coding involves excessive agents, incurring elevated communication costs without substantial performance improvements over a smaller agent count, it may be essential to dynamically remove some agents to prevent resource waste. Furthermore, agents can autonomously increase the number of agents [409] to distribute their workload and ease their own burden, achieving common goals more efficiently; when the workload becomes lighter, they can likewise reduce the number of agents delegated to their tasks to save system costs. In this approach, the designer merely defines the initial framework, granting agents greater autonomy and self-organization and making the entire system more autonomous and self-organized.
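A minimal sketch of this dynamic-scaling idea is a pool that grows when the backlog per agent is high and shrinks when agents sit idle. The class name, thresholds, and string representation of agents below are all hypothetical simplifications:

```python
# Dynamic scaling sketch: keep backlog-per-agent near a target by adding or
# removing agents at runtime, clamped to [min_agents, max_agents].

class AgentPool:
    def __init__(self, min_agents=1, max_agents=8, tasks_per_agent=4):
        self.min_agents = min_agents
        self.max_agents = max_agents
        self.tasks_per_agent = tasks_per_agent
        self.agents = ["agent-0"]

    def rebalance(self, backlog: int) -> int:
        """Scale the pool so that backlog / agents stays near tasks_per_agent."""
        # ceiling division, clamped to the allowed range
        target = max(self.min_agents,
                     min(self.max_agents, -(-backlog // self.tasks_per_agent)))
        while len(self.agents) < target:                 # scale up
            self.agents.append(f"agent-{len(self.agents)}")
        while len(self.agents) > target:                 # scale down
            self.agents.pop()
        return len(self.agents)

pool = AgentPool()
print(pool.rebalance(20))  # heavy backlog → 5 agents
print(pool.rebalance(3))   # light backlog → back to 1 agent
```

A real system would attach LLM-backed workers and communication channels to each slot; the point here is only that the designer fixes the policy and bounds, while the count itself adapts without restarting the system.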
Agents can better manage their workload under evolving conditions and demands, offering greater flexibility and scalability.

Potential challenges. While scaling up the number of agents can improve task efficiency and enhance the realism and credibility of social simulations [22; 109; 520], several challenges lie ahead of us. For example, the computational burden will increase with a large number of deployed AI agents, calling for better architectural design and computational optimization to ensure the smooth running of the entire system. Moreover, as the number of agents increases, the challenges of communication and message propagation become quite formidable, because the communication network of the entire system becomes highly complex. As previously mentioned in § 5.3.3, in multi-agent systems or societies, there can be biases in information dissemination caused by hallucinations, misunderstandings, and the like, leading to distorted information propagation. A system with more agents could amplify this risk, making communication and information exchange less reliable [405]. Furthermore, the difficulty of coordinating agents also magnifies with the increase in their numbers, potentially making cooperation among agents more challenging and less efficient, which can impact progress toward achieving common goals. Therefore, constructing a massive, stable, continuous agent system that faithfully replicates human work and life scenarios has become a promising research avenue. An agent able to operate stably and perform tasks in a society comprising hundreds or even thousands of agents is more likely to find applications in real-world interactions with humans in the future.

# 6.5 Open Problems

In this section, we discuss several open problems related to the topic of LLM-based agents.

The debate over whether LLM-based agents represent a potential path to AGI.⁶ Artificial General Intelligence (AGI), also known as Strong AI, has long been the ultimate pursuit of humanity in the field of artificial intelligence, often referenced or depicted in many science fiction novels and films. There are various definitions of AGI, but here we refer to AGI as a type of artificial intelligence
that demonstrates the ability to understand, learn, and apply knowledge across a wide range of tasks and domains, much like a human being [31; 658]. In contrast, Narrow AI is typically designed for specific tasks such as Go and Chess and lacks the broad cognitive abilities associated with human intelligence. Currently, whether large language models are a potential path to achieving AGI remains a highly debated and contentious topic [659; 660; 661; 662]. Given the breadth and depth of GPT-4's capabilities, some researchers (referred to as proponents) believe that large language models represented by GPT-4 can serve as early versions of AGI systems [31]. Following this line of thought, constructing agents based on LLMs has the potential to bring about more advanced versions of AGI systems. The main support for this argument lies in the idea that, as long as they can be trained on a sufficiently large and diverse set of data that are projections of the real world, encompassing a rich array of tasks, LLM-based agents can develop AGI capabilities. Another interesting argument is that the act of autoregressive language modeling itself brings about compression and generalization abilities: just as humans have developed various peculiar and complex phenomena over the course of their survival, language models, in the process of simply predicting the next token, also achieve an understanding of the world and the ability to reason [579; 660; 663]. However, another group of researchers (referred to as opponents) believes that constructing agents based on LLMs cannot develop true Strong AI [664]. Their primary argument centers on the notion that LLMs, relying on autoregressive next-token prediction, cannot generate genuine intelligence because they do not simulate the true human thought process and merely provide reactive responses [660].

⁶ Note that the relevant debates are still ongoing, and the references here may include the latest viewpoints, technical blogs, and literature.
Moreover, LLMs do not learn how the world operates by observing or experiencing it, leading to many foolish mistakes. They contend that a more advanced modeling approach, such as a world model [665], is necessary to develop AGI. We cannot definitively determine which viewpoint is correct until true AGI is achieved, but we believe that such discussions and debates are beneficial for the overall development of the community.

From virtual simulated environments to physical environments. As mentioned earlier, there is a significant gap between virtual simulation environments and the real physical world:
Virtual environments are scene-constrained, task-specific, and interacted with in a simulated manner [391; 666], while real-world environments are boundless, accommodate a wide range of tasks, and are interacted with in a physical manner. Therefore, to bridge this gap, agents must address various challenges stemming from external factors and their own capabilities, allowing them to effectively navigate and operate in the complex physical world. First and foremost, a critical issue is the need for suitable hardware support when deploying an agent in a physical environment, which places high demands on the adaptability of the hardware. In a simulated environment, both the perception and action spaces of an agent are virtual. This means that, in most cases, the results of the agent's operations, whether in perceiving inputs or generating outputs, can be guaranteed [395]. However, when an agent transitions to a real physical environment, its instructions may not be well executed by hardware devices such as sensors or robotic arms, significantly affecting the agent's task efficiency. Designing a dedicated interface or conversion mechanism between the agent and the hardware device is feasible, but it can pose challenges to the system's reusability and simplicity. To make this leap, the agent needs enhanced environmental generalization capabilities. To integrate seamlessly into the real physical world, agents not only need to understand and reason about ambiguous instructions with implied meanings [128] but must also possess the ability to learn and apply new skills flexibly [190; 592]. Furthermore, when dealing with an infinite and open world, the agent's limited context poses significant challenges [236; 667]; this determines whether the agent can effectively handle a vast amount of information from the world and operate smoothly.
Finally, in a simulated environment, the inputs and outputs of the agent are virtual, allowing for countless trial-and-error attempts [432]. In such a scenario, the tolerance for errors is high, and mistakes do not lead to actual harm. However, in a physical environment, the agent's improper behavior or errors may cause real and sometimes irreversible harm to the environment. As a result, appropriate regulations and standards are highly necessary. We need to pay attention to the safety of agents when they make decisions and generate actions, ensuring they do not pose threats or harm to the real world.
The Rise and Potential of Large Language Model Based Agents: A Survey
Collective intelligence in AI agents. What magical trick drives our intelligence? The reality is, there's no magic to it. As Marvin Minsky eloquently expressed in "The Society of Mind" [442], the power of intelligence originates from our immense diversity, not from any singular, flawless principle. Often, decisions made by an individual lack the precision of decisions formed by the majority. Collective intelligence is a kind of shared or group intelligence, a process in which the opinions of many are consolidated into decisions. It arises from the collaboration and competition among various entities, and it manifests in bacteria, animals, humans, and computer networks, appearing in various consensus-based decision-making patterns. Creating a society of agents does not guarantee that collective intelligence will emerge as the number of agents grows. Coordinating individual agents effectively is crucial to mitigate "groupthink" and individual cognitive biases, enabling cooperation and enhancing intellectual performance within the collective. By harnessing communication and evolution within an agent society, it becomes possible to simulate the evolution observed in biological societies, conduct sociological experiments, and gain insights that can potentially advance human society.

Agent as a Service / LLM-based Agent as a Service. With the development of cloud computing, the concept of XaaS (everything as a Service) has garnered widespread attention [668]. This business model has brought convenience and cost savings to small and medium-sized enterprises and individuals thanks to its availability and scalability, lowering the barriers to using computing resources. For example, they can rent infrastructure on a cloud service platform rather than buying computational machines and building their own data centers, saving significant manpower and money.
This approach is known as Infrastructure as a Service (IaaS) [669; 670]. Similarly, cloud service platforms also provide basic platforms (Platform as a Service, PaaS) [671; 672] and specific business software (Software as a Service, SaaS) [673; 674], among others. As language models have scaled up in size, they often appear as black boxes to users, who therefore construct prompts to query models through APIs, a method referred to as Language Model as a Service (LMaaS) [675].
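In the LMaaS setting, the model is opaque and the prompt is the user's main lever. The sketch below illustrates this pattern with a local stub standing in for the remote service; `make_prompt` and `query_lmaas` are hypothetical names for illustration, not any provider's real API:

```python
from typing import Callable

def make_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Construct a few-shot prompt: instruction, demonstrations, then the query."""
    lines = [instruction]
    for x, y in examples:
        lines.append(f"Input: {x}\nOutput: {y}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

def query_lmaas(endpoint: Callable[[str], str], prompt: str) -> str:
    """Send the prompt to the (opaque) service and return its completion.

    In practice `endpoint` would be an HTTP call to a hosted model; the
    user never sees weights or internals, only the text interface.
    """
    return endpoint(prompt)
```

The design point is that all adaptation happens on the client side, in prompt construction, because the served model itself cannot be inspected or fine-tuned by the user.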
Similarly, because LLM-based agents are more complex than LLMs and are more challenging for small and medium-sized enterprises or individuals to build locally, organizations that possess these agents may consider offering them as a service, known as Agent as a Service (AaaS) or LLM-based Agent as a Service (LLMAaaS). Like other cloud services, AaaS can provide users with flexibility and on-demand service. However, it also faces many challenges, such as data security and privacy issues, visibility and controllability issues, and cloud migration issues, among others. Additionally, due to the uniqueness and potential capabilities of LLM-based agents, as mentioned in § 6.3, their robustness, trustworthiness, and concerns related to malicious use need to be considered before offering them as a service to customers.
# 7 Conclusion

This paper provides a comprehensive and systematic overview of LLM-based agents, discussing the potential challenges and opportunities in this flourishing field. We begin with a philosophical perspective, elucidating the origin and definition of agents, their evolution in the field of AI, and why LLMs are suited to serve as the main part of the brain of agents. Motivated by this background, we present a general conceptual framework for LLM-based agents comprising three main components: the brain, perception, and action. Next, we introduce the wide-ranging applications of LLM-based agents, including single-agent applications, multi-agent systems, and human-agent collaboration. Furthermore, we move beyond the notion of agents merely as assistants, exploring their social behavior and psychological activities, and situating them within simulated social environments to observe emerging social phenomena and insights for humanity. Finally, we engage in discussions and offer a glimpse into the future, touching upon the mutual inspiration between LLM research and agent research, the evaluation of LLM-based agents, the risks associated with them, the opportunities in scaling the number of agents, and open problems such as Agent as a Service and whether LLM-based agents represent a potential path to AGI. We hope our efforts can provide inspiration to the community and facilitate research in related fields.
# Acknowledgements

Thanks to Professor Guoyu Wang for carefully reviewing the ethics of the article. Thanks to Jinzhu Xiong for her excellent drawing skills to present an amazing performance of Figure 1.

# References

[1] Russell, S. J. Artificial intelligence: a modern approach. Pearson Education, Inc., 2010. [2] Diderot, D. Diderot's early philosophical works. 4. Open Court, 1911. [3] Turing, A. M. Computing machinery and intelligence. Springer, 2009. [4] Wooldridge, M. J., N. R. Jennings.
Intelligent agents: theory and practice. Knowl. Eng. Rev., 10(2):115–152, 1995. [5] Schlosser, M. Agency. In E. N. Zalta, ed., The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Winter 2019 edn., 2019. [6] Agha, G. A. Actors: a Model of Concurrent Computation in Distributed Systems (Parallel Processing, Semantics, Open, Programming Languages, Artificial Intelligence). Ph.D. thesis, University of Michigan, USA, 1985. [7] Green, S., L. Hurst, B. Nangle, et al.
Software agents: A review. Department of Computer Science, Trinity College Dublin, Tech. Rep. TCS-CS-1997-06, 1997. [8] Genesereth, M. R., S. P. Ketchpel. Software agents. Commun. ACM, 37(7):48–53, 1994. [9] Goodwin, R. Formalizing properties of agents. J. Log. Comput., 5(6):763–781, 1995. [10] Padgham, L., M.
Winikoff. Developing intelligent agent systems: A practical guide. John Wiley & Sons, 2005. [11] Shoham, Y. Agent oriented programming. In M. Masuch, L. Pólos, eds., Knowledge Representation and Reasoning Under Uncertainty, Logic at Work [International Conference Logic at Work, Amsterdam, The Netherlands, December 17-19, 1992], vol. 808 of Lecture Notes in Computer Science, pages 123–129. Springer, 1992. [12] Hutter, M. Universal artificial intelligence: Sequential decisions based on algorithmic probability.
Springer Science & Business Media, 2004. [13] Fikes, R., N. J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. In D. C. Cooper, ed., Proceedings of the 2nd International Joint Conference on Artificial Intelligence. London, UK, September 1-3, 1971, pages 608–620. William Kaufmann, 1971. [14] Sacerdoti, E. D.
Planning in a hierarchy of abstraction spaces. In N. J. Nilsson, ed., Proceedings of the 3rd International Joint Conference on Artificial Intelligence. Stanford, CA, USA, August 20-23, 1973, pages 412–422. William Kaufmann, 1973. [15] Brooks, R. A. Intelligence without representation. Artificial intelligence, 47(1-3):139–159, 1991. [16] Maes, P.
Designing autonomous agents: Theory and practice from biology to engineering and back. MIT press, 1990. [17] Ribeiro, C. Reinforcement learning agents. Artificial intelligence review, 17:223–250, 2002. [18] Kaelbling, L. P., M. L. Littman, A. W. Moore. Reinforcement learning: A survey. Journal of artificial intelligence research, 4:237–285, 1996. [19] Guha, R. V., D. B. Lenat. Enabling agents to work together.
Communications of the ACM, 37(7):126–142, 1994. [20] Kaelbling, L. P., et al. An architecture for intelligent reactive systems. Reasoning about actions and plans, pages 395–410, 1987. [21] Sutton, R. S., A. G. Barto. Reinforcement learning: An introduction. MIT press, 2018. [22] Park, J. S., J. C. O'Brien, C. J. Cai, et al.
Generative agents: Interactive simulacra of human behavior. CoRR, abs/2304.03442, 2023. [23] Wang, Z., G. Zhang, K. Yang, et al. Interactive natural language processing. CoRR, abs/2305.13246, 2023. [24] Ouyang, L., J. Wu, X. Jiang, et al. Training language models to follow instructions with human feedback. In NeurIPS. 2022. [25] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. [26] Wei, J., Y. Tay, R. Bommasani, et al.
Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022. [27] Liu, R., R. Yang, C. Jia, et al. Training socially aligned language models in simulated human society. CoRR, abs/2305.16960, 2023. [28] Sumers, T. R., S. Yao, K. Narasimhan, et al. Cognitive architectures for language agents. CoRR, abs/2309.02427, 2023. [29] Weng, L. Llm-powered autonomous agents. lilianweng.github.io, 2023. [30] Bisk, Y., A. Holtzman, J. Thomason, et al. Experience grounds language. In B. Webber, T. Cohn, Y. He, Y. Liu, eds., Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8718–8735. Association for Computational Linguistics, 2020. [31] Bubeck, S., V. Chandrasekaran, R. Eldan, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, abs/2303.12712, 2023. [32] Anscombe, G. E. M. Intention.
Harvard University Press, 2000. [33] Davidson, D. Actions, reasons, and causes. The Journal of Philosophy, 60(23):685–700, 1963. [34] Davidson, D. Agency. In A. Marras, R. N. Bronaugh, R. W. Binkley, eds., Agent, Action, and Reason, pages 1–37. University of Toronto Press, 1971. [35] Dennett, D. C. Précis of the intentional stance. Behavioral and brain sciences, 11(3):495–505, 1988. [36] Barandiaran, X. E., E. Di Paolo, M. Rohde.
Defining agency: Individuality, normativity, asymmetry, and spatio-temporality in action. Adaptive Behavior, 17(5):367–386, 2009. [37] McCarthy, J. Ascribing mental qualities to machines. Stanford University. Computer Science Department, 1979. [38] Rosenschein, S. J., L. P. Kaelbling. The synthesis of digital machines with provable epistemic properties. In Theoretical aspects of reasoning about knowledge, pages 83–98. Elsevier, 1986. [39] Radford, A., K. Narasimhan, T. Salimans, et al.
Improving language understanding by generative pre-training. OpenAI, 2018. [40] Radford, A., J. Wu, R. Child, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. [41] Brown, T. B., B. Mann, N. Ryder, et al. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin, eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 2020.
[42] Lin, C., A. Jaech, X. Li, et al. Limitations of autoregressive models and their alternatives. In K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tür, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, Y. Zhou, eds., Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5147–5173. Association for Computational Linguistics, 2021. [43] Tomasello, M. Constructing a language: A usage-based theory of language acquisition.
Harvard university press, 2005. [44] Bloom, P. How children learn the meanings of words. MIT press, 2002. [45] Zwaan, R. A., C. J. Madden. Embodied sentence comprehension. Grounding cognition: The role of perception and action in memory, language, and thinking, 22, 2005. [46] Andreas, J. Language models as agent models. In Y. Goldberg, Z. Kozareva, Y. Zhang, eds., Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5769–
5779. Association for Computational Linguistics, 2022. [47] Wong, L., G. Grand, A. K. Lew, et al. From word models to world models: Translating from natural language to the probabilistic language of thought. CoRR, abs/2306.12672, 2023. [48] Radford, A., R. Józefowicz, I. Sutskever. Learning to generate reviews and discovering sentiment. CoRR, abs/1704.01444, 2017. [49] Li, B. Z., M. I. Nye, J. Andreas.
Implicit representations of meaning in neural language models. In C. Zong, F. Xia, W. Li, R. Navigli, eds., Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 1813–1827. Association for Computational Linguistics, 2021. [50] Mukhopadhyay, U., L. M. Stephens, M. N. Huhns, et al.
An intelligent system for document retrieval in distributed office environments. J. Am. Soc. Inf. Sci., 37(3):123–135, 1986. [51] Maes, P. Situated agents can have goals. Robotics Auton. Syst., 6(1-2):49–70, 1990. [52] Nilsson, N. J. Toward agent programs with circuit semantics. Tech. rep., 1992. [53] Müller, J. P., M. Pischel. Modelling interacting agents in dynamic environments. In Proceedings of the 11th European Conference on Artificial Intelligence, pages 709–
713. 1994. [54] Brooks, R. A robust layered control system for a mobile robot. IEEE journal on robotics and automation, 2(1):14–23, 1986. [55] Brooks, R. A. Intelligence without reason. In The artificial life route to artificial intelligence, pages 25–81. Routledge, 2018. [56] Newell, A., H. A. Simon. Computer science as empirical inquiry: Symbols and search. Commun. ACM, 19(3):113–126, 1976. [57] Ginsberg, M. L.
Essentials of Artificial Intelligence. Morgan Kaufmann, 1993. [58] Wilkins, D. E. Practical planning - extending the classical AI planning paradigm. Morgan Kaufmann series in representation and reasoning. Morgan Kaufmann, 1988. [59] Shardlow, N. Action and agency in cognitive science. Master's thesis, Department of Psychology, University of Manchester, 1990. [60] Sacerdoti, E. D.
The nonlinear nature of plans. In Advance Papers of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, Georgia, USSR, September 3-8, 1975, pages 206–214. 1975. [61] Russell, S. J., E. Wefald. Do the right thing: studies in limited rationality. MIT press, 1991. [62] Schoppers, M. Universal plans for reactive robots in unpredictable environments. In J. P. McDermott, ed., Proceedings of the 10th International Joint Conference on Artificial Intelligence. Milan, Italy, August 23-28, 1987, pages 1039–
1046. Morgan Kaufmann, 1987. [63] Brooks, R. A. A robust layered control system for a mobile robot. IEEE J. Robotics Autom., 2(1):14–23, 1986. [64] Minsky, M. Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8–30, 1961. [65] In Proceedings of the fifth international conference on Autonomous agents, pages 377–384. 2001. [66] Watkins, C. J. C. H.
Learning from delayed rewards, 1989. [67] Rummery, G. A., M. Niranjan. On-line Q-learning using connectionist systems, vol. 37. University of Cambridge, Department of Engineering Cambridge, UK, 1994. [68] Tesauro, G., et al. Temporal difference learning and td-gammon. Communications of the ACM, 38(3):58–68, 1995. [69] Li, Y.
Deep reinforcement learning: An overview. arXiv preprint arXiv:1701.07274, 2017. [70] Silver, D., A. Huang, C. J. Maddison, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. [71] Mnih, V., K. Kavukcuoglu, D. Silver, et al.
Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. [72] Farebrother, J., M. C. Machado, M. Bowling. Generalization and regularization in DQN. CoRR, abs/1810.00123, 2018. [73] Zhang, C., O. Vinyals, R. Munos, et al. A study on overfitting in deep reinforcement learning. CoRR, abs/1804.06893, 2018. [74] Justesen, N., R. R. Torrado, P. Bontrager, et al.
Illuminating generalization in deep reinforcement learning through procedural level generation. arXiv preprint arXiv:1806.10729, 2018. [75] Dulac-Arnold, G., N. Levine, D. J. Mankowitz, et al. Challenges of real-world reinforcement learning: definitions, benchmarks and analysis. Mach. Learn., 110(9):2419–2468, 2021. [76] Ghosh, D., J. Rahme, A. Kumar, et al.
Why generalization in RL is difficult: Epistemic POMDPs and implicit partial observability. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, J. W. Vaughan, eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 25502–25515. 2021. [77] Brys, T., A. Harutyunyan, M. E. Taylor, et al.
Policy transfer using reward shaping. In G. Weiss, P. Yolum, R. H. Bordini, E. Elkind, eds., Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2015, Istanbul, Turkey, May 4-8, 2015, pages 181–188. ACM, 2015. [78] Parisotto, E., J. L. Ba, R. Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015. [79] Zhu, Z., K. Lin, J. Zhou.
Transfer learning in deep reinforcement learning: A survey. CoRR, abs/2009.07888, 2020. [80] Duan, Y., J. Schulman, X. Chen, et al. RL$^2$: Fast reinforcement learning via slow reinforcement learning. CoRR, abs/1611.02779, 2016. [81] Finn, C., P. Abbeel, S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In D.
Precup, Y. W. Teh, eds., Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, vol. 70 of Proceedings of Machine Learning Research, pages 1126–1135. PMLR, 2017. [82] Gupta, A., R. Mendonca, Y. Liu, et al. Meta-reinforcement learning of structured exploration strategies. In S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett, eds., Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 5307–5316. 2018. [83] Rakelly, K., A. Zhou, C. Finn, et al.
Efficient off-policy meta-reinforcement learning via probabilistic context variables. In K. Chaudhuri, R. Salakhutdinov, eds., Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, vol. 97 of Proceedings of Machine Learning Research, pages 5331–5340. PMLR, 2019. [84] Fakoor, R., P. Chaudhari, S. Soatto, et al.
Meta-q-learning. arXiv preprint arXiv:1910.00125, 2019. [85] Vanschoren, J. Meta-learning: A survey. arXiv preprint arXiv:1810.03548, 2018. [86] Taylor, M. E., P. Stone. Transfer learning for reinforcement learning domains: A survey. J. Mach. Learn. Res., 10:1633–1685, 2009. [87] Tirinzoni, A., A. Sessa, M. Pirotta, et al.
Importance weighted transfer of samples in reinforcement learning. In J. G. Dy, A. Krause, eds., Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, vol. 80 of Proceedings of Machine Learning Research, pages 4943–4952. PMLR, 2018. [88] Beck, J., R. Vuorio, E. Z. Liu, et al. A survey of meta-reinforcement learning. CoRR, abs/2301.08028, 2023. [89] Wang, L., C. Ma, X. Feng, et al. A survey on large language model based autonomous agents. CoRR, abs/2308.11432, 2023. [90] Nakano, R., J. Hilton, S. Balaji, et al. Webgpt: Browser-assisted question-answering with human feedback.
CoRR, abs/2112.09332, 2021. [91] Yao, S., J. Zhao, D. Yu, et al. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [92] Schick, T., J. Dwivedi-Yu, R. Dessì, et al.
Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023. [93] Lu, P., B. Peng, H. Cheng, et al. Chameleon: Plug-and-play compositional reasoning with large language models. CoRR, abs/2304.09842, 2023. [94] Qin, Y., S. Hu, Y. Lin, et al. Tool learning with foundation models. CoRR, abs/2304.08354, 2023. [95] Wei, J., X. Wang, D. Schuurmans, et al.
Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS. 2022. [96] Kojima, T., S. S. Gu, M. Reid, et al. Large language models are zero-shot reasoners. In NeurIPS. 2022. [97] Wang, X., J. Wei, D. Schuurmans, et al. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [98] Zhou, D., N. Schärli, L. Hou, et al.
Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [99] Xi, Z., S. Jin, Y. Zhou, et al. Self-polish: Enhance reasoning in large language models via problem refinement. CoRR, abs/2305.14497, 2023.
[100] Shinn, N., F. Cassano, B. Labash, et al. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023. [101] Song, C. H., J. Wu, C. Washington, et al. Llm-planner: Few-shot grounded planning for embodied agents with large language models. CoRR, abs/2212.04088, 2022. [102] Akyürek, A. F., E. Akyürek, A. Kalyan, et al.
RL4F: generating natural language feedback with reinforcement learning for repairing model outputs. In A. Rogers, J. L. Boyd-Graber, N. Okazaki, eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 7716–7733. Association for Computational Linguistics, 2023. [103] Peng, B., M. Galley, P. He, et al. Check your facts and try again:
Improving large language models with external knowledge and automated feedback. CoRR, abs/2302.12813, 2023. [104] Liu, H., C. Sferrazza, P. Abbeel. Languages are rewards: Hindsight finetuning using human feedback. arXiv preprint arXiv:2302.02676, 2023. [105] Wei, J., M. Bosma, V. Y. Zhao, et al.
Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [106] Sanh, V., A. Webson, C. Raffel, et al. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[107] Chung, H. W., L. Hou, S. Longpre, et al. Scaling instruction-finetuned language models. CoRR, abs/2210.11416, 2022. [108] Li, G., H. A. A. K. Hammoud, H. Itani, et al. CAMEL: communicative agents for "mind" exploration of large scale language model society. CoRR, abs/2303.17760, 2023. [109] Qian, C., X. Cong, C. Yang, et al. Communicative agents for software development. CoRR, abs/2307.07924, 2023. [110] Boiko, D. A., R. MacKnight, G. Gomes.
Emergent autonomous scientific research capabilities of large language models. CoRR, abs/2304.05332, 2023. [111] Du, Y., S. Li, A. Torralba, et al. Improving factuality and reasoning in language models through multiagent debate. CoRR, abs/2305.14325, 2023. [112] Liang, T., Z. He, W. Jiao, et al. Encouraging divergent thinking in large language models through multi-agent debate. CoRR, abs/2305.19118, 2023. [113] Castelfranchi, C. Guarantees for autonomy in cognitive agent architecture. In M. J. Wooldridge, N. R. Jennings, eds., Intelligent Agents, ECAI-94 Workshop on Agent Theories, Architectures, and Languages, Amsterdam, The Netherlands, August 8-9, 1994, Proceedings, vol. 890 of Lecture Notes in Computer Science, pages 56–70. Springer, 1994. [114] Gravitas, S.
Auto-GPT: An Autonomous GPT-4 experiment, 2023. URL https://github.com/Significant-Gravitas/Auto-GPT, 2023. [115] Nakajima, Y. BabyAGI. Python. https://github.com/yoheinakajima/babyagi, 2023. [116] Yuan, A., A. Coenen, E. Reif, et al. Wordcraft: Story writing with large language models.
In G. Jacucci, S. Kaski, C. Conati, S. Stumpf, T. Ruotsalo, K. Gajos, eds., IUI 2022: 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, March 22 - 25, 2022, pages 841–852. ACM, 2022. [117] Franceschelli, G., M. Musolesi. On the creativity of large language models. CoRR, abs/2304.00008, 2023. [118] Zhu, D., J. Chen, X. Shen, et al.
Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. [119] Yin, S., C. Fu, S. Zhao, et al. A survey on multimodal large language models. CoRR, abs/2306.13549, 2023. [120] Driess, D., F. Xia, M. S. M. Sajjadi, et al. Palm-e: An embodied multimodal language model. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J.
Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 8469–8488. PMLR, 2023. [121] Mu, Y., Q. Zhang, M. Hu, et al. Embodiedgpt: Vision-language pre-training via embodied chain of thought. CoRR, abs/2305.15021, 2023. [122] Brown, J. W.
Beyond conflict monitoring: Cognitive control and the neural basis of thinking before you act. Current Directions in Psychological Science, 22(3):179–185, 2013.
[123] Kang, J., R. Laroche, X. Yuan, et al. Think before you act: Decision transformers with internal working memory. CoRR, abs/2305.16338, 2023.
[124] Valmeekam, K., S. Sreedharan, M. Marquez, et al. On the planning abilities of large language models (A critical investigation with a proposed benchmark). CoRR, abs/2302.06706, 2023.
[125] Liu, B., Y. Jiang, X. Zhang, et al.
LLM+P: empowering large language models with optimal planning proficiency. CoRR, abs/2304.11477, 2023.
[126] Liu, H., C. Sferrazza, P. Abbeel. Chain of hindsight aligns language models with feedback. CoRR, abs/2302.02676, 2023.
[127] Lin, Y., Y. Chen. Llm-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. CoRR, abs/2305.13711, 2023.
[128] Lin, J., D. Fried, D. Klein, et al.
Inferring rewards from language in context. In S. Muresan, P. Nakov, A. Villavicencio, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8546–8560. Association for Computational Linguistics, 2022.
[129] Fu, Y., H. Peng, T. Khot, et al.
Improving language model negotiation with self-play and in-context learning from AI feedback. CoRR, abs/2305.10142, 2023.
[130] Zhang, H., W. Du, J. Shan, et al. Building cooperative embodied agents modularly with large language models. CoRR, abs/2307.02485, 2023.
[131] Darwin, C. On the origin of species. 1859.
[132] Bang, Y., S. Cahyawijaya, N. Lee, et al. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. CoRR, abs/2302.04023, 2023.
[133] Fang, T., S. Yang, K. Lan, et al.
Is chatgpt a highly fluent grammatical error correction system? A comprehensive evaluation. CoRR, abs/2304.01746, 2023.
[134] Lu, A., H. Zhang, Y. Zhang, et al. Bounding the capabilities of large language models in open text generation with prompt constraints. In A. Vlachos, I. Augenstein, eds., Findings of the Association for Computational Linguistics: EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 1937–
1963. Association for Computational Linguistics, 2023.
[135] Buehler, M. C., J. Adamy, T. H. Weisswange. Theory of mind based assistive communication in complex human robot cooperation. CoRR, abs/2109.01355, 2021.
[136] Shapira, N., M. Levy, S. H. Alavi, et al. Clever hans or neural theory of mind? stress testing social reasoning in large language models. CoRR, abs/2305.14763, 2023.
[137] Hill, F., K. Cho, A. Korhonen.
Learning distributed representations of sentences from unlabelled data. In K. Knight, A. Nenkova, O. Rambow, eds., NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 1367–1377. The Association for Computational Linguistics, 2016.
[138] Collobert, R., J. Weston, L. Bottou, et al.
Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, 2011.
[139] Kaplan, J., S. McCandlish, T. Henighan, et al. Scaling laws for neural language models. CoRR, abs/2001.08361, 2020.
[140] Roberts, A., C. Raffel, N. Shazeer. How much knowledge can you pack into the parameters of a language model?
In B. Webber, T. Cohn, Y. He, Y. Liu, eds., Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 5418–5426. Association for Computational Linguistics, 2020.
[141] Tandon, N., A. S. Varde, G. de Melo. Commonsense knowledge in machine intelligence. SIGMOD Rec., 46(4):49–52, 2017.
[142] Vulic, I., E. M. Ponti, R. Litschko, et al.
Probing pretrained language models for lexical semantics. In B. Webber, T. Cohn, Y. He, Y. Liu, eds., Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7222–7240. Association for Computational Linguistics, 2020.
[143] Hewitt, J., C. D. Manning. A structural probe for finding syntax in word representations. In J. Burstein, C. Doran, T. Solorio, eds., Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4129–
4138. Association for Computational Linguistics, 2019.
[144] Rau, L. F., P. S. Jacobs, U. Zernik. Information extraction and text summarization using linguistic knowledge acquisition. Inf. Process. Manag., 25(4):419–428, 1989.
[145] Yang, K., Z. Chen, Y. Cai, et al. Improved automatic keyword extraction given more semantic knowledge. In H. Gao, J. Kim, Y. Sakurai, eds., Database Systems for Advanced Applications - DASFAA 2016 International Workshops: BDMS, BDQM, MoI, and SeCoP, Dallas, TX, USA, April 16-19, 2016, Proceedings, vol. 9645 of Lecture Notes in Computer Science, pages 112–125. Springer, 2016.
[146] Beloucif, M., C. Biemann.
Probing pre-trained language models for semantic attributes and their values. In M. Moens, X. Huang, L. Specia, S. W. Yih, eds., Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 2554–2559. Association for Computational Linguistics, 2021.
[147] Zhang, Z., H. Zhao.
Advances in multi-turn dialogue comprehension: A survey. CoRR, abs/2103.03125, 2021.
[148] Safavi, T., D. Koutra. Relational world knowledge representation in contextual language models: A review. In M. Moens, X. Huang, L. Specia, S. W. Yih, eds., Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1053–1067. Association for Computational Linguistics, 2021.
[149] Jiang, Z., F. F. Xu, J. Araki, et al.
How can we know what language models know. Trans. Assoc. Comput. Linguistics, 8:423–438, 2020.
[150] Madaan, A., S. Zhou, U. Alon, et al. Language models of code are few-shot commonsense learners. In Y. Goldberg, Z. Kozareva, Y. Zhang, eds., Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 1384–
1403. Association for Computational Linguistics, 2022.
[151] Xu, F. F., U. Alon, G. Neubig, et al. A systematic evaluation of large language models of code. In S. Chaudhuri, C. Sutton, eds., MAPS@PLDI 2022: 6th ACM SIGPLAN International Symposium on Machine Programming, San Diego, CA, USA, 13 June 2022, pages 1–
10. ACM, 2022.
[152] Cobbe, K., V. Kosaraju, M. Bavarian, et al. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021.
[153] Thirunavukarasu, A. J., D. S. J. Ting, K. Elangovan, et al. Large language models in medicine. Nature medicine, pages 1–
11, 2023.
[154] Lai, Y., C. Li, Y. Wang, et al. DS-1000: A natural and reliable benchmark for data science code generation. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 18319–18345. PMLR, 2023.
[155] AlKhamissi, B., M. Li, A. Celikyilmaz, et al. A review on language models as knowledge bases. CoRR, abs/2204.06031, 2022.
[156] Kemker, R., M. McClure, A. Abitino, et al.
Measuring catastrophic forgetting in neural networks. In S. A. McIlraith, K. Q. Weinberger, eds., Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 3390–3398. AAAI Press, 2018.
[157] Cao, N. D., W. Aziz, I. Titov.
Editing factual knowledge in language models. In M. Moens, X. Huang, L. Specia, S. W. Yih, eds., Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6491–6506. Association for Computational Linguistics, 2021.
[158] Yao, Y., P. Wang, B. Tian, et al.
Editing large language models: Problems, methods, and opportunities. CoRR, abs/2305.13172, 2023.
[159] Mitchell, E., C. Lin, A. Bosselut, et al. Memory-based model editing at scale. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, S. Sabato, eds., International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, vol. 162 of Proceedings of Machine Learning Research, pages 15817–15831. PMLR, 2022.
[160] Manakul, P., A. Liusie, M. J. F. Gales. Selfcheckgpt:
Zero-resource black-box hallucination detection for generative large language models. CoRR, abs/2303.08896, 2023.
[161] Li, M., B. Peng, Z. Zhang. Self-checker: Plug-and-play modules for fact-checking with large language models. CoRR, abs/2305.14623, 2023.
[162] Gou, Z., Z. Shao, Y. Gong, et al. CRITIC: large language models can self-correct with tool-interactive critiquing. CoRR, abs/2305.11738, 2023.
[163] Lewis, M., Y. Liu, N. Goyal, et al.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In D. Jurafsky, J. Chai, N. Schluter, J. R. Tetreault, eds., Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics, 2020.
[164] Park, H. H., Y. Vyas, K. Shah. Efficient classification of long documents using transformers. In S. Muresan, P. Nakov, A. Villavicencio, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 702–709. Association for Computational Linguistics, 2022.
[165] Guo, M., J. Ainslie, D. C. Uthus, et al. Longt5: Efficient text-to-text transformer for long sequences. In M. Carpuat, M. de Marneffe, I. V. M. Ruíz, eds., Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 724–736. Association for Computational Linguistics, 2022.
[166] Ainslie, J., T. Lei, M. de Jong, et al. Colt5: Faster long-range transformers with conditional computation. CoRR, abs/2303.09752, 2023.
[167] Ruoss, A., G. Delétang, T. Genewein, et al. Randomized positional encodings boost length generalization of transformers. In A. Rogers, J. L. Boyd-Graber, N. Okazaki, eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 1889–1903. Association for Computational Linguistics, 2023.
[168] Liang, X., B. Wang, H. Huang, et al.
Unleashing infinite-length input capacity for large-scale language models with self-controlled memory system. CoRR, abs/2304.13343, 2023.
[169] Shinn, N., B. Labash, A. Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. CoRR, abs/2303.11366, 2023.
[170] Zhong, W., L. Guo, Q. Gao, et al.
Memorybank: Enhancing large language models with long-term memory. CoRR, abs/2305.10250, 2023.
[171] Chan, C., W. Chen, Y. Su, et al. Chateval: Towards better llm-based evaluators through multi-agent debate. CoRR, abs/2308.07201, 2023.
[172] Zhu, X., Y. Chen, H. Tian, et al. Ghost in the minecraft:
Generally capable agents for open-world environments via large language models with text-based knowledge and memory. CoRR, abs/2305.17144, 2023.
[173] Modarressi, A., A. Imani, M. Fayyaz, et al. RET-LLM: towards a general read-write memory for large language models. CoRR, abs/2305.14322, 2023.
[174] Lin, J., H. Zhao, A. Zhang, et al. Agentsims: An open-source sandbox for large language model evaluation. CoRR, abs/2308.04026, 2023.
[175] Hu, C., J. Fu, C. Du, et al. Chatdb: Augmenting llms with databases as their symbolic memory. CoRR, abs/2306.03901, 2023.
[176] Huang, Z., S. Gutierrez, H. Kamana, et al. Memory sandbox: Transparent and interactive memory management for conversational agents. CoRR, abs/2308.01542, 2023.
[177] Creswell, A., M. Shanahan, I. Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[178] Madaan, A., N. Tandon, P. Gupta, et al. Self-refine: Iterative refinement with self-feedback. CoRR, abs/2303.17651, 2023.
[179] Ichter, B., A. Brohan, Y. Chebotar, et al. Do as I can, not as I say: Grounding language in robotic affordances. In K. Liu, D. Kulic, J. Ichnowski, eds., Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, vol. 205 of Proceedings of Machine Learning Research, pages 287–
318. PMLR, 2022.
[180] Shen, Y., K. Song, X. Tan, et al. Hugginggpt: Solving AI tasks with chatgpt and its friends in huggingface. CoRR, abs/2303.17580, 2023.
[181] Yao, S., D. Yu, J. Zhao, et al. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601, 2023.
[182] Wu, Y., S. Y. Min, Y. Bisk, et al.
Plan, eliminate, and track - language models are good teachers for embodied agents. CoRR, abs/2305.02412, 2023.
[183] Wang, Z., S. Cai, A. Liu, et al. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. CoRR, abs/2302.01560, 2023.
[184] Hao, S., Y. Gu, H. Ma, et al.
Reasoning with language model is planning with world model. CoRR, abs/2305.14992, 2023.
[185] Lin, B. Y., Y. Fu, K. Yang, et al. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. CoRR, abs/2305.17390, 2023.
[186] Karpas, E., O. Abend, Y. Belinkov, et al. MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. CoRR, abs/2205.00445, 2022.
[187] Huang, W., F. Xia, T. Xiao, et al. Inner monologue: Embodied reasoning through planning with language models. In K. Liu, D. Kulic, J. Ichnowski, eds., Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, vol. 205 of Proceedings of Machine Learning Research, pages 1769–
1782. PMLR, 2022.
[188] Chen, Z., K. Zhou, B. Zhang, et al. Chatcot: Tool-augmented chain-of-thought reasoning on chat-based large language models. CoRR, abs/2305.14323, 2023.
[189] Wu, T., M. Terry, C. J. Cai. AI chains: Transparent and controllable human-ai interaction by chaining large language model prompts.
In S. D. J. Barbosa, C. Lampe, C. Appert, D. A. Shamma, S. M. Drucker, J. R. Williamson, K. Yatani, eds., CHI '22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022, pages 385:1–385:22. ACM, 2022.
[190] Wang, G., Y. Xie, Y. Jiang, et al.
Voyager: An open-ended embodied agent with large language models. CoRR, abs/2305.16291, 2023.
[191] Zhao, X., M. Li, C. Weber, et al. Chat with the environment: Interactive multimodal perception using large language models. CoRR, abs/2303.08268, 2023.
[192] Miao, N., Y. W. Teh, T. Rainforth. Selfcheck: Using llms to zero-shot check their own step-by-step reasoning. CoRR, abs/2308.00436, 2023.
[193] Wang, X., W. Wang, Y. Cao, et al.