As mentioned before, researchers have introduced several properties to help describe and define agents in the field of AI. Here, we will delve into some key properties, elucidate their relevance to LLMs, and thereby expound on why LLMs are highly suited to serve as the main part of the brains of AI agents.

Autonomy. Autonomy means that an agent operates without direct intervention from humans or others and possesses a degree of control over its actions and internal states [4; 113]. This implies that an agent should not only possess the capability to follow explicit human instructions for task completion but also exhibit the capacity to initiate and execute actions independently. LLMs can demonstrate a form of autonomy through their ability to generate human-like text, engage in conversations, and perform various tasks without detailed step-by-step instructions [114; 115]. Moreover, they can dynamically adjust their outputs based on environmental input, reflecting a degree of adaptive autonomy [23; 27; 104]. Furthermore, they can showcase autonomy through exhibiting creativity, such as coming up with novel ideas, stories, or solutions that haven't been explicitly programmed into them [116; 117]. This implies a certain level of self-directed exploration and decision-making. Applications like Auto-GPT [114] exemplify the significant potential of LLMs in constructing autonomous agents: simply by providing them with a task and a set of available tools, they can autonomously formulate plans and execute them to achieve the ultimate goal.
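To make this plan-and-execute pattern concrete, the following is a minimal sketch of an Auto-GPT-style autonomy loop. It is illustrative only: `llm_complete`, the tool registry, and the reply format are hypothetical stand-ins, not Auto-GPT's actual internals.

```python
# Hypothetical sketch of an autonomous plan-and-execute loop.
from typing import Callable

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to any text-completion LLM API."""
    raise NotImplementedError

# Toy tool registry; real systems expose search, code execution, etc.
TOOLS: dict[str, Callable[[str], str]] = {
    "web_search": lambda query: f"<search results for {query!r}>",
}

def run_agent(task: str, max_steps: int = 10) -> str:
    history: list[tuple[str, str]] = []
    for _ in range(max_steps):
        prompt = (
            f"Task: {task}\n"
            f"Available tools: {', '.join(TOOLS)}\n"
            f"History of (action, observation): {history}\n"
            "Reply 'tool: <name> | <input>' or 'final: <answer>'."
        )
        reply = llm_complete(prompt)
        if reply.startswith("final:"):
            return reply.removeprefix("final:").strip()
        name, _, tool_input = reply.removeprefix("tool:").partition("|")
        observation = TOOLS[name.strip()](tool_input.strip())
        history.append((reply, observation))  # feedback for the next step
    return "step budget exhausted"
```

Given only a task description and the tool list, the loop lets the model itself decide the next action at every step, which is the sense in which such systems act autonomously.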
"2305.08982"
] |
Reactivity. Reactivity in an agent refers to its ability to respond rapidly to immediate changes and stimuli in its environment [9]. This implies that the agent can perceive alterations in its surroundings and promptly take appropriate actions. Traditionally, the perceptual space of language models has been confined to textual inputs, while the action space has been limited to textual outputs. However, researchers have demonstrated the potential to expand the perceptual space of LLMs using multimodal fusion techniques, enabling them to rapidly process visual and auditory information from the environment [25; 118; 119].
Similarly, it's also feasible to expand the action space of LLMs through embodiment techniques [120; 121] and tool usage [92; 94]. These advancements enable LLMs to effectively interact with the real-world physical environment and carry out tasks within it. One major challenge is that LLM-based agents, when performing non-textual actions, require an intermediate step of generating thoughts or formulating tool usage in textual form before eventually translating them into concrete actions. This intermediary process consumes time and reduces the response speed. However, it aligns closely with human behavioral patterns, where the principle of "think before you act" is observed [122; 123].

Pro-activeness. Pro-activeness denotes that agents don't merely react to their environments; they possess the capacity to display goal-oriented actions by proactively taking the initiative [9]. This property emphasizes that agents can reason, make plans, and take proactive measures in their actions to achieve specific goals or adapt to environmental changes. Although the next-token-prediction paradigm of LLMs may not intuitively involve intention or desire, research has shown that they can implicitly generate representations of these states and guide the model's inference process [46; 48; 49]. LLMs have demonstrated a strong capacity for generalized reasoning and planning. By prompting large language models with instructions like
"let's think step by step", we can elicit their reasoning abilities, such as logical and mathematical reasoning [95; 96; 97]. Similarly, large language models have shown the emergent ability of planning in forms of goal reformulation [99; 124], task decomposition [98; 125], and adjusting plans in response to environmental changes [100; 126].

Social ability. Social ability refers to an agent's capacity to interact with other agents, including humans, through some kind of agent-communication language [8]. Large language models exhibit strong natural language interaction abilities like understanding and generation [23; 127; 128]. Compared to structured languages or other communication protocols, such capability enables them to interact with other models or humans in an interpretable manner. This forms the cornerstone of social ability for LLM-based agents [22; 108]. Many researchers have demonstrated that LLM-based
agents can enhance task performance through social behaviors such as collaboration and competition [108; 111; 129; 130]. By inputting specific prompts, LLMs can also play different roles, thereby simulating the social division of labor in the real world [109]. Furthermore, when we place multiple agents with distinct identities into a society, emergent social phenomena can be observed [22].

# 3 The Birth of An Agent: Construction of LLM-based Agents
Figure 2: Conceptual framework of an LLM-based agent with three components: brain, perception, and action. Serving as the controller, the brain module undertakes basic tasks like memorizing, thinking, and decision-making. The perception module perceives and processes multimodal information from the external environment, and the action module carries out the execution using tools and influences the surroundings.
Here we give an example to illustrate the workflow: when a human asks whether it will rain, the perception module converts the instruction into an understandable representation for LLMs. Then the brain module begins to reason according to the current weather and the weather reports on the internet. Finally, the action module responds and hands the umbrella to the human. By repeating the above process, an agent can continuously get feedback and interact with the environment.

"Survival of the fittest" [131] shows that if an individual wants to survive in the external environment, they must adapt to the surroundings efficiently. This requires them to be cognitive, able to perceive and respond to changes in the outside world, which is consistent with the definition of
"agent" mentioned in §2.1. Inspired by this, we present a general conceptual framework of an LLM-based agent composed of three key parts: brain, perception, and action (see Figure 2). We first describe the structure and working mechanism of the brain, which is primarily composed of a large language model (§3.1). The brain is the core of an AI agent because it not only stores knowledge and memories but also undertakes indispensable functions like information processing and decision-making. It can present the process of reasoning and planning, and cope well with unseen tasks, exhibiting the intelligence of an agent. Next, we introduce the perception module (§3.2). Its core purpose is to broaden the agent's perception space from a text-only domain to a multimodal sphere that includes textual, auditory, and visual modalities. This extension equips the agent to grasp and utilize information from its surroundings more effectively. Finally, we present the action module, designed to expand the action space of an agent (§3.3). Specifically, we empower the agent with embodied action ability and tool-handling skills, enabling it to adeptly adapt to environmental changes, provide feedback, and even influence and mold the environment. The framework can be tailored to different application scenarios, i.e., not every specific component will be used in all studies. In general, agents operate in the following workflow: first, the perception
module, corresponding to human sensory systems such as the eyes and ears, perceives changes in the external environment and then converts multimodal information into an understandable representation for the agent. Subsequently, the brain module, serving as the control center, engages in information processing activities such as thinking, decision-making, and operations with storage including memory and knowledge. Finally, the action module, corresponding to human limbs, carries out the execution with the assistance of tools and leaves an impact on the surroundings. By repeating the above process, an agent can continuously get feedback and interact with the environment.

# 3.1 Brain

- Natural Language Interaction (§3.1.1)
  - High-quality generation: Bang et al. [132], Fang et al. [133], Lin et al. [127], Lu et al. [134], etc.
  - Deep understanding: Buehler et al. [135], Lin et al. [128], Shapira et al. [136], etc.
- Knowledge (§3.1.2)
  - Pretrain model: Hill et al. [137], Collobert et al. [138], Kaplan et al. [139], Roberts et al. [140], Tandon et al. [141], etc.
  - Knowledge in LLM-based agents:
    - Linguistic knowledge: Vulic et al. [142], Hewitt et al. [143], Rau et al. [144], Yang et al. [145], Beloucif et al. [146], Zhang et al. [147], Bang et al. [132], etc.
    - Commonsense knowledge: Safavi et al. [148], Jiang et al. [149], Madaan [150], etc.
    - Actionable knowledge: Xu et al. [151], Cobbe et al. [152], Thirunavukarasu et al. [153], Lai et al. [154], Madaan et al. [150], etc.
  - Potential issues of knowledge:
    - Edit wrong and outdated knowledge: AlKhamissi et al. [155], Kemker et al. [156], Cao et al. [157], Yao et al. [158], Mitchell et al. [159], etc.
    - Mitigate hallucination: Manakul et al. [160], Qin et al. [94], Li et al. [161], Gou et al. [162], etc.
- Memory (§3.1.3)
  - Memory capability:
    - Raising the length limit of Transformers: BART [163], Park et al. [164], LongT5 [165], CoLT5 [166], Ruoss et al. [167], etc.
    - Summarizing memory: Generative Agents [22], SCM [168], Reflexion [169], MemoryBank [170], ChatEval [171], etc.
    - Compressing memories with vectors or data structures: ChatDev [109], GITM [172], RET-LLM [173], AgentSims [174], ChatDB [175], etc.
  - Memory retrieval:
    - Automated retrieval: Generative Agents [22], MemoryBank [170], AgentSims [174], etc.
    - Interactive retrieval: Memory Sandbox [176], ChatDB [175], etc.
- Reasoning & Planning (§3.1.4)
  - Reasoning: CoT [95], Zero-shot-CoT [96], Self-Consistency [97], Self-Polish [99], Selection-Inference [177], Self-Refine [178], etc.
  - Planning:
    - Plan formulation: Least-to-Most [98], SayCan [179], HuggingGPT [180], ToT [181], PET [182], DEPS [183], RAP [184], SwiftSage [185], LLM+P [125], MRKL [186], etc.
    - Plan reflection: LLM-Planner [101], Inner Monologue [187], ReAct [91], ChatCoT [188], AI Chains [189], Voyager [190], Zhao et al. [191], SelfCheck [192], etc.
- Transferability & Generalization (§3.1.5)
  - Unseen task generalization: T0 [106], FLAN [105], InstructGPT [24], Chung et al. [107], etc.
  - In-context learning: GPT-3 [41], Wang et al. [193], Wang et al. [194], Dong et al. [195], etc.
  - Continual learning: Ke et al. [196], Wang et al. [197], Razdaibiedina et al. [198], Voyager [190], etc.

Figure 3: Typology of the brain module.
The human brain is a sophisticated structure composed of a vast number of interconnected neurons, capable of processing various information, generating diverse thoughts, controlling different behaviors, and even creating art and culture [199]. Much like humans, the brain serves as the central nucleus of an AI agent, primarily composed of a large language model.

Operating mechanism. To ensure effective communication, the ability to engage in natural language interaction (§3.1.1) is paramount. After receiving the information processed by the perception module, the brain module first turns to storage, retrieving from knowledge (§3.1.2) and recalling from memory (§3.1.3). These outcomes aid the agent in devising plans, reasoning, and making informed decisions (§3.1.4). Additionally, the brain module may memorize the agent's past observations, thoughts, and actions in the form of summaries, vectors, or other data structures. Meanwhile, it can also update knowledge such as common sense and domain knowledge for future use. The LLM-based agent may also adapt to unfamiliar scenarios with its inherent generalization and transfer ability (§3.1.5). In the subsequent sections, we delve into a detailed exploration of these extraordinary facets of the brain module, as depicted in Figure 3.

# 3.1.1 Natural Language Interaction

As a medium for communication, language contains a wealth of information. In addition to the intuitively expressed content, there may also be the speaker's beliefs, desires, and intentions hidden behind it [200]. Thanks to the powerful natural language understanding and generation capabilities inherent in LLMs [25; 201; 202; 203], agents can not only proficiently engage in basic interactive conversations [204; 205; 206] in multiple languages [132; 202] but also exhibit in-depth comprehension abilities, which allow humans to easily understand and interact with agents [207; 208]. Besides, LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans [130].

Multi-turn interactive conversation. The capability of multi-turn conversation is the foundation of effective and consistent communication.
As the core of the brain module, LLMs such as the GPT series [40; 41; 201], the LLaMA series [201; 209], and the T5 series [107; 210] can understand natural language and generate coherent, contextually relevant responses, which helps agents better comprehend and handle various problems [211]. However, even humans find it hard to communicate without confusion in one sitting, so multiple rounds of dialogue are necessary. Compared with traditional text-only reading comprehension tasks like SQuAD [212], multi-turn conversations (1) are interactive, involve multiple speakers, and lack continuity; and (2) may involve multiple topics, with possibly redundant dialogue information, making the text structure more complex [147]. In general, multi-turn conversation proceeds in three steps: (1) understanding the history of the natural language dialogue, (2) deciding what action to take, and (3) generating a natural language response. LLM-based agents are capable of continuously refining outputs using existing information to conduct multi-turn conversations and effectively achieve the ultimate goal [132; 147].

High-quality natural language generation. Recent LLMs show exceptional natural language generation capabilities, consistently producing high-quality text in multiple languages [132; 213]. The coherence [214] and grammatical accuracy [133] of LLM-generated content have shown steady enhancement, evolving progressively from GPT-3 [41] to InstructGPT [24], and culminating in GPT-4 [25]. See et al. [214] empirically affirm that these language models can "adapt to the style and content of the conditioning text" [215].
The results of Fang et al. [133] further suggest that ChatGPT excels in grammatical error detection, underscoring its powerful language capabilities. In conversational contexts, LLMs also perform well on key metrics of dialogue quality, including content, relevance, and appropriateness [127]. Importantly, they do not merely copy training data but display a certain degree of creativity, generating diverse texts that are equally novel or even more novel than benchmarks crafted by humans [216]. Meanwhile, human oversight remains effective through the use of controllable prompts, ensuring precise control over the content generated by these language models [134].
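Returning to the three-step multi-turn loop described above (understand the dialogue history, decide what to do, generate a response), a minimal sketch follows. The message format mirrors the common chat-API convention, and `chat_complete` is a hypothetical stand-in for any chat-style model call.

```python
# Minimal sketch of a multi-turn conversation loop. The growing `history`
# is the model's only view of the dialogue, so appending every turn is
# what keeps later responses consistent with earlier ones.

def chat_complete(messages: list[dict]) -> str:
    """Placeholder for a chat-style LLM API call."""
    raise NotImplementedError

def converse() -> None:
    history = [{"role": "system", "content": "You are a helpful agent."}]
    while True:
        user_turn = input("user> ")
        if user_turn == "quit":
            break
        history.append({"role": "user", "content": user_turn})    # step 1
        reply = chat_complete(history)                             # step 2
        history.append({"role": "assistant", "content": reply})   # step 3
        print("agent>", reply)
```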
"2305.08982"
] |
Intention and implication understanding. Although models trained on large-scale corpora are already intelligent enough to understand instructions, most are still incapable of emulating human dialogues or fully leveraging the information conveyed in language [217]. Understanding the implied meanings is essential for effective communication and cooperation with other intelligent agents [135], and enables one to interpret others' feedback. The emergence of LLMs highlights the potential of foundation models to understand human intentions, but vague instructions and other implications still pose a significant challenge for agents [94; 136]. For humans, grasping the implied meanings from a conversation comes naturally, whereas agents should formalize implied meanings into a reward function that allows them to choose the option in line with the speaker's preferences in unseen contexts [128]. One of the main ways of reward modeling is inferring rewards based on feedback, which is primarily presented in the form of comparisons [218] (possibly supplemented with reasons [219]) and unconstrained natural language [220]. Another way involves recovering rewards from descriptions, using the action space as a bridge [128]. Jeon et al. [221] suggest that human behavior can be mapped to a choice from an implicit set of options, which helps to interpret all the information in a single unifying formalism. By utilizing their understanding of context, agents can take highly personalized and accurate actions, tailored to specific requirements.

# 3.1.2 Knowledge

Due to the diversity of the real world, many NLP researchers attempt to utilize data at a larger scale. This data is usually unstructured and unlabeled [137; 138], yet it contains enormous knowledge that language models can learn. In theory, language models can learn more knowledge as they have more parameters [139], and it is possible for language models to learn and comprehend everything in natural language. Research [140] shows that language models trained on a large-scale dataset can encode a wide range of knowledge into their parameters and respond correctly to various types of queries. Furthermore, this knowledge can assist LLM-based agents in making informed decisions [222]. All of this knowledge can be roughly categorized into the following types:
- Linguistic knowledge. Linguistic knowledge [142; 143; 144] is represented as a system of constraints, a grammar, which defines all and only the possible sentences of the language. It includes morphology, syntax, semantics [145; 146], and pragmatics. Only the agents that acquire linguistic knowledge can comprehend sentences and engage in multi-turn conversations [147]. Moreover, these agents can acquire multilingual knowledge [132] by training on datasets that contain multiple languages, eliminating the need for extra translation models.
- Commonsense knowledge. Commonsense knowledge [148; 149; 150] refers to general world facts that are typically taught to most individuals at an early age. For example, people commonly know that medicine is used for curing diseases and umbrellas are used to protect against rain. Such information is usually not explicitly mentioned in the context. Therefore, models lacking the corresponding commonsense knowledge may fail to grasp or may misinterpret the intended meaning [141]. Similarly, agents without commonsense knowledge may make incorrect decisions, such as not bringing an umbrella when it rains heavily.
- Professional domain knowledge. Professional domain knowledge refers to knowledge associated with a specific domain, such as programming [151; 154; 150], mathematics [152], medicine [153], etc. It is essential for models to effectively solve problems within a particular domain [223]. For example, models designed to perform programming tasks need to possess programming knowledge, such as code formats. Similarly, models intended for diagnostic purposes should possess medical knowledge, such as the names of specific diseases and prescription drugs.

Although LLMs demonstrate excellent performance in acquiring, storing, and utilizing knowledge [155], there remain potential issues and unresolved problems. For example, the knowledge acquired by models during training could become outdated or even be incorrect from the start. A simple way to address this is retraining; however, it requires advanced data, extensive time, and computing resources. Even worse, it can lead to catastrophic forgetting [156]. Therefore, some researchers [157; 158; 159] try editing LLMs to locate and modify specific knowledge stored within the models. This involves unloading incorrect knowledge while simultaneously acquiring new knowledge. Their experiments show that this method can partially edit factual knowledge, but its underlying mechanism still requires further research. Besides, LLMs may generate content that conflicts with the source or factual information [224], a phenomenon often referred to as hallucination [225]. It is one of the critical reasons why LLMs cannot be widely used in factually rigorous tasks. To tackle this issue, some researchers [160] propose a metric to measure the level of hallucination and provide developers with an effective reference for evaluating the trustworthiness of LLM outputs. Moreover, some researchers [161; 162] enable LLMs to utilize external tools [94; 226; 227] to avoid incorrect
knowledge. Both of these methods can alleviate the impact of hallucination, but further exploration of more effective approaches is still needed.

# 3.1.3 Memory

In our framework, "memory" stores sequences of the agent's past observations, thoughts, and actions, which is akin to the definition presented by Nuxoll et al. [228]. Just as the human brain relies on memory systems to retrospectively harness prior experiences for strategy formulation and decision-making, agents require specific memory mechanisms to ensure their proficient handling of a sequence of consecutive tasks [229; 230; 231]. When faced with complex problems, memory mechanisms help the agent revisit and apply antecedent strategies effectively. Furthermore, these memory mechanisms enable individuals to adjust to unfamiliar environments by drawing on past experiences. With the expansion of interaction cycles in LLM-based agents, two primary challenges arise.
The first pertains to the sheer length of historical records. LLM-based agents process prior interactions in natural language format, appending historical records to each subsequent input. As these records expand, they might surpass the constraints of the Transformer architecture that most LLM-based agents rely on; when this occurs, the system might truncate some content. The second challenge is the difficulty of extracting relevant memories. As agents amass a vast array of historical observations and action sequences, they grapple with an escalating memory burden. This makes establishing connections between related topics increasingly challenging, potentially causing the agent to misalign its responses with the ongoing context.

Methods for better memory capability. Here we introduce several methods to enhance the memory of LLM-based agents.
- Raising the length limit of Transformers. The first method tries to address or mitigate the inherent sequence-length constraints. The Transformer architecture struggles with long sequences due to these intrinsic limits: as sequence length expands, computational demand grows quadratically because of the pairwise token calculations in the self-attention mechanism. Strategies to mitigate these length restrictions encompass text truncation [163; 164; 232], segmenting inputs [233; 234], and emphasizing key portions of text [235; 236; 237]. Some other works modify the attention mechanism to reduce complexity, thereby accommodating longer sequences [238; 165; 166; 167].
- Summarizing memory. The second strategy for amplifying memory efficiency hinges on the concept of memory summarization. This ensures agents effortlessly extract pivotal details from historical interactions. Various techniques have been proposed for summarizing memory. Using prompts, some methods succinctly integrate memories [168], while others emphasize reflective processes to create condensed memory representations [22; 239]. Hierarchical methods streamline dialogues into both daily snapshots and overarching summaries [170]. Notably, specific strategies translate environmental feedback into textual encapsulations, bolstering agents' contextual grasp for future engagements [169]. Moreover, in multi-agent environments, vital elements of agent communication are captured and retained [171].
- Compressing memories with vectors or data structures. By employing suitable data structures, intelligent agents boost memory retrieval efficiency, facilitating prompt responses to interactions. Notably, several methodologies lean on embedding vectors for memory sections, plans, or dialogue histories [109; 170; 172; 174]. Another approach translates sentences into triplet configurations [173], while some perceive memory as a unique data object, fostering varied interactions [176]. Furthermore, ChatDB [175] and DB-GPT [240] integrate LLM controllers with SQL databases, enabling data manipulation through SQL commands.

Methods for memory retrieval. When an agent interacts with its environment or users, it is imperative to retrieve the most appropriate content from its memory. This ensures that the agent accesses relevant and accurate information to execute specific actions. An important question arises: how can an agent select the most suitable memory? Typically, agents retrieve memories in an automated manner [170; 174]. A significant approach in automated retrieval considers three metrics: recency, relevance, and importance. The memory score is determined as a weighted combination of these metrics, with the highest-scoring memories prioritized in the model's context [22].
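As a concrete illustration of this weighted retrieval scheme, here is a short sketch in the spirit of Generative Agents [22]. The equal weights, the hourly exponential decay, the memory-record fields, and the `embed` placeholder are illustrative assumptions rather than the paper's exact settings.

```python
# Sketch of automated memory retrieval scored by recency, relevance, and
# importance. Each memory is a dict with "embedding", "last_access"
# (a UNIX timestamp), and "importance" (a model-rated value in 1..10).
import math
import time

def embed(text: str) -> list[float]:
    raise NotImplementedError  # any sentence-embedding model

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def score(memory: dict, query_vec: list[float], now: float,
          w=(1.0, 1.0, 1.0), decay=0.995) -> float:
    recency = decay ** ((now - memory["last_access"]) / 3600)  # per hour
    relevance = cosine(memory["embedding"], query_vec)
    importance = memory["importance"] / 10
    return w[0] * recency + w[1] * relevance + w[2] * importance

def retrieve(memories: list[dict], query: str, k: int = 5) -> list[dict]:
    q, now = embed(query), time.time()
    return sorted(memories, key=lambda m: score(m, q, now), reverse=True)[:k]
```

The top-k memories returned here are what gets placed into the model's context for the next step.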
"2305.08982"
] |
Some research introduces the concept of interactive memory objects, which are representations of dialogue history that can be moved, edited, deleted, or combined through summarization. Users can view and manipulate these objects, influencing how the agent perceives the dialogue [176]. Similarly, other studies allow memory operations like deletion based on specific commands provided by users [175]. Such methods ensure that the memory content aligns closely with user expectations.

# 3.1.4 Reasoning and Planning

Reasoning. Reasoning, underpinned by evidence and logic, is fundamental to human intellectual endeavors, serving as the cornerstone for problem-solving, decision-making, and critical analysis [241; 242; 243]. Deductive, inductive, and abductive reasoning are the primary forms commonly recognized in intellectual endeavor [244]. For LLM-based agents, as for humans, reasoning capacity is crucial for solving complex tasks [25]. Differing academic views exist regarding the reasoning capabilities of large language models: some argue that language models acquire reasoning during pre-training or fine-tuning [244], while others believe it emerges after the models reach a certain scale [26; 245]. Specifically, the representative Chain-of-Thought (CoT) method [95; 96] has been demonstrated to elicit the reasoning capacities of large language models by guiding them to generate rationales before outputting the answer. Other strategies have also been proposed to enhance the performance of LLMs, such as self-consistency [97], self-polish [99], self-refine [178], and selection-inference [177], among others. Some studies suggest that the effectiveness of step-by-step reasoning can be attributed to the local statistical structure of training data, with locally structured dependencies between variables yielding higher data efficiency than training on all variables [246].

Planning. Planning is a key strategy humans employ when facing complex challenges. For humans, planning helps organize thoughts, set objectives, and determine the steps to achieve those objectives [247; 248; 249]. Just as with humans, the ability to plan is crucial for agents, and central to this planning module is the capacity for reasoning [250; 251; 252]. This offers a structured thought process for agents based on LLMs. Through reasoning, agents deconstruct complex tasks into more manageable sub-tasks, devising appropriate plans for each [253; 254].
Moreover, as tasks progress, agents can employ introspection to modify their plans, ensuring they align better with real-world circumstances, leading to adaptive and successful task execution. Typically, planning comprises two stages: plan formulation and plan reflection; a minimal code sketch of the full cycle follows this list.

- Plan formulation. During the process of plan formulation, agents generally decompose an overarching task into numerous sub-tasks, and various approaches have been proposed for this phase. Notably, some works advocate for LLM-based agents to decompose problems comprehensively in one go, formulating a complete plan at once and then executing it sequentially [98; 179; 255; 256]. In contrast, other studies like the CoT series employ an adaptive strategy, planning and addressing sub-tasks one at a time, which allows more fluidity in handling intricate tasks in their entirety [95; 96; 257]. Additionally, some methods emphasize hierarchical planning [182; 185], while others underscore a strategy in which final plans are derived from reasoning steps structured in a tree-like format; the latter approach argues that agents should assess all possible paths before finalizing a plan [97; 181; 184; 258]. While LLM-based agents demonstrate a broad scope of general knowledge, they can occasionally face challenges when tasked with situations that require expert knowledge. Enhancing these agents by integrating them with planners for specific domains has been shown to yield better performance [125; 130; 186; 259].
- Plan reflection. Upon formulating a plan, it's imperative to reflect upon and evaluate its merits. LLM-based agents leverage internal feedback mechanisms, often drawing insights from pre-existing models, to hone and enhance their strategies and planning approaches [169; 178; 188; 192]. To better align with human values and preferences, agents actively engage with humans, allowing them to rectify misunderstandings and assimilate this tailored feedback into their planning methodology [108; 189; 190]. Furthermore, they can draw feedback from tangible or virtual surroundings, such as cues from task accomplishments or post-action observations, aiding them in revising and refining their plans [91; 101; 187; 191; 260].
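The formulate-then-reflect cycle described in the two items above can be sketched in a few lines. Everything here is illustrative: the prompts, `llm_complete`, and `execute` are hypothetical placeholders, not any specific system's interface.

```python
# Sketch of the plan formulation / plan reflection cycle: decompose the
# task into sub-tasks, execute them, and fold the resulting feedback
# into a revised plan until the model judges the plan adequate.

def llm_complete(prompt: str) -> str:
    raise NotImplementedError

def execute(step: str) -> str:
    raise NotImplementedError  # environment or tool call returning feedback

def plan_and_reflect(task: str, max_rounds: int = 3) -> list[str]:
    # Plan formulation: one sub-task per output line.
    plan = llm_complete(
        f"Decompose the task into numbered sub-tasks, one per line.\nTask: {task}"
    ).splitlines()
    for _ in range(max_rounds):
        feedback = [execute(step) for step in plan]
        # Plan reflection: revise in light of environmental feedback.
        revised = llm_complete(
            f"Task: {task}\nPlan: {plan}\nFeedback: {feedback}\n"
            "Revise the plan (one sub-task per line), or reply DONE."
        )
        if revised.strip() == "DONE":
            break
        plan = revised.splitlines()
    return plan
```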
"2305.08982"
] |
# 3.1.5 Transferability and Generalization

Intelligence shouldn't be limited to a specific domain or task; rather, it should encompass a broad range of cognitive skills and abilities [31]. The remarkable nature of the human brain is largely attributed to its high degree of plasticity and adaptability: it can continuously adjust its structure and function in response to external stimuli and internal needs, thereby adapting to different environments and tasks. In recent years, plenty of research has indicated that models pre-trained on large-scale corpora can learn universal language representations [36; 261; 262]. Leveraging the power of pre-trained models, with only a small amount of data for fine-tuning, LLMs can demonstrate excellent performance on downstream tasks [263]. There is no need to train new models from scratch, which saves substantial computational resources. However, through such task-specific fine-tuning, the models lack versatility and struggle to generalize to other tasks. Instead of merely functioning as static knowledge repositories, LLM-based agents exhibit dynamic learning ability, which enables them to adapt to novel tasks swiftly and robustly [24; 105; 106].

Unseen task generalization. Studies show that instruction-tuned LLMs exhibit zero-shot generalization without the need for task-specific fine-tuning [24; 25; 105; 106; 107]. With the expansion of model size and corpus size, LLMs gradually exhibit remarkable emergent abilities on unfamiliar tasks [132]. Specifically, LLMs can complete new tasks they did not encounter during training by following instructions based on their own understanding. One implementation is multi-task learning: for example, FLAN [105] fine-tunes language models on a collection of tasks described via instructions, and T0 [106] introduces a unified framework that converts every language problem into a text-to-text format. Despite being purely a language model, GPT-4 [25] demonstrates remarkable capabilities in a variety of domains and tasks, including abstraction, comprehension, vision, coding, mathematics, medicine, law, understanding of human motives and emotions, and others [31]. Notably, the choice of prompt is critical for appropriate predictions, and training directly on prompts can improve models' robustness in generalizing to unseen tasks [264].
Promisingly, such generalization capability can be further enhanced by scaling up both the model size and the quantity or diversity of training instructions [94; 265].

In-context learning. Numerous studies indicate that LLMs can perform a variety of complex tasks through in-context learning (ICL), which refers to the models' ability to learn from a few examples provided in the context [195]. Few-shot in-context learning enhances the predictive performance of language models by concatenating the original input with several complete examples as prompts to enrich the context [41]. The key idea of ICL is learning from analogy, which is similar to the human learning process [266]. Furthermore, since the prompts are written in natural language, the interaction is interpretable and easy to modify, making it easier to incorporate human knowledge into LLMs [95; 267]. Unlike supervised learning, ICL doesn't involve fine-tuning or parameter updates, which can greatly reduce the computational cost of adapting models to new tasks. Beyond text, researchers have also explored potential ICL capabilities in different multimodal tasks [193; 194; 268; 269; 270; 271], making it possible for agents to be applied to large-scale real-world tasks.

Continual learning. Recent studies [190; 272] have highlighted the potential of LLMs' planning capabilities in facilitating continual learning [196; 197] for agents, which involves the continuous acquisition and updating of skills. A core challenge in continual learning is catastrophic forgetting [273]: as a model learns new tasks, it tends to lose knowledge from previous tasks. Numerous efforts have been devoted to addressing this challenge; they can be broadly separated into three groups: introducing regularization terms that reference the previous model [274; 275; 276; 277], approximating prior data distributions [278; 279; 280], and designing architectures with task-adaptive parameters [281; 198]. LLM-based agents have emerged as a novel paradigm, leveraging the planning capabilities of LLMs to combine existing skills and address more intricate challenges. Voyager [190] attempts to solve progressively harder tasks proposed by the automatic curriculum devised by GPT-4 [25]. By synthesizing complex skills from simpler programs, the agent not only rapidly enhances its capabilities but also effectively counters catastrophic forgetting.
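The prompt-concatenation idea behind few-shot ICL is simple enough to show directly; the input/output template below is one common illustrative format, not a prescribed standard.

```python
# Sketch of few-shot in-context learning: complete examples are
# concatenated ahead of the new input, and the model infers the task by
# analogy, with no fine-tuning or parameter updates.

def build_icl_prompt(examples: list[tuple[str, str]], query: str) -> str:
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

prompt = build_icl_prompt(
    [("The movie was wonderful.", "positive"),
     ("I want my money back.", "negative")],
    "A masterpiece of boredom.",
)
# The assembled prompt is sent to the LLM as-is; the expected
# completion here would be "negative".
```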
"2305.08982"
] |
- Textual Input (§3.2.1)
- Visual Input (§3.2.2)
  - Visual encoder: ViT [282], VQVAE [283], MobileViT [284], MLP-Mixer [285], etc.
  - Learnable architecture:
    - Query based: Kosmos [286], BLIP-2 [287], InstructBLIP [288], MultiModal-GPT [289], Flamingo [290], etc.
    - Projection based: PandaGPT [291], LLaVA [292], MiniGPT-4 [118], etc.
- Auditory Input (§3.2.3)
  - Cascading manner: AudioGPT [293], HuggingGPT [180], etc.
  - Transfer visual method: AST [294], HuBERT [295], X-LLM [296], Video-LLaMA [297], etc.
- Other Input (§3.2.4): InternGPT [298], etc.

Figure 4: Typology of the perception module.

# 3.2 Perception

Both humans and animals rely on sensory organs like eyes and ears to gather information from their surroundings. These perceptual inputs are converted into neural signals and sent to the brain for processing [299; 300], allowing us to perceive and interact with the world.
Similarly, it's crucial for LLM-based agents to receive information from various sources and modalities. This expanded perceptual space helps agents better understand their environment, make informed decisions, and excel in a broader range of tasks, making it an essential development direction. The agent hands this information to the brain module for processing through the perception module. In this section, we introduce how to enable LLM-based agents to acquire multimodal perception capabilities, encompassing textual (§3.2.1), visual (§3.2.2), and auditory inputs (§3.2.3). We also consider other potential input forms (§3.2.4), such as tactile feedback, gestures, and 3D maps, to enrich the agent's perception domain and enhance its versatility. The typology diagram for LLM-based agent perception is depicted in Figure 4.

# 3.2.1 Textual Input

Text is a way to carry data, information, and knowledge, making text communication one of the most important ways humans interact with the world. An LLM-based agent already has the fundamental ability to communicate with humans through textual input and output [114].
In a user's textual input, aside from the explicit content, there are also beliefs, desires, and intentions hidden behind it. Understanding implied meanings is crucial for the agent to grasp the potential and underlying intentions of human users, thereby enhancing its communication efficiency and quality with users. However, as discussed in §3.1.1, understanding implied meanings within textual input remains challenging for current LLM-based agents. For example, some works [128; 218; 219; 220] employ reinforcement learning to perceive implied meanings, modeling feedback to derive rewards. This helps deduce the speaker's preferences, leading to more personalized and accurate responses from the agent. Additionally, as the agent is designed for use in complex real-world situations, it will inevitably encounter many entirely new tasks. Understanding text instructions for unknown tasks places higher demands on the agent's text perception abilities. As described in §3.1.5, an LLM that has undergone instruction tuning [105] can exhibit remarkable zero-shot instruction understanding and generalization abilities, eliminating the need for task-specific fine-tuning.

# 3.2.2 Visual Input

Although LLMs exhibit outstanding performance in language comprehension [25; 301] and multi-turn conversation [302], they inherently lack visual perception and can only understand discrete textual content. Visual input usually contains a wealth of information about the world, including properties of objects, spatial relationships, scene layouts, and more in the agent's surroundings. Therefore, integrating visual information with data from other modalities can offer the agent a broader context and a more precise understanding [120], deepening the agent's perception of the environment.

To help the agent understand the information contained within images, a straightforward approach is to generate corresponding text descriptions for image inputs, known as image captioning [303; 304; 305; 306; 307]. Captions can be directly linked with standard text instructions and fed into the agent. This approach is highly interpretable and doesn't require additional training for caption generation, which can save a significant amount of computational resources. However, caption
generation is a low-bandwidth method [120; 308], and it may lose a lot of potential information during the conversion process. Furthermore, the agent's focus on images may introduce biases.

Inspired by the excellent performance of transformers [309] in natural language processing, researchers have extended their use to the field of computer vision. Representative works like ViT and VQVAE [282; 283; 284; 285; 310] have successfully encoded visual information using transformers. Researchers first divide an image into fixed-size patches and then treat these patches, after linear projection, as input tokens for Transformers [292]. In the end, by calculating self-attention between tokens, they are able to integrate information across the entire image, resulting in a highly effective way to perceive visual content. Therefore, some works [311] try to combine the image encoder and LLM directly and train the entire model in an end-to-end way. While the agent can achieve remarkable visual perception abilities this way, it comes at the cost of substantial computational resources.

Extensively pre-trained visual encoders and LLMs can greatly enhance the agent's visual perception and language expression abilities [286; 312]. Freezing one or both of them during training is a widely adopted paradigm that achieves a balance between training resources and model performance [287]. However, LLMs cannot directly understand the output of a visual encoder, so it's necessary to convert the image encoding into embeddings that LLMs can comprehend; in other words, the visual encoder must be aligned with the LLM. This usually requires adding an extra learnable interface layer between them. For example, BLIP-2 [287] and InstructBLIP [288] use the Querying Transformer (Q-Former) module as an intermediate layer between the visual encoder and the LLM [288]. Q-Former is a transformer that employs learnable query vectors [289], giving it the capability to extract language-informative visual representations. It can provide the most valuable information to the LLM, reducing the agent's burden of learning visual-language alignment and thereby mitigating the issue of catastrophic forgetting. At the same time, some researchers adopt a computationally efficient method, using a single projection layer to achieve visual-text alignment and reducing the need for training additional parameters [118; 291; 312].
Moreover, the projection layer can effectively integrate with the learnable interface to adapt the dimensions of its outputs, making them compatible with LLMs [296; 297; 313; 314].

Video input consists of a series of continuous image frames. As a result, the methods used by agents to perceive images [287] may be applicable to the realm of videos, allowing the agent to perceive video inputs well too. Compared to image information, video information adds a temporal dimension, so the agent's understanding of the relationships between different frames in time is crucial for perceiving video information. Some works like Flamingo [290; 315] ensure temporal order when understanding videos using a mask mechanism, which restricts the agent's view to visual information only from frames that occurred earlier in time when it perceives a specific frame in the video.

# 3.2.3 Auditory Input

Undoubtedly, auditory information is a crucial component of world information. When an agent possesses auditory capabilities, it can improve its awareness of interactive content, the surrounding environment, and even potential dangers. Indeed, there are numerous well-established models and approaches [293; 316; 317] for processing audio as a standalone modality; however, these models often excel only at specific tasks. Given the excellent tool-using capabilities of LLMs (discussed in detail in §3.3), a very intuitive idea is that the agent can use LLMs as control hubs, invoking existing toolsets or model repositories in a cascading manner to perceive audio information. For instance, AudioGPT [293] makes full use of the capabilities of models like FastSpeech [317], GenerSpeech [316], Whisper [316], and others [318; 319; 320; 321; 322], which have achieved excellent results in tasks such as text-to-speech, style transfer, and speech recognition.

An audio spectrogram provides an intuitive representation of the frequency spectrum of an audio signal as it changes over time [323]. A segment of audio data over a period of time can be abstracted into a finite-length audio spectrogram. Since an audio spectrogram has a 2D representation that can be visualized as a flat image, some research efforts [294; 295] aim to migrate perceptual methods from the visual domain to audio. AST (Audio Spectrogram Transformer) [294] employs a Transformer architecture similar to ViT to process audio spectrogram images; by segmenting the spectrogram into patches, it achieves effective encoding of audio information. Moreover, some researchers [296; 297] have drawn inspiration from the idea of freezing encoders to reduce training
time and computational costs. They align audio encoding with data encoding from other modalities by adding the same learnable interface layer.

# 3.2.4 Other Input

As mentioned earlier, many studies have looked into perception units for text, vision, and audio. However, LLM-based agents might be equipped with richer perception modules. In the future, they could perceive and understand diverse modalities in the real world, much like humans. For example, agents could have unique touch and smell organs, allowing them to gather more detailed information when interacting with objects. At the same time, agents could also have a clear sense of the temperature, humidity, and brightness in their surroundings, enabling them to take environment-aware actions. Moreover, by efficiently integrating basic perceptual abilities like vision, text, and light sensitivity, agents can develop various user-friendly perception modules for humans.
InternGPT [298] introduces pointing instructions: users can interact with specific, hard-to-describe portions of an image by using gestures or moving the cursor to select, drag, or draw. The addition of pointing instructions helps provide more precise specifications for individual text instructions. Building upon this, agents have the potential to perceive more complex user inputs, for example, technologies such as eye-tracking in AR/VR devices, body motion capture, and even brainwave signals in brain-computer interaction.

Finally, a human-like LLM-based agent should possess awareness of a broader overall environment. At present, numerous mature and widely adopted hardware devices can assist agents in accomplishing this. Lidar [324] can create 3D point-cloud maps to help agents detect and identify objects in their surroundings. GPS [325] can provide accurate location coordinates and can be integrated with map data. Inertial Measurement Units (IMUs) can measure and record the three-dimensional motion of objects, offering details about an object's speed and direction. However, these sensory data are complex and cannot be directly understood by LLM-based agents. Exploring how agents can perceive more comprehensive input is a promising direction for the future.
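Before moving on to action, it is worth making concrete the patch-based encoding shared by the ViT-style visual encoders of §3.2.2 and the AST audio encoder of §3.2.3: a 2D input (image or spectrogram) is cut into fixed-size patches, and each patch is linearly projected into a token a Transformer can attend over. The sketch below uses NumPy with illustrative sizes and a random projection standing in for learned weights.

```python
# Shared core of ViT (images) and AST (audio spectrograms): cut a 2D
# input into fixed-size patches and linearly project each one into a
# token. Sizes and the random projection are illustrative only.
import numpy as np

def patchify(x: np.ndarray, patch: int = 16) -> np.ndarray:
    """(H, W, C) array -> (num_patches, patch*patch*C) rows."""
    h, w, c = x.shape
    x = x[: h - h % patch, : w - w % patch]  # drop ragged edges
    rows = x.reshape(h // patch, patch, w // patch, patch, c)
    return rows.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3))        # image or spectrogram
W_proj = rng.standard_normal((16 * 16 * 3, 768))  # learned in practice
tokens = patchify(image) @ W_proj                 # (196, 768) tokens
```

Self-attention over these tokens is what lets the encoder integrate information across the whole image or spectrogram; a frozen encoder's tokens are then mapped into the LLM's embedding space through the interface layers discussed above.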
"2305.08982"
] |
# 3.3 Action

- Textual Output (§3.3.1)
- Tools (§3.3.2)
  - Learning tools: Toolformer [92], TALM [326], InstructGPT [24], Clarebout et al. [327], etc.
  - Using tools: WebGPT [90], OpenAGI [211], Visual ChatGPT [328], SayCan [179], etc.
  - Making tools: LATM [329], CREATOR [330], SELF-DEBUGGING [331], etc.
- Embodied Action (§3.3.3)
  - LLM-based embodied actions: SayCan [179], EmbodiedGPT [121], InstructRL [332], Lynch et al. [333], Voyager [190], AlphaBlock [334], DEPS [183], LM-Nav [335], NavGPT [336], etc.
  - Prospects for embodied action: MineDojo [337], Kanitscheider et al. [338], DECKARD [339], Sumers et al. [340], etc.

Figure 5: Typology of the action module.
After humans perceive their environment, their brains integrate, analyze, and reason with the perceived information and make decisions. Subsequently, they employ their nervous systems to control their bodies, enabling adaptive or creative actions in response to the environment, such as engaging in conversation, evading obstacles, or starting a fire. When an agent possesses a brain-like structure with capabilities of knowledge, memory, reasoning, planning, and generalization, as well as multimodal perception, it is also expected to possess a diverse range of actions akin to humans to respond to its surrounding environment. In the construction of the agent, the action module receives action sequences sent by the brain module and carries out actions to interact with the environment. As Figure 5 shows, this section begins with textual output (§3.3.1), which is the inherent capability of LLM-based agents. Next, we discuss the tool-using capability of LLM-based agents (§3.3.2), which has proved effective in enhancing their versatility and expertise. Finally, we discuss equipping the LLM-based agent with embodied action to facilitate its grounding in the physical world (§3.3.3).
# 3.3.1 Textual Output

As discussed in §3.1.1, the rise and development of Transformer-based generative large language models have endowed LLM-based agents with inherent language generation capabilities [132; 213]. The text they generate excels in various aspects such as fluency, relevance, diversity, and controllability [127; 214; 134; 216]. Consequently, LLM-based agents can be exceptionally strong language generators.

# 3.3.2 Tool Using

Tools are extensions of the capabilities of tool users. When faced with complex tasks, humans employ tools to simplify task-solving and enhance efficiency, freeing time and resources. Similarly, agents have the potential to accomplish complex tasks more efficiently and with higher quality if they also learn to use tools [94]. LLM-based agents have limitations in some aspects, and the use of tools can strengthen their capabilities. First, although LLM-based agents have a strong knowledge base and expertise, they don't have the ability to memorize every piece of training data [341; 342]. They may also fail to steer toward correct knowledge due to the influence of contextual prompts [226], or even generate hallucinated knowledge [208]. Coupled with the lack of corpora, training data, and tuning for specific fields and scenarios, agents' expertise is also limited when specializing in specific domains [343]. Specialized tools enable LLMs to enhance their expertise, adapt domain knowledge, and better suit domain-specific needs in a pluggable form. Furthermore, the decision-making process of LLM-based agents lacks transparency, making them less trustworthy in high-risk domains such as healthcare and finance [344]. Additionally, LLMs are susceptible to adversarial attacks [345], and their robustness against slight input modifications is inadequate. In contrast, agents that accomplish tasks with the assistance of tools exhibit stronger interpretability and robustness. The execution process of tools can reflect the agents' approach to addressing complex requirements and enhances the credibility of their decisions. Moreover, because tools are specifically designed for their respective usage scenarios, agents utilizing such tools are better equipped to handle slight input modifications and are more resilient against adversarial attacks [94]. LLM-based agents not only require the use of tools but are also well-suited for tool integration.
2309.07864#55 | The Rise and Potential of Large Language Model Based Agents: A Survey | Leveraging the rich world knowledge accumulated through the pre-training process and CoT prompting, LLMs have demonstrated remarkable reasoning and decision-making abilities in complex interactive environments [97], which help agents break down and address tasks specified by users in an appropriate way. What's more, LLMs show significant potential in intent understanding and other aspects [25; 201; 202; 203]. When agents are combined with tools, the threshold for tool utilization can be lowered, thereby fully unleashing the creative potential of human users [94]. Understanding tools. A prerequisite for an agent to use tools effectively is a comprehensive understanding of the tools' application scenarios and invocation methods. Without this understanding, the agent's use of tools becomes untrustworthy and fails to genuinely enhance its capabilities. Leveraging the powerful zero-shot and few-shot learning abilities of LLMs [40; 41], agents can acquire knowledge about tools by utilizing zero-shot prompts that describe tool functionalities and parameters, or few-shot prompts that provide demonstrations of specific tool usage scenarios and corresponding methods [92; 326]. These learning approaches parallel human methods of learning by consulting tool manuals or observing others using tools [94]. A single tool is often insufficient when facing complex tasks. Therefore, agents should first decompose the complex task into subtasks in an appropriate manner, and their understanding of tools plays a significant role in task decomposition. Learning to use tools. The methods for agents to learn to utilize tools primarily consist of learning from demonstrations and learning from feedback. This involves mimicking the behavior of human experts [346; 347; 348], as well as understanding the consequences of their actions and making adjustments based on feedback received from both the environment and humans [24; 349; 350]. Environmental feedback encompasses result feedback on whether actions have successfully completed the task and intermediate feedback that captures changes in the environmental state caused by actions; human feedback comprises explicit evaluations and implicit behaviors, such as clicking on links [94]. | 2309.07864#54 | 2309.07864#56 | 2309.07864 | [
"2305.08982"
] |
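To make the two learning routes above concrete, the sketch below prompts a generic LLM with a tool description plus few-shot demonstrations (tool understanding), then folds result and intermediate feedback from execution back into the next attempt (learning from feedback). The `llm` callable, the `weather` tool card, and the retry protocol are illustrative assumptions, not the API of any cited system.

```python
from typing import Callable, Tuple

LLM = Callable[[str], str]                 # assumed stand-in for any completion backend
Tool = Callable[[str], Tuple[bool, str]]   # executes a call, returns (succeeded, observation)

TOOL_CARD = (
    'Tool: weather(city: str) -> str  # returns the current weather report\n'
    'Demonstrations:\n'
    'User: Do I need an umbrella in Paris?\nCall: weather("Paris")\n'
    'User: How hot is Cairo right now?\nCall: weather("Cairo")\n'
)

def use_tool_with_feedback(llm: LLM, tool: Tool, query: str, max_tries: int = 3) -> str:
    transcript = f"{TOOL_CARD}\nUser: {query}\n"
    observation = ""
    for _ in range(max_tries):
        call = llm(transcript + "Call:").strip()      # zero-/few-shot tool invocation
        ok, observation = tool(call)                  # environment executes the call
        transcript += f"Call: {call}\nResult: {observation}\n"
        if ok:                                        # result feedback: task completed
            break
        transcript += "That call failed; adjust it and try again.\n"  # intermediate feedback
    return observation
```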
2309.07864#56 | The Rise and Potential of Large Language Model Based Agents: A Survey | If an agent rigidly applies tools without adaptability, it cannot achieve acceptable performance in all scenarios. Agents need to generalize their tool usage skills learned in specific contexts to more general situations, such as transferring a model trained on Yahoo search to Google search. To accomplish this, it's necessary for agents to grasp the common principles or patterns in tool usage strategies, which can potentially be achieved through meta-tool learning [327]. Enhancing the agent's understanding of relationships between simple and complex tools, such as how complex tools are built on simpler ones, can contribute to the agents' capacity to generalize tool usage. This allows agents to effectively discern nuances across various application scenarios and transfer previously learned knowledge to new tools [94]. Curriculum learning [351], which allows an agent to start from simple tools and progressively learn complex ones, aligns with these requirements. Moreover, benefiting from their understanding of user intent and their reasoning and planning abilities, agents can better design methods of tool utilization and collaboration and then provide higher-quality outcomes. Making tools for self-sufficiency. Existing tools are often designed for human convenience, which might not be optimal for agents. To make agents use tools better, there's a need for tools specifically designed for agents. These tools should be more modular and have input-output formats that are more suitable for agents. If instructions and demonstrations are provided, LLM-based agents also possess the ability to create tools by generating executable programs or integrating existing tools into more powerful ones [94; 330; 352], and they can learn to perform self-debugging [331]. Moreover, if the agent that serves as a tool maker successfully creates a tool, it can produce packages containing the tool's code and demonstrations for other agents in a multi-agent system, in addition to using the tool itself [329]. Speculatively, in the future, agents might become self-sufficient and exhibit a high degree of autonomy in terms of tools. Tools can expand the action space of LLM-based agents. With the help of tools, agents can utilize various external resources such as web applications and other LMs during the reasoning and planning phase [92]. This process can provide information with high expertise, reliability, diversity, and quality for LLM-based agents, facilitating their decision-making and action. | 2309.07864#55 | 2309.07864#57 | 2309.07864 | [
"2305.08982"
] |
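A minimal sketch of the tool-making idea above: the agent generates an executable Python tool from an instruction plus demonstrations, runs the demonstrations as tests, and feeds error tracebacks back to the model for self-debugging. The `llm` callable and the synthesize-test-fix protocol are illustrative assumptions rather than the method of any cited work.

```python
import traceback
from typing import Callable, List, Tuple

LLM = Callable[[str], str]

def make_tool(llm: LLM, spec: str, tests: List[Tuple[tuple, object]],
              fn_name: str = "tool", max_rounds: int = 3) -> str:
    """Generate a function satisfying `spec`, self-debugging against `tests`."""
    prompt = f"Write a Python function `{fn_name}` that {spec}. Return only code."
    code = llm(prompt)
    for _ in range(max_rounds):
        try:
            namespace: dict = {}
            exec(code, namespace)                 # materialize the candidate tool
            for args, expected in tests:          # demonstrations act as unit tests
                assert namespace[fn_name](*args) == expected
            return code                           # verified: ready to package and share
        except Exception:
            err = traceback.format_exc(limit=1)
            code = llm(f"{prompt}\nPrevious attempt:\n{code}\nIt failed with:\n{err}\nFix it.")
    raise RuntimeError("could not synthesize a working tool")
```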
2309.07864#57 | The Rise and Potential of Large Language Model Based Agents: A Survey | For example, search-based tools can improve the scope and quality of the knowledge accessible to the agents with the aid of external databases, knowledge graphs, and web pages, while domain-specific tools can enhance an agent's expertise in the corresponding field [211; 353]. Some researchers have already developed LLM-based controllers that generate SQL statements to query databases, or that convert user queries into search requests and use search engines to obtain the desired results [90; 175]. What's more, LLM-based agents can use scientific tools to execute tasks like organic synthesis in chemistry, or interface with Python interpreters to enhance their performance on intricate mathematical computation tasks [354; 355]. For multi-agent systems, communication tools (e.g., emails) may serve as a means for agents to interact with each other under strict security constraints, facilitating their collaboration and showing autonomy and flexibility [94]. Although the tools mentioned before enhance the capabilities of agents, the medium of interaction with the environment remains text-based. However, tools are designed to expand the functionality of language models, and their outputs are not limited to text. Tools for non-textual output can diversify the modalities of agent actions, thereby expanding the application scenarios of LLM-based agents. For example, image processing and generation can be accomplished by an agent that draws on a visual model [328]. In aerospace engineering, agents are being explored for modeling physics and solving complex differential equations [356]; in the field of robotics, agents are required to plan physical operations and control robot execution [179]; and so on. Agents that are capable of dynamically interacting with the environment or the world through tools, or in a multimodal manner, can be referred to as digitally embodied [94]. The embodiment of agents has been a central focus of embodied learning research. We will discuss agents' embodied actions in depth in §3.3.3. # 3.3.3 Embodied Action In the pursuit of Artificial General Intelligence (AGI), the embodied agent is considered a pivotal paradigm as it strives to integrate model intelligence with the physical world. The Embodiment hypothesis [357] draws inspiration from the human intelligence development process, positing that an agent's intelligence arises from continuous interaction and feedback with the environment rather than relying solely on well-curated textbooks. Similarly, unlike traditional deep learning models that learn explicit capabilities from internet datasets to solve domain problems, people anticipate that LLM-based agents' | 2309.07864#56 | 2309.07864#58 | 2309.07864 | [
"2305.08982"
] |
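As a concrete instance of the database-querying controllers mentioned above, the sketch below lets an LLM translate a user question into a single SELECT statement and executes it against SQLite. The schema, the `llm` callable, and the crude statement guard are illustrative assumptions, not the design of any cited controller.

```python
import sqlite3
from typing import Callable, List, Tuple

LLM = Callable[[str], str]

SCHEMA = "CREATE TABLE sales (region TEXT, amount REAL, year INTEGER);"

def answer_with_sql(llm: LLM, db: sqlite3.Connection, question: str) -> List[Tuple]:
    prompt = (
        f"Schema:\n{SCHEMA}\n"
        f"Write one SQLite SELECT statement that answers: {question}\nSQL:"
    )
    sql = llm(prompt).strip().rstrip(";")
    if not sql.lower().startswith("select"):      # refuse writes and multi-statements
        raise ValueError(f"expected a SELECT statement, got: {sql!r}")
    return db.execute(sql).fetchall()
```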
2309.07864#58 | The Rise and Potential of Large Language Model Based Agents: A Survey | behaviors will no longer be limited to pure text output or calling exact tools to perform particular domain tasks [358]. Instead, they should be capable of actively perceiving, comprehending, and interacting with physical environments, making decisions, and generating specific behaviors to modify the environment based on the LLM's extensive internal knowledge. We collectively term these embodied actions, which endow agents with the ability to interact with and comprehend the world in a manner closely resembling human behavior. The potential of LLM-based agents for embodied actions. Before the widespread rise of LLMs, researchers tended to use methods like reinforcement learning to explore the embodied actions of agents. Despite the extensive success of RL-based embodiment [359; 360; 361], it has certain limitations. In brief, RL algorithms face limitations in terms of data efficiency, generalization, and complex problem reasoning due to challenges in modeling the dynamic and often ambiguous real environment, or due to their heavy reliance on precise reward signal representations [362]. Recent studies have indicated that leveraging the rich internal knowledge acquired during the pre-training of LLMs can effectively alleviate these issues [120; 187; 258; 363]. | 2309.07864#57 | 2309.07864#59 | 2309.07864 | [
"2305.08982"
] |
2309.07864#59 | The Rise and Potential of Large Language Model Based Agents: A Survey | • Cost efficiency. Some on-policy algorithms struggle with sample efficiency as they require fresh data for policy updates, while gathering enough embodied data for high-performance training is costly and noisy. This constraint is also found in some end-to-end models [364; 365; 366]. By leveraging the intrinsic knowledge from LLMs, agents like PaLM-E [120] are jointly trained on robotic data and general visual-language data to achieve significant transfer ability in embodied tasks, while also showcasing that geometric input representations can improve training data efficiency. | 2309.07864#58 | 2309.07864#60 | 2309.07864 | [
"2305.08982"
] |
2309.07864#60 | The Rise and Potential of Large Language Model Based Agents: A Survey | • Embodied action generalization. As discussed in §3.1.5, an agent's competence should extend beyond specific tasks. When faced with intricate, uncharted real-world environments, it's imperative that the agent exhibits dynamic learning and generalization capabilities. However, the majority of RL algorithms are designed to train and evaluate relevant skills for specific tasks [101; 367; 368; 369]. In contrast, fine-tuned on data of diverse forms and rich task types, LLMs have showcased remarkable cross-task generalization capabilities [370; 371]. For instance, PaLM-E exhibits surprising zero-shot or one-shot generalization capabilities to new objects or novel combinations of existing objects [120]. Further, language proficiency represents a distinctive advantage of LLM-based agents, serving both as a means to interact with the environment and as a medium for transferring foundational skills to new tasks [372]. SayCan [179] decomposes task instructions presented in prompts using LLMs into corresponding skill commands, but in partially observable environments, limited prior skills often do not achieve satisfactory performance [101]. To address this, Voyager [190] introduces the skill library component to continuously collect novel self-verified skills, which allows for the agent's lifelong learning capabilities. | 2309.07864#59 | 2309.07864#61 | 2309.07864 | [
"2305.08982"
] |
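The SayCan-style selection mentioned above can be sketched as follows: a language-model score for how useful each pre-defined skill is to the instruction is multiplied by an affordance score for how feasible the skill is in the current state, and the product picks the next skill. Both scoring callables and the skill set are simplified stand-ins, not the original models.

```python
from typing import Callable, List

SKILLS: List[str] = ["pick up the sponge", "go to the sink", "wipe the table"]

def select_skill(
    usefulness: Callable[[str, str], float],   # LLM-derived: P(skill advances instruction)
    feasibility: Callable[[str, str], float],  # affordance model: P(skill succeeds | state)
    instruction: str,
    state: str,
) -> str:
    # "say" (useful) x "can" (doable): pick the skill maximizing the combined score
    return max(SKILLS, key=lambda s: usefulness(instruction, s) * feasibility(state, s))
```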
2309.07864#61 | The Rise and Potential of Large Language Model Based Agents: A Survey | • Embodied action planning. Planning is a pivotal strategy employed by both humans and LLM-based agents in response to complex problems. Before LLMs exhibited remarkable reasoning abilities, researchers introduced Hierarchical Reinforcement Learning (HRL) methods, in which the high-level policy constrains sub-goals for the low-level policy and the low-level policy produces appropriate action signals [373; 374; 375]. Similar to the role of high-level policies, LLMs with emerging reasoning abilities [26] can be seamlessly applied to complex tasks in a zero-shot or few-shot manner [95; 97; 98; 99]. In addition, external feedback from the environment can further enhance LLM-based agents' planning performance. Based on the current environmental feedback, some works [101; 91; 100; 376] dynamically generate, maintain, and adjust high-level action plans in order to minimize dependency on prior knowledge in partially observable environments, thereby grounding the plan. Feedback can also come from models or humans, usually referred to as critics, which assess task completion based on the current state and task prompts [25; 190]. Embodied actions for LLM-based agents. Depending on the agents' level of autonomy in a task or the complexity of actions, there are several fundamental LLM-based embodied actions, primarily including observation, manipulation, and navigation. • Observation. Observation is the primary means by which the agent acquires environmental information and updates states, playing a crucial role in enhancing the efficiency of subsequent embodied actions. As mentioned in §3.2, observation by embodied agents primarily occurs in environments with various inputs, which are ultimately converged into a multimodal signal. A common approach uses a pre-trained Vision Transformer (ViT) as the alignment module for text and visual information, with special tokens marking the positions of multimodal data [120; 332; 121]. Soundspaces [377] proposes the identification of physical spatial geometric | 2309.07864#60 | 2309.07864#62 | 2309.07864 | [
"2305.08982"
] |
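A minimal sketch of the feedback-grounded planning loop described above: a high-level plan is regenerated whenever the environment reports that a step failed, keeping the plan grounded in the observed state. The `llm` and `env` callables and the "failed" convention are illustrative assumptions, not the protocol of the cited works.

```python
from typing import Callable, List

LLM = Callable[[str], str]
Env = Callable[[str], str]   # executes one step, returns an observation string

def make_plan(llm: LLM, task: str, history: str) -> List[str]:
    raw = llm(f"Task: {task}\nHistory:\n{history}\nList the remaining steps, one per line:")
    return [line.strip() for line in raw.splitlines() if line.strip()]

def run_grounded(llm: LLM, env: Env, task: str, budget: int = 20) -> str:
    history, steps = "", make_plan(llm, task, "")
    for _ in range(budget):
        if not steps:
            return "task complete"
        observation = env(steps[0])
        history += f"{steps[0]} -> {observation}\n"
        if "failed" in observation:          # environmental feedback triggers replanning
            steps = make_plan(llm, task, history)
        else:
            steps = steps[1:]
    return "step budget exhausted"
```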
2309.07864#62 | The Rise and Potential of Large Language Model Based Agents: A Survey | elements guided by reverberant audio input, enhancing the agent's observations with a more comprehensive perspective [375]. In recent times, even more research takes audio as a modality for embodied observation. Apart from the widely employed cascading paradigm [293; 378; 316], encoding audio information in a ViT-like manner further enhances the seamless integration of audio with other modalities of inputs [294]. The agent's observation of the environment can also be derived from real-time linguistic instructions from humans, while human feedback helps the agent acquire detailed information that may not be readily obtained or parsed [333; 190]. • Manipulation. In general, manipulation tasks for embodied agents include object rearrangements, tabletop manipulation, and mobile manipulation [23; 120]. The typical case entails the agent executing a sequence of tasks in the kitchen, which includes retrieving items from drawers and handing them to the user, as well as cleaning the tabletop [179]. Besides precise observation, this involves combining a series of subgoals by leveraging LLMs. Consequently, maintaining synchronization between the agent's state and the subgoals is of significance. DEPS [183] utilizes an LLM-based interactive planning approach to maintain this consistency and to correct errors based on the agent's feedback throughout the multi-step, long-haul reasoning process. In contrast to these, AlphaBlock [334] focuses on more challenging manipulation tasks (e.g., making a smiley face using building blocks), which require the agent to have a more grounded understanding of the instructions. Unlike the existing open-loop paradigm, AlphaBlock constructs a dataset comprising 35 complex high-level tasks, along with corresponding multi-step planning and observation pairs, and then fine-tunes a multimodal model to enhance its comprehension of high-level cognitive instructions. | 2309.07864#61 | 2309.07864#63 | 2309.07864 | [
"2305.08982"
] |
2309.07864#63 | The Rise and Potential of Large Language Model Based Agents: A Survey | • Navigation. Navigation permits agents to dynamically alter their positions within the environment, which often involves multi-angle and multi-object observations, as well as long-horizon manipulations based on current exploration [23]. Before navigation, it is essential for embodied agents to establish prior internal maps of the external environment, which are typically in the form of a topological map, semantic map, or occupancy map [358]. For example, LM-Nav [335] utilizes the VNM [379] to create an internal topological map. It further leverages the LLM and VLM for decomposing input commands and analyzing the environment to find the optimal path. Furthermore, some works [380; 381] highlight the importance of spatial representation to achieve precise localization of spatial targets, rather than conventional point- or object-centric navigation actions, by leveraging the pre-trained VLM model to combine visual features from images with 3D reconstructions of the physical world [358]. Navigation is usually a long-horizon task, where the upcoming states of the agent are influenced by its past actions. A memory buffer and summary mechanism are needed to serve as a reference for historical information [336], which is also employed in Smallville and Voyager [22; 190; 382; 383]. Additionally, as mentioned in §3.2, some works have proposed that audio input is also of great significance, but integrating audio information presents challenges in associating it with the visual environment. A basic framework includes a dynamic path planner that uses visual and auditory observations along with spatial memories to plan a series of actions for navigation [375; 384]. By integrating these, the agent can accomplish more complex tasks, such as embodied question answering, whose primary objective is autonomous exploration of the environment and responding to pre-defined multimodal questions, such as "Is the watermelon in the kitchen larger than the pot? | 2309.07864#62 | 2309.07864#64 | 2309.07864 | [
"2305.08982"
] |
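In the spirit of the topological-map navigation above, the sketch below lets an LLM decompose an instruction into an ordered list of landmarks and threads them through a landmark graph with a shortest-path search. The graph, the landmark-extraction prompt, and the `llm` callable are toy stand-ins for the VNM/VLM components of the actual LM-Nav system.

```python
import heapq
from typing import Callable, Dict, List, Tuple

Graph = Dict[str, List[Tuple[str, float]]]   # node -> [(neighbor, traversal cost)]

def shortest_path(graph: Graph, start: str, goal: str) -> List[str]:
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, c in graph.get(node, []):
            heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
    raise ValueError(f"no route from {start} to {goal}")

def navigate(llm: Callable[[str], str], graph: Graph, start: str, instruction: str) -> List[str]:
    # e.g. "pass the fire hydrant, then stop at the blue bench" -> ["fire hydrant", "blue bench"]
    marks = [m.strip() for m in llm(f"Landmarks in order, comma-separated: {instruction}").split(",")]
    route, here = [start], start
    for mark in marks:
        leg = shortest_path(graph, here, mark)
        route, here = route + leg[1:], mark
    return route
```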
2309.07864#64 | The Rise and Potential of Large Language Model Based Agents: A Survey | Which one is harder?" To address these questions, the agent needs to navigate to the kitchen, observe the sizes of both objects, and then answer the questions through comparison [358]. In terms of control strategies, as previously mentioned, LLM-based agents trained on particular embodied datasets typically generate high-level policy commands to control low-level policies for achieving specific sub-goals. The low-level policy can be a robotic transformer [120; 385; 386], which takes images and instructions as inputs and produces control commands for the end effector as well as robotic arms in particular embodied tasks [179]. Recently, in virtual embodied environments, high-level strategies have been utilized to control agents in gaming [172; 183; 190; 337] or simulated worlds [22; 108; 109]. For instance, Voyager [190] calls the Mineflayer [387] API interface to continuously acquire various skills and explore the world. Prospective future of embodied action. LLM-based embodied actions are seen as the bridge between virtual intelligence and the physical world, enabling agents to perceive and modify the environment much like humans. However, there remain several constraints, such as the high costs of physical-world robotic operators and the scarcity of embodied datasets, which foster a growing | 2309.07864#63 | 2309.07864#65 | 2309.07864 | [
"2305.08982"
] |
2309.07864#65 | The Rise and Potential of Large Language Model Based Agents: A Survey | interest in investigating agents' embodied actions within simulated environments like Minecraft [183; 338; 337; 190; 339]. By utilizing the Mineflayer [387] API, these investigations enable cost-effective examination of a wide range of embodied agents' operations, including exploration, planning, self-improvement, and even lifelong learning [190]. Despite notable progress, achieving optimal embodied actions remains a challenge due to the significant disparity between simulated platforms and the physical world. To enable the effective deployment of embodied agents in real-world scenarios, an increasing demand exists for embodied task paradigms and evaluation criteria that closely mirror real-world conditions [358]. On the other hand, learning to ground language for agents is also an obstacle. For example, expressions like "jump down like a cat" | 2309.07864#64 | 2309.07864#66 | 2309.07864 | [
"2305.08982"
] |
2309.07864#66 | The Rise and Potential of Large Language Model Based Agents: A Survey | primarily convey a sense of lightness and tranquility, but this linguistic metaphor requires adequate world knowledge [30]. [340] endeavors to amalgamate text distillation with Hindsight Experience Replay (HER) to construct a dataset as the supervised signal for the training process. Nevertheless, additional investigation into grounding embodied datasets remains necessary, as embodied action plays an increasingly pivotal role across various domains of human life. # 4 Agents in Practice: Harnessing AI for Good
- Single Agent Deployment §4.1
  - Task-oriented Deployment §4.1.1
    - Web scenarios: WebAgent [388], Mind2Web [389], WebGum [390], WebArena [391], Webshop [392], WebGPT [90], Kim et al. [393], Zheng et al. [394], etc.
    - Life scenarios: InterAct [395], PET [182], Huang et al. [258], Gramopadhye et al. [396], Raman et al. [256], etc.
  - Innovation-oriented Deployment §4.1.2: Li et al. [397], Feldt et al. [398], ChatMOF [399], ChemCrow [354], Boiko et al. [110], SCIENCEWORLD [400], etc.
  - Lifecycle-oriented Deployment §4.1.3: Voyager [190], GITM [172], DEPS [183], Plan4MC [401], Nottingham et al. [339], etc.
- Multi-Agents Interaction §4.2
  - Cooperative Interaction §4.2.1
    - Disordered cooperation: ChatLLM [402], RoCo [403], Blind Judgement [404], etc.
    - Ordered cooperation: MetaGPT [405], ChatDev [109], CAMEL [108], AutoGen [406], SwiftSage [185], ProAgent [407], DERA [408], Talebirad et al. [409], AgentVerse [410], CGMI [411], Liu et al. [27], etc.
  - Adversarial Interaction §4.2.2: ChatEval [171], Xiong et al. [412], Du et al. [111], Fu et al. [129], Liang et al. [112], etc.
- Human-Agent Interaction §4.3
  - Instructor-Executor Paradigm §4.3.1
    - Education: Dona [413], Math Agents [414], etc. | 2309.07864#65 | 2309.07864#67 | 2309.07864 | [
"2305.08982"
] |
2309.07864#67 | The Rise and Potential of Large Language Model Based Agents: A Survey |
    - Health: Hsu et al. [415], HuatuoGPT [416], Zhongjing [417], LISSA [418], etc.
    - Other Applications: Gao et al. [419], PEER [420], DIALGEN [421], AssistGPT [422], etc.
  - Equal Partnership Paradigm §4.3.2
    - Empathetic Communicator: SAPIEN [423], Hsu et al. [415], Liu et al. [424], etc.
    - Human-Level Participant: Bakhtin et al. [425], FAIR et al. [426], Lin et al. [427], Li et al. [428], etc.
Figure 6: Typology of applications of LLM-based agents. The LLM-based agent, as an emerging direction, has gained increasing attention from researchers. Many applications in specific domains and tasks have already been developed, showcasing the powerful and versatile capabilities of agents. We can state with great confidence that the possibility of having a personal agent capable of assisting users with typical daily tasks is greater than ever before [398]. As an LLM-based agent, its design objective should always be beneficial to humans, i.e., humans can harness AI for good. Specifically, we expect the agent to achieve the following objectives: | 2309.07864#66 | 2309.07864#68 | 2309.07864 | [
"2305.08982"
] |
2309.07864#68 | The Rise and Potential of Large Language Model Based Agents: A Survey | Figure 7: Scenarios of LLM-based agent applications. We mainly introduce three scenarios: single-agent deployment, multi-agent interaction, and human-agent interaction. A single agent possesses diverse capabilities and can demonstrate outstanding task-solving performance in various application orientations. When multiple agents interact, they can achieve advancement through cooperative or adversarial interactions. Furthermore, in human-agent interactions, human feedback can enable agents to perform tasks more efficiently and safely, while agents can also provide better service to humans. | 2309.07864#67 | 2309.07864#69 | 2309.07864 | [
"2305.08982"
] |
2309.07864#69 | The Rise and Potential of Large Language Model Based Agents: A Survey | 1. Assist users in breaking free from daily tasks and repetitive labor, thereby alleviating human work pressure and enhancing task-solving efficiency. 2. Eliminate the need for users to provide explicit low-level instructions; instead, the agent can independently analyze, plan, and solve problems. 3. After freeing users' hands, the agent also liberates their minds to engage in exploratory and innovative work, realizing their full potential in cutting-edge scientific fields. In this section, we provide an in-depth overview of current applications of LLM-based agents, aiming to offer a broad perspective on practical deployment scenarios (see Figure 7). First, we elucidate the diverse application scenarios of a Single Agent, including task-oriented, innovation-oriented, and lifecycle-oriented scenarios (§ 4.1). Then, we present the significant coordinating potential of Multiple Agents. Whether through cooperative interaction for complementarity or adversarial interaction for advancement, both approaches can lead to higher task efficiency and response quality (§ 4.2). Finally, we categorize the interactive collaboration between humans and agents into two paradigms and introduce the main forms and specific applications respectively (§ 4.3). The topological diagram for LLM-based agent applications is depicted in Figure 6. # 4.1 General Ability of Single Agent Applications of LLM-based agents are currently being developed at a vibrant pace [429; 430; 431]. AutoGPT [114] is one of the popular ongoing open-source projects aiming to achieve a fully autonomous system. Apart from the basic functions of large language models like GPT-4, the AutoGPT framework also incorporates various practical external tools and long/short-term memory management. After users input their customized objectives, they can free their hands and wait for AutoGPT to automatically generate thoughts and perform specific tasks, all without requiring additional user prompts. As shown in Figure 8, we introduce the astonishingly diverse capabilities that the agent exhibits in scenarios where only one single agent is present. # 4.1.1 Task-oriented Deployment The LLM-based agents, which can understand human natural language commands and perform everyday tasks [391], are currently among the agents most favored by and practically valuable to users. This is because they have the potential to enhance task efficiency, alleviate user workload, and promote access for a broader user base. | 2309.07864#68 | 2309.07864#70 | 2309.07864 | [
"2305.08982"
] |
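The autonomous loop behind projects like AutoGPT can be sketched as follows: given a single user objective, the agent alternates between generating a thought/command and executing it, carrying a short-term memory of recent results. The '<tool> | <argument>' reply convention, the tool registry, and the `llm` callable are illustrative assumptions, not AutoGPT's actual protocol.

```python
from typing import Callable, Dict, List

LLM = Callable[[str], str]

def autonomous_loop(llm: LLM, tools: Dict[str, Callable[[str], str]],
                    objective: str, max_iters: int = 10) -> List[str]:
    memory: List[str] = []                       # short-term memory of recent steps
    for _ in range(max_iters):
        prompt = (
            f"Objective: {objective}\n"
            "Recent steps:\n" + "\n".join(memory[-5:]) + "\n"
            "Reply as '<tool> | <argument>' or 'finish | <answer>':"
        )
        name, _, arg = llm(prompt).partition("|")
        name, arg = name.strip(), arg.strip()
        if name == "finish":                     # the agent decides the objective is met
            memory.append(f"finish: {arg}")
            break
        handler = tools.get(name)
        result = handler(arg) if handler else f"unknown tool '{name}'"
        memory.append(f"{name}({arg}) -> {result}")
    return memory
```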
2309.07864#70 | The Rise and Potential of Large Language Model Based Agents: A Survey | In task-oriented deployment, the agent follows high-level instructions from users, undertaking tasks such as goal decomposition [182; 258; 388; 394], sequence planning of sub-goals [182; 395], interactive exploration of the environment [256; 391; 390; 392], until the final objective is achieved. To explore whether agents can perform basic tasks, they are first deployed in text-based game scenarios. In this type of game, agents interact with the world purely using natural language [432]. By reading textual descriptions of their surroundings and utilizing skills like memory, planning, | 2309.07864#69 | 2309.07864#71 | 2309.07864 | [
"2305.08982"
] |
2309.07864#71 | The Rise and Potential of Large Language Model Based Agents: A Survey | Figure 8: Practical applications of the single LLM-based agent in different scenarios. In task-oriented deployment, agents assist human users in solving daily tasks. They need to possess basic instruction comprehension and task decomposition abilities. In innovation-oriented deployment, agents demonstrate the potential for autonomous exploration in scientific domains. In lifecycle-oriented deployment, agents have the ability to continuously explore, learn, and utilize new skills to ensure long-term survival in an open world. and trial-and-error [182], they predict the next action. However, due to the limitations of foundation language models, agents often rely on reinforcement learning during actual execution [432; 433; 434]. With the gradual evolution of LLMs [301], agents equipped with stronger text understanding and generation abilities have demonstrated great potential to perform tasks through natural language. Due to their oversimplified nature, naive text-based scenarios have been inadequate as testing grounds for LLM-based agents [391]. More realistic and complex simulated test environments have been constructed to meet this demand. Based on task types, we divide these simulated environments into web scenarios and life scenarios, and introduce the specific roles that agents play in them. In web scenarios. Performing specific tasks on behalf of users in a web scenario is known as the web navigation problem [390]. Agents interpret user instructions, break them down into multiple basic operations, and interact with computers. This often includes web tasks such as filling out forms, online shopping, and sending emails. Agents need to possess the ability to understand instructions within complex web scenarios, adapt to changes (such as noisy text and dynamic HTML web pages), and generalize successful operations [391]. In this way, agents can achieve accessibility and automation when dealing with unseen tasks in the future [435], ultimately freeing humans from repeated interactions with computer UIs. Agents trained through reinforcement learning can effectively mimic human behavior using predefined actions like typing, searching, navigating to the next page, etc. They perform well in basic tasks such as online shopping [392] and search engine retrieval [90], which have been widely explored. However, agents without LLM capabilities may struggle to adapt to the more realistic and complex scenarios of the real-world Internet. In dynamic, content-rich web pages such as online forums or online business management [391], agents often face challenges in performance. | 2309.07864#70 | 2309.07864#72 | 2309.07864 | [
"2305.08982"
] |
2309.07864#72 | The Rise and Potential of Large Language Model Based Agents: A Survey | To enable successful interactions between agents and more realistic web pages, some researchers [393; 394] have started to leverage the powerful HTML reading and understanding abilities of LLMs. By designing prompts, they attempt to make agents understand the entire HTML source code and predict more reasonable next action steps. Mind2Web [389] combines multiple LLMs fine-tuned for HTML, allowing them to summarize verbose HTML code [388] in real-world scenarios and extract valuable information. Furthermore, WebGum [390] empowers agents with visual perception abilities by employing a multimodal corpus containing HTML screenshots. It simultaneously fine-tunes the LLM and a visual encoder, deepening the agent's comprehensive understanding of web pages. In life scenarios. In many daily household tasks in life scenarios, it's essential for agents to understand implicit instructions and apply common-sense knowledge [433]. For an LLM-based agent trained solely on massive amounts of text, tasks that humans take for granted might require multiple | 2309.07864#71 | 2309.07864#73 | 2309.07864 | [
"2305.08982"
] |
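A minimal sketch of the prompt-based web-navigation idea above: simplified HTML plus the task and action history are packed into a prompt, and the LLM is asked for the next basic operation. The action grammar and the `llm` callable are illustrative assumptions rather than the interface of any cited system.

```python
from typing import Callable, List

LLM = Callable[[str], str]

ACTION_GRAMMAR = "CLICK <element_id> | TYPE <element_id> <text> | SCROLL down | STOP"

def next_web_action(llm: LLM, task: str, simplified_html: str, history: List[str]) -> str:
    prompt = (
        f"Task: {task}\n"
        f"Page (simplified HTML):\n{simplified_html}\n"
        f"Actions so far: {'; '.join(history) or 'none'}\n"
        f"Choose exactly one next action from: {ACTION_GRAMMAR}\nAction:"
    )
    return llm(prompt).strip()
```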
2309.07864#73 | The Rise and Potential of Large Language Model Based Agents: A Survey | trial-and-error attempts [432]. More realistic scenarios often lead to more obscure and subtle tasks. For example, if the room is dark and there is a light in it, the agent should proactively turn the light on. | 2309.07864#72 | 2309.07864#74 | 2309.07864 | [
"2305.08982"
] |
2309.07864#74 | The Rise and Potential of Large Language Model Based Agents: A Survey | To successfully chop some vegetables in the kitchen, the agent needs to anticipate the possible location of a knife [182]. Can an agent apply the world knowledge embedded in its training data to real interaction scenarios? Huang et al. [258] lead the way in exploring this question. They demonstrate that sufficiently large LLMs, with appropriate prompts, can effectively break down high-level tasks into suitable sub-tasks without additional training. However, this static reasoning and planning ability has potential drawbacks. Actions generated by agents often lack awareness of the dynamic environment around them. For instance, when a user gives the task "clean the room", the agent might convert it into infeasible sub-tasks like "call a cleaning service" [396]. | 2309.07864#73 | 2309.07864#75 | 2309.07864 | [
"2305.08982"
] |
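A sketch of the prompt-based decomposition just described, with a crude admissibility filter that discards sub-tasks whose leading verb does not map to an executable action (e.g., dropping "call a cleaning service" when no "call" action exists). The action set and the `llm` callable are illustrative assumptions, not the cited method's actual matching scheme.

```python
from typing import Callable, List

LLM = Callable[[str], str]
AVAILABLE_ACTIONS = {"walk", "grab", "open", "close", "put", "switch", "wipe"}

def decompose(llm: LLM, task: str) -> List[str]:
    raw = llm(f"Task: {task}\nList the sub-tasks, one short imperative per line:")
    steps = [line.strip("-• ").strip() for line in raw.splitlines()]
    steps = [s for s in steps if s]
    # keep only steps grounded in actions the environment can actually execute
    return [s for s in steps if s.split()[0].lower() in AVAILABLE_ACTIONS]
```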
2309.07864#75 | The Rise and Potential of Large Language Model Based Agents: A Survey | To provide agents with access to comprehensive scenario information during interactions, some approaches directly incorporate spatial data and item-location relationships as additional inputs to the model. This allows agents to gain a precise description of their surroundings [395; 396]. Wu et al. [182] introduce the PET framework, which filters out irrelevant objects and containers from environmental information through an early error-correction method [256]. PET encourages agents to explore the scenario and plan actions more efficiently, focusing on the current sub-task. # 4.1.2 Innovation-oriented Deployment The LLM-based agent has demonstrated strong capabilities in performing tasks and enhancing the efficiency of repetitive work. However, in more intellectually demanding fields, like cutting-edge science, the potential of agents has not been fully realized yet. This limitation mainly arises from two challenges [399]: On one hand, the inherent complexity of science poses a significant barrier. Many domain-specific terms and multi-dimensional structures are difficult to represent using a single text. As a result, their complete attributes cannot be fully encapsulated. This greatly weakens the agent's cognitive level. On the other hand, there is a severe lack of suitable training data in scientific domains, making it difficult for agents to comprehend the entire domain knowledge [400; 436]. If the ability for autonomous exploration could be discovered within the agent, it would undoubtedly bring about beneficial innovation in human technology. Currently, numerous efforts in various specialized domains aim to overcome this challenge [437; 438; 439]. Experts in computer science make full use of the agent's powerful code comprehension and debugging abilities [398; 397]. In the fields of chemistry and materials, researchers equip agents with a large number of general or task-specific tools to better understand domain knowledge. Agents evolve into comprehensive scientific assistants, proficient in online research and document analysis to fill data gaps. They also employ robotic APIs for real-world interactions, enabling tasks like material synthesis and mechanism discovery [110; 354; 399]. The potential of LLM-based agents in scientific innovation is evident, yet we do not expect their exploratory abilities to be utilized in applications that could threaten or harm humans. Boiko et al. [110] study the hidden dangers of agents in synthesizing illegal drugs and chemical weapons, indicating that agents could be misled by malicious users through adversarial prompts. This serves as a warning for our future work. | 2309.07864#74 | 2309.07864#76 | 2309.07864 | [
"2305.08982"
] |
2309.07864#76 | The Rise and Potential of Large Language Model Based Agents: A Survey | # 4.1.3 Lifecycle-oriented Deployment Building a universally capable agent that can continuously explore, develop new skills, and maintain a long-term life cycle in an open, unknown world is a colossal challenge. This accomplishment is regarded as a pivotal milestone in the field of AGI [183]. Minecraft, as a typical and widely explored simulated survival environment, has become a unique playground for developing and testing the comprehensive ability of an agent. Players typically start by learning the basics, such as mining wood and making crafting tables, before moving on to more complex tasks like fighting against monsters and crafting diamond tools [190]. Minecraft fundamentally reflects the real world, making it conducive for researchers to investigate an agent's potential to survive in the authentic world. The survival algorithms of agents in Minecraft can generally be categorized into two types [190]: low-level control and high-level planning. Early efforts mainly focused on reinforcement learning [190; 440] and imitation learning [441], enabling agents to craft some low-level items. With the emergence of LLMs, which demonstrated surprising reasoning and analytical capabilities, agents | 2309.07864#75 | 2309.07864#77 | 2309.07864 | [
"2305.08982"
] |
2309.07864#77 | The Rise and Potential of Large Language Model Based Agents: A Survey | begin to utilize LLMs as high-level planners to guide simulated survival tasks [183; 339]. Some researchers use LLMs to decompose high-level task instructions into a series of sub-goals [401], basic skill sequences [339], or fundamental keyboard/mouse operations [401], gradually assisting agents in exploring the open world. Voyager [190], drawing inspiration from concepts similar to AutoGPT [114], became the first LLM-based embodied lifelong learning agent in Minecraft, based on the long-term goal of "discovering as many diverse things as possible". | 2309.07864#76 | 2309.07864#78 | 2309.07864 | [
"2305.08982"
] |
2309.07864#78 | The Rise and Potential of Large Language Model Based Agents: A Survey | It introduces a skill library for storing and retrieving complex action-executable code, along with an iterative prompt mechanism that incorporates environmental feedback and error correction. This enables the agent to autonomously explore and adapt to unknown environments without human intervention. An AI agent capable of autonomously learning and mastering the full range of real-world skills may not be as distant as once thought [401]. # 4.2 Coordinating Potential of Multiple Agents Motivation and Background. Although LLM-based agents possess commendable text understanding and generation capabilities, they inherently operate as isolated entities [409]. They lack the ability to collaborate with other agents and acquire knowledge from social interactions. This inherent limitation restricts their potential to learn from multi-turn feedback from others to enhance their performance [27]. Moreover, they cannot be effectively deployed in complex scenarios requiring collaboration and information sharing among multiple agents. As early as 1986, Marvin Minsky made a forward-looking prediction. In his book The Society of Mind [442], he introduced a novel theory of intelligence, suggesting that intelligence emerges from the interactions of many smaller agents with specific functions. For instance, certain agents might be responsible for pattern recognition, while others might handle decision-making or generate solutions. This idea has been put into concrete practice with the rise of distributed artificial intelligence [443]. Multi-agent systems (MAS) [4], as one of the primary research domains, focus on how a group of agents can effectively coordinate and collaborate to solve problems. Some specialized communication languages, like KQML [444], were designed early on to support message transmission and knowledge sharing among agents. However, their message formats were relatively fixed, and their semantic expression capacity was limited. In the 21st century, integrating reinforcement learning algorithms (such as Q-learning) with deep learning became a prominent technique for developing MAS that operate in complex environments [445]. Nowadays, the construction approach based on LLMs is beginning to demonstrate remarkable potential. Natural language communication between agents has become more elegant and easily comprehensible to humans, resulting in a significant leap in interaction efficiency. Potential advantages. Specifically, an LLM-based multi-agent system can offer several advantages. | 2309.07864#77 | 2309.07864#79 | 2309.07864 | [
"2305.08982"
] |
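The skill library just described can be sketched as follows: self-verified skills are stored as executable code keyed by a description and retrieved by similarity to the current goal. The keyword-overlap retrieval here is a deliberately crude stand-in for the embedding-based retrieval of the actual system, and the `bot.craft` call is purely illustrative.

```python
from typing import Dict, List

class SkillLibrary:
    """Store verified, executable skills and retrieve the most relevant ones."""

    def __init__(self) -> None:
        self._skills: Dict[str, str] = {}      # description -> code

    def add(self, description: str, code: str, verified: bool) -> None:
        if verified:                            # only skills that passed self-verification
            self._skills[description] = code

    def retrieve(self, goal: str, k: int = 3) -> List[str]:
        goal_words = set(goal.lower().split())
        ranked = sorted(
            self._skills.items(),
            key=lambda item: len(goal_words & set(item[0].lower().split())),
            reverse=True,
        )
        return [code for _, code in ranked[:k]]

library = SkillLibrary()
library.add("craft a wooden pickaxe", "bot.craft('wooden_pickaxe')", verified=True)
print(library.retrieve("craft a stone pickaxe"))   # surfaces the closest stored skill
```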
2309.07864#79 | The Rise and Potential of Large Language Model Based Agents: A Survey | Just as Adam Smith clearly stated in The Wealth of Nations [446], "The greatest improvements in the productive powers of labor, and most of the skill, dexterity, and judgment with which it is directed or applied, seem to be results of the division of labor." Based on the principle of division of labor, a single agent equipped with specialized skills and domain knowledge can engage in specific tasks. On the one hand, agents' skills in handling specific tasks are increasingly refined through the division of labor. On the other hand, decomposing complex tasks into multiple subtasks can eliminate the time spent switching between different processes. In the end, efficient division of labor among multiple agents can accomplish a significantly greater workload than when there is no specialization, substantially improving the overall system's efficiency and output quality. In § 4.1, we have provided a comprehensive introduction to the versatile abilities of LLM-based agents. Therefore, in this section, we focus on exploring the ways agents interact with each other in a multi-agent environment. Based on current research, these interactions can be broadly categorized as follows: Cooperative Interaction for Complementarity and Adversarial Interaction for Advancement (see Figure 9). # 4.2.1 Cooperative Interaction for Complementarity Cooperative multi-agent systems are the most widely deployed pattern in practical usage. Within such systems, each individual agent assesses the needs and capabilities of other agents and actively seeks collaborative actions and information sharing with them [108]. This approach brings forth numerous potential benefits, including enhanced task efficiency, collective decision improvement, and the | 2309.07864#78 | 2309.07864#80 | 2309.07864 | [
"2305.08982"
] |
2309.07864#80 | The Rise and Potential of Large Language Model Based Agents: A Survey | Figure 9: Interaction scenarios for multiple LLM-based agents. In cooperative interaction, agents collaborate in either a disordered or ordered manner to achieve shared objectives. In adversarial interaction, agents compete in a tit-for-tat fashion to enhance their respective performance. resolution of complex real-world problems that one single agent cannot solve independently, ultimately achieving the goal of synergistic complementarity. In current LLM-based multi-agent systems, communication between agents predominantly employs natural language, which is considered the most natural and human-understandable form of interaction [108]. We introduce and categorize existing cooperative multi-agent applications into two types: disordered cooperation and ordered cooperation. Disordered cooperation. When three or more agents are present within a system, each agent is free to express its perspectives and opinions openly. They can provide feedback and suggestions for modifying responses related to the task at hand [403]. This entire discussion process is uncontrolled, lacking any specific sequence and without introducing a standardized collaborative workflow. We refer to this kind of multi-agent cooperation as disordered cooperation. The ChatLLM network [402] is an exemplary representative of this concept. It emulates the forward and backward propagation process within a neural network, treating each agent as an individual node. Agents in the subsequent layer need to process inputs from all the preceding agents and propagate forward. | 2309.07864#79 | 2309.07864#81 | 2309.07864 | [
"2305.08982"
] |
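The layered arrangement above can be sketched as a forward pass over agents: every agent in layer k+1 reads the answers of all agents in layer k before responding. The personas, prompts, and `llm` callable are illustrative assumptions, not the cited system's implementation.

```python
from typing import Callable, List

LLM = Callable[[str], str]

def forward(llm: LLM, layers: List[List[str]], question: str) -> List[str]:
    """`layers` holds one persona per agent, e.g. [["optimist", "skeptic"], ["judge"]]."""
    previous: List[str] = []
    for layer in layers:
        current: List[str] = []
        for persona in layer:
            peer_context = (
                "Answers from the previous layer:\n" + "\n".join(previous) + "\n"
                if previous else ""
            )
            current.append(llm(
                f"You are {persona}. Question: {question}\n{peer_context}Your answer:"
            ))
        previous = current          # each layer's outputs feed the next layer
    return previous                 # responses of the final layer
```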
2309.07864#81 | The Rise and Potential of Large Language Model Based Agents: A Survey | One potential solution is introducing a dedicated coordinating agent in multi-agent systems, responsible for integrating and organizing responses from all agents, thus updating the final answer [447]. However, consolidating a large amount of feedback data and extracting valuable insights poses a significant challenge for the coordinating agent. Furthermore, majority voting can also serve as an effective approach to making appropriate decisions. However, there is limited research that integrates this module into multi-agent systems at present. Hamilton [404] trains nine independent supreme justice agents to better predict judicial rulings in the U.S. Supreme Court, and decisions are made through a majority voting process. | 2309.07864#80 | 2309.07864#82 | 2309.07864 | [
"2305.08982"
] |
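The majority-voting idea above admits a very small sketch: several independent agents answer the same question and the most common answer wins. The `llm` callable and the exact-match tally are illustrative simplifications of how votes would be normalized in practice.

```python
from collections import Counter
from typing import Callable

LLM = Callable[[str], str]

def majority_vote(llm: LLM, question: str, n_agents: int = 9) -> str:
    votes = [llm(f"Question: {question}\nAnswer with a single option:").strip()
             for _ in range(n_agents)]
    winner, _ = Counter(votes).most_common(1)[0]   # ties resolve arbitrarily here
    return winner
```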
2309.07864#82 | The Rise and Potential of Large Language Model Based Agents: A Survey | Ordered cooperation. When agents in the system adhere to specific rules, for instance, expressing their opinions one by one in a sequential manner, downstream agents only need to focus on the outputs from upstream. This leads to a significant improvement in task completion efficiency. The entire discussion process is highly organized and ordered. We term this kind of multi-agent cooperation ordered cooperation. It's worth noting that systems with only two agents, essentially engaging in a conversational manner through back-and-forth interaction, also fall under the category of ordered cooperation. CAMEL [108] stands as a successful implementation of a dual-agent cooperative system. Within a role-playing communication framework, agents take on the roles of AI Users (giving instructions) and AI Assistants (fulfilling requests by providing specific solutions). Through multi-turn dialogues, these agents autonomously collaborate to fulfill user instructions [408]. Some researchers have integrated the idea of dual-agent cooperation into a single agent's operation [185], alternating between rapid and deliberate thought processes to excel in their respective areas of expertise. | 2309.07864#81 | 2309.07864#83 | 2309.07864 | [
"2305.08982"
] |
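A minimal sketch of the dual-agent role-playing loop: an AI User issues one instruction per turn and an AI Assistant resolves it, alternating until the User emits a done marker. The role prompts and the TASK_DONE convention are illustrative assumptions in the spirit of CAMEL, not its exact prompts.

```python
from typing import Callable, List, Tuple

LLM = Callable[[str], str]

def role_play(llm: LLM, task: str, max_turns: int = 8) -> List[Tuple[str, str]]:
    dialogue: List[Tuple[str, str]] = []
    log = ""
    for _ in range(max_turns):
        instruction = llm(
            f"You are the AI User directing the task: {task}\n{log}"
            "Give the next single instruction, or say TASK_DONE:"
        ).strip()
        if "TASK_DONE" in instruction:
            break
        solution = llm(
            f"You are the AI Assistant solving: {task}\n{log}"
            f"Instruction: {instruction}\nYour solution:"
        ).strip()
        dialogue.append((instruction, solution))
        log += f"User: {instruction}\nAssistant: {solution}\n"
    return dialogue
```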
2309.07864#83 | The Rise and Potential of Large Language Model Based Agents: A Survey | Talebirad et al. [409] are among the first to systematically introduce a comprehensive LLM-based multi-agent collaboration framework. This paradigm aims to harness the strengths of each individual agent and foster cooperative relationships among them. Many applications of multi-agent cooperation have successfully been built upon this foundation [27; 406; 407; 448]. Furthermore, AgentVerse [410] constructs a versatile, multi-task-tested framework for group-agent cooperation. It can assemble a team of agents that dynamically adapts according to the task's complexity. To promote more efficient collaboration, researchers hope that agents can learn from successful examples of human cooperation [109]. MetaGPT [405] draws inspiration from the classic waterfall model in software development, standardizing agents' inputs/outputs as engineering documents. By encoding advanced human process-management experience into agent prompts, collaboration among multiple agents becomes more structured. However, during MetaGPT's practical exploration, a potential threat to multi-agent cooperation has been identified. Without setting corresponding rules, frequent interactions among multiple agents can amplify minor hallucinations indefinitely [405]. For example, in software development, issues like incomplete functions, missing dependencies, and bugs that are imperceptible to the human eye may arise. Introducing techniques like cross-validation [109] or timely external feedback could have a positive impact on the quality of agent outputs. # 4.2.2 Adversarial Interaction for Advancement Traditionally, cooperative methods have been extensively explored in multi-agent systems. However, researchers increasingly recognize that introducing concepts from game theory [449; 450] into systems can lead to more robust and efficient behaviors. In competitive environments, agents can swiftly adjust strategies through dynamic interactions, striving to select the most advantageous or rational actions in response to changes caused by other agents. Successful applications in non-LLM-based competitive domains already exist [360; 451]. AlphaGo Zero [452], for instance, is an agent for Go that achieved significant breakthroughs through a process of self-play. Similarly, within LLM-based multi-agent systems, change among agents can naturally be fostered through competition, argumentation, and debate [453; 454]. By abandoning rigid beliefs and engaging in thoughtful reflection, adversarial interaction enhances the quality of responses. | 2309.07864#82 | 2309.07864#84 | 2309.07864 | [
"2305.08982"
] |
2309.07864#84 | The Rise and Potential of Large Language Model Based Agents: A Survey | Researchers first delve into the fundamental debating abilities of LLM-based agents [129; 412]. Findings demonstrate that when multiple agents express their arguments in a tit-for-tat manner, one agent can receive substantial external feedback from other agents, thereby correcting its distorted thoughts [112]. Consequently, multi-agent adversarial systems find broad applicability in scenarios requiring high-quality responses and accurate decision-making. In reasoning tasks, Du et al. [111] introduce the concept of debate, endowing agents with responses from fellow peers. When these responses diverge from an agent's own judgments, a "mental" argumentation occurs, leading to refined solutions. ChatEval [171] establishes a role-playing-based multi-agent referee team. Through self-initiated debates, agents evaluate the quality of text generated by LLMs, reaching a level of excellence comparable to human evaluators. The performance of multi-agent adversarial systems has shown considerable promise. However, such systems are essentially dependent on the strength of LLMs and face several basic challenges: • With prolonged debate, an LLM's limited context cannot process the entire input. • In a multi-agent environment, computational overhead significantly increases. • Multi-agent negotiation may converge to an incorrect consensus, and all agents are firmly convinced | 2309.07864#83 | 2309.07864#85 | 2309.07864 | [
"2305.08982"
] |
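A minimal sketch of the debate pattern discussed above: several agents answer independently, then each revises its answer after reading its peers' latest responses for a fixed number of rounds; aggregating the final answers (e.g., by voting) is left to the caller. The prompts and the `llm` callable are illustrative assumptions.

```python
from typing import Callable, List

LLM = Callable[[str], str]

def debate(llm: LLM, question: str, n_agents: int = 3, rounds: int = 2) -> List[str]:
    answers = [llm(f"Question: {question}\nAnswer with brief reasoning:")
               for _ in range(n_agents)]
    for _ in range(rounds):
        revised: List[str] = []
        for i, own in enumerate(answers):
            peers = "\n---\n".join(a for j, a in enumerate(answers) if j != i)
            revised.append(llm(
                f"Question: {question}\nYour previous answer:\n{own}\n"
                f"Other agents answered:\n{peers}\n"
                "Address any disagreements and give your updated answer:"
            ))
        answers = revised           # simultaneous update after each debate round
    return answers
```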
2309.07864#85 | The Rise and Potential of Large Language Model Based Agents: A Survey | of its accuracy [111]. The development of multi-agent systems is still far from mature and feasible. Introducing human guidance at appropriate moments to compensate for agents' shortcomings is a good way to promote the further advancement of agents. # 4.3 Interactive Engagement between Human and Agent Human-agent interaction, as the name suggests, involves agents collaborating with humans to accomplish tasks. With the enhancement of agent capabilities, human involvement becomes progressively essential to effectively guide and oversee agents' actions, ensuring they align with human requirements and objectives [455; 456]. Throughout the interaction, humans play a pivotal role by offering | 2309.07864#84 | 2309.07864#86 | 2309.07864 | [
"2305.08982"
] |
2309.07864#86 | The Rise and Potential of Large Language Model Based Agents: A Survey | Figure 10: | 2309.07864#85 | 2309.07864#87 | 2309.07864 | [
"2305.08982"
] |
2309.07864#87 | The Rise and Potential of Large Language Model Based Agents: A Survey | Two paradigms of human-agent interaction. In the instructor-executor paradigm (left), humans provide instructions or feedback, while agents act as executors. In the equal partnership paradigm (right), agents are human-like, able to engage in empathetic conversation and participate in collaborative tasks with humans. guidance or by regulating the safety, legality, and ethical conduct of agents. This is particularly crucial in specialized domains, such as medicine, where data privacy concerns exist [457]. In such cases, human involvement can serve as a valuable means to compensate for the lack of data, thereby facilitating smoother and more secure collaborative processes. Moreover, considering the anthropological aspect, language acquisition in humans predominantly occurs through communication and interaction [458], as opposed to merely consuming written content. As a result, agents shouldn't exclusively depend on models trained with pre-annotated datasets; instead, they should evolve through online interaction and engagement. The interaction between humans and agents can be classified into two paradigms (see Figure 10): (1) Unequal interaction (i.e., the instructor-executor paradigm): humans serve as issuers of instructions, while agents act as executors, essentially participating as assistants to humans in collaboration. (2) Equal interaction (i.e., the equal partnership paradigm): agents reach the level of humans, participating on an equal footing with humans in interaction. # 4.3.1 Instructor-Executor Paradigm The simplest approach involves human guidance throughout the process: humans provide clear and specific instructions directly, while the agents' role is to understand natural language commands from humans and translate them into corresponding actions [459; 460; 461]. In §4.1, we have presented the scenario where agents solve single-step problems or receive high-level instructions from humans. Considering the interactive nature of language, in this section, we assume that the dialogue between humans and agents is also interactive. Thanks to LLMs, agents are able to interact with humans in a conversational manner: the agent responds to each human instruction, refining its actions through alternating iterations to ultimately meet human requirements [190]. While this approach does achieve the goal of human-agent interaction, it places significant demands on humans. It requires a substantial amount of human effort and, in certain tasks, might even necessitate a high level of expertise. To alleviate this issue, the agent can be empowered to autonomously accomplish tasks, while humans only need to provide feedback in certain circumstances. | 2309.07864#86 | 2309.07864#88 | 2309.07864 | [
"2305.08982"
] |
Here, we roughly categorize such feedback into two types: quantitative feedback and qualitative feedback.

Quantitative feedback. The forms of quantitative feedback mainly include absolute evaluations, such as binary scores and ratings, as well as relative scores. Binary feedback refers to the positive and negative evaluations provided by humans, which agents utilize to enhance their self-optimization [462; 463; 464; 465; 466]. Comprising only two categories, this type of user feedback is often easy to collect, but it may oversimplify user intent by neglecting intermediate scenarios. To capture these intermediate scenarios, researchers attempt to expand binary feedback into rating feedback, which categorizes responses into more fine-grained levels. However, the results of Kreutzer et al. [467] suggest that there can be significant discrepancies between user and expert annotations for such multi-level ratings, indicating that this labeling method might be
inefficient or less reliable. Furthermore, agents can learn human preferences from comparative scores, such as multiple-choice selections [468; 469].

Qualitative feedback. Text feedback is usually offered in natural language, particularly for responses that need improvement. The format of this feedback is quite flexible: humans provide advice on how to modify outputs generated by agents, and the agents then incorporate these suggestions to refine their subsequent outputs [470; 471]. For agents without multimodal perception capabilities, humans can also act as critics, offering visual critiques, for instance [190]. Additionally, agents can utilize a memory module to store feedback for future reuse [472]. In [473], humans give feedback on the initial output generated by agents, prompting the agents to formulate various improvement proposals; the agents then discern and adopt the most suitable proposal, harmonizing with the human feedback. While this approach conveys human intention better than quantitative feedback, it might be more challenging for the agents to comprehend. Xu et al. [474] compare various types of feedback and observe that combining multiple types of feedback can yield better results. Re-training models based on feedback from multiple rounds of interaction (i.e., continual learning) can further enhance effectiveness.
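As a concrete illustration of the memory-based reuse of feedback mentioned above, the sketch below stores past text feedback and retrieves entries relevant to a new task. The keyword-overlap retriever and the `query_llm` helper are simplifying assumptions, not a description of any cited system.

```python
# A hypothetical sketch: store qualitative feedback in a memory module and
# inject relevant entries into future prompts.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

class FeedbackMemory:
    """Keeps (task, feedback) pairs and retrieves feedback relevant to a task."""

    def __init__(self) -> None:
        self.entries: list[tuple[str, str]] = []

    def add(self, task: str, feedback: str) -> None:
        self.entries.append((task, feedback))

    def relevant(self, task: str, k: int = 3) -> list[str]:
        # Naive relevance: rank stored feedback by word overlap with the task.
        words = set(task.lower().split())
        ranked = sorted(self.entries,
                        key=lambda e: len(words & set(e[0].lower().split())),
                        reverse=True)
        return [fb for _, fb in ranked[:k]]

def answer_with_feedback(task: str, memory: FeedbackMemory) -> str:
    hints = memory.relevant(task)
    prompt = task
    if hints:
        prompt += "\nPast human feedback to respect:\n- " + "\n- ".join(hints)
    return query_llm(prompt)
```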
Of course, the collaborative nature of human-agent interaction also allows humans to directly improve the content generated by agents, whether by modifying intermediate links [189; 475] or by adjusting the conversation content [421]. In some studies, agents can autonomously judge whether the conversation is proceeding smoothly and seek feedback when errors occur [476; 477]. Humans can also choose to participate in feedback at any time, guiding the agent's learning in the right direction [420].

Currently, in addition to tasks like writing [466] and semantic parsing [463; 471], the model of using agents as human assistants also holds tremendous potential in the field of education. For instance, Kalvakurth et al. [413] propose the robot Dona, which supports multimodal interactions to assist students with registration. Gvirsman et al. [478] focus on early childhood education, achieving multifaceted interactions between young children, parents, and agents. Agents can also aid in human understanding and utilization of mathematics [414]. In the field of medicine, medical agents have been proposed that show enormous potential for diagnosis assistance, consultations, and more [416; 417]. Especially in mental health, research has shown that agents can increase accessibility thanks to benefits such as reduced cost, time efficiency, and anonymity compared to face-to-face treatment [479]. Leveraging such advantages, agents have found widespread applications. Ali et al. [418] design LISSA for online communication with adolescents on the autism spectrum, analyzing users' speech and facial expressions in real time to engage them in multi-topic conversations and provide instant feedback on non-verbal cues. Hsu et al. [415] build contextualized language generation approaches to provide tailored assistance for users seeking support on topics ranging from relationship stress to anxiety. Furthermore, in other industries including business, a good agent can provide automated services or assist humans in completing tasks, effectively reducing labor costs [419]. Amidst the pursuit of AGI, efforts are directed toward enhancing the multifaceted capabilities of general agents, creating agents that can function as universal assistants in real-life scenarios [422].
# 4.3.2 Equal Partnership Paradigm

Empathetic communicator. With the rapid development of AI, conversational agents have garnered extensive research attention in various forms, such as personalized custom roles and virtual chatbots [480], and have found practical applications in everyday life, business, education, healthcare, and more [481; 482; 483]. However, in the eyes of the public, agents are often perceived as emotionless machines that can never replace humans. Although it is intuitive that agents themselves do not possess emotions, can we enable them to exhibit emotions and thereby bridge the gap between agents and humans? A plethora of research endeavors have therefore delved into the empathetic capacities of agents. This line of work seeks to infuse a human touch into agents, enabling them to detect sentiments and emotions from human expressions and ultimately craft emotionally resonant dialogues [484; 485; 486; 487; 488; 489; 490; 491]. Apart from generating emotionally charged language, agents can dynamically adjust their emotional states and display them through facial expressions and voice [423]. These studies, which view agents as empathetic communicators, not only enhance user satisfaction but also make significant progress in fields like healthcare [415; 418; 492] and business marketing [424]. Unlike simple rule-based conversation agents, agents with empathetic capacities can tailor their interactions to meet users' emotional needs [493].
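As a rough illustration of the detect-then-respond pattern behind such empathetic communicators, consider the sketch below. The emotion label set, the prompts, and the `query_llm` helper are illustrative assumptions rather than the design of any cited system.

```python
# A hypothetical two-stage empathetic communicator: first classify the user's
# emotion, then condition the reply on the detected emotion.

def query_llm(messages: list[dict]) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError

EMOTIONS = ["joy", "sadness", "fear", "anger", "neutral"]

def detect_emotion(utterance: str) -> str:
    label = query_llm([{
        "role": "user",
        "content": (f"Classify the emotion of this message as one of "
                    f"{', '.join(EMOTIONS)}. Reply with the label only.\n\n"
                    f"{utterance}"),
    }]).strip().lower()
    return label if label in EMOTIONS else "neutral"   # fall back safely

def empathetic_reply(utterance: str) -> str:
    emotion = detect_emotion(utterance)
    return query_llm([
        {"role": "system",
         "content": (f"The user appears to feel {emotion}. Acknowledge this "
                     f"feeling explicitly before addressing the content.")},
        {"role": "user", "content": utterance},
    ])
```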
Human-level participant. Furthermore, we hope that agents can be involved in the normal lives of humans, cooperating with humans to complete tasks from a human-level perspective. In the field of games, agents have already reached a high level. As early as the 1990s, IBM introduced Deep Blue [451], which defeated the reigning world chess champion of the time. However, in purely competitive environments such as chess [451], Go [360], and poker [494], the value of communication was not emphasized [426]. In many gaming tasks, players need to collaborate with each other, devising unified cooperative strategies through effective negotiation [425; 426; 495; 496]. In these scenarios, agents need to first understand the beliefs, goals, and intentions of others, formulate joint action plans for their objectives, and provide relevant suggestions that help other agents or humans accept cooperative actions. In comparison to pure agent cooperation, we desire human involvement for two main reasons: first, to ensure interpretability, as interactions between pure agents could generate incomprehensible language [495]; and second, to ensure controllability, as the pursuit of agents with complete "free will" might lead to unforeseen negative consequences, carrying the potential for disruption.
Apart from gaming scenarios, agents also demonstrate human-level capabilities in other scenarios involving human interaction, showcasing skills in strategy formulation, negotiation, and more. Agents can collaborate with one or multiple humans, determining the shared knowledge among the cooperative partners, identifying which information is relevant to decision-making, posing questions, and engaging in reasoning to complete tasks such as allocation, planning, and scheduling [427]. Furthermore, agents possess persuasive abilities [497], dynamically influencing human viewpoints in various interactive scenarios [428].

The goal of the field of human-agent interaction is to learn and understand humans, develop technology and tools based on human needs, and ultimately enable comfortable, efficient, and secure interactions between humans and agents. Currently, significant breakthroughs have been achieved in terms of usability in this field. In the future, human-agent interaction will continue to focus on enhancing the user experience, enabling agents to better assist humans in accomplishing more complex tasks in various domains. The ultimate aim is not to make agents more powerful but to better equip humans with agents. Considering practical applications in daily life, isolated interactions between humans and agents are not realistic: robots will become colleagues, assistants, and even companions. Therefore, future agents will be integrated into a social network [498], embodying a certain level of social value.
# 5 Agent Society: From Individuality to Sociality

For an extended period, sociologists have conducted social experiments to observe specific social phenomena within controlled environments. Notable examples include the Hawthorne Experiment² and the Stanford Prison Experiment³. Subsequently, researchers began employing animals in social simulations, exemplified by the Mouse Utopia Experiment⁴. However, these experiments invariably used living organisms as participants, making it difficult to carry out various interventions; they also lacked flexibility and were inefficient in terms of time. Thus, researchers and practitioners envision an interactive artificial society wherein human behavior can be performed through trustworthy agents [521]. From sandbox games such as The Sims to the concept of the Metaverse, we can see how "simulated society" is defined in people's minds: an environment and the individuals interacting in it. Behind each individual can be a program, a real human, or an LLM-based agent as described in the previous sections [22; 522; 523]. The interaction between individuals, in turn, contributes to the birth of sociality.

In this section, to unify existing efforts and promote a comprehensive understanding of the agent society, we first analyze the behaviors and personalities of LLM-based agents, shedding light on their journey from individuality to sociability (§ 5.1). Subsequently, we introduce a general categorization of the diverse environments in which agents perform their behaviors and engage in interactions (§ 5.2). Finally, we discuss how the agent society works, what insights people can gain from it, and the risks we need to be aware of (§ 5.3). The main explorations are listed in Figure 11.

²https://www.bl.uk/people/elton-mayo
³https://www.prisonexp.org/conclusion/
⁴https://sproutsschools.com/behavioral-sink-the-mouse-utopia-experiments/

Figure 11: Typology of society of LLM-based agents.
- Agent Society: From Individuality to Sociability
  - Behavior and Personality (§ 5.1)
    - Social Behavior (§ 5.1.1)
      - Individual behaviors: PaLM-E [120], Reflexion [169], Voyager [190], LLM+P [125], CoT [95], ReAct [91], etc.
      - Group behaviors: ChatDev [109], ChatEval [171], AutoGen [406], RoCo [403], ProAgent [407], AgentVerse [410], Xu et al. [499], etc.
    - Personality (§ 5.1.2)
      - Cognition: Binz et al. [500], Dasgupta et al. [501], Dhingra et al. [502], Hagendorff et al. [503], etc.
      - Emotion: Wang et al. [504], Curry et al. [505], Elyoseph et al. [506], Habibi et al. [507], etc.
      - Character: Caron et al. [508], Pan et al. [509], Li et al. [510], Safdari et al. [511], etc.
  - Social Environment (§ 5.2)
    - Text-based Environment (§ 5.2.1): TextWorld [512], Urbanek et al. [513], Hausknecht et al. [514], Ammanabrolu et al. [432], CAMEL [108], Hoodwinked [515], etc.
    - Virtual Sandbox Environment (§ 5.2.2): Generative Agents [22], AgentSims [174], MineDojo [337], Voyager [190], Plan4MC [401], SANDBOX [27], etc.
    - Physical Environment (§ 5.2.3): Interactive Language [333], PaLM-E [120], RoboAgent [516], AVLEN [375], etc.
  - Society Simulation (§ 5.3): Generative Agents [22], AgentSims [174], Social Simulacra [517], S3 [518], RecAgent [519], Williams et al. [520], SANDBOX [27], etc.

# 5.1 Behavior and Personality of LLM-based Agents

As noted by sociologists, individuals can be analyzed in terms of both external and internal dimensions [524]: the external deals with observable behaviors, while the internal relates to dispositions, values, and feelings. As shown in Figure 12, this framework offers a perspective on emergent behaviors and personalities in LLM-based agents. Externally, we can observe the sociological behaviors of agents (§ 5.1.1), including how agents act individually and interact with their environment. Internally, agents may exhibit intricate aspects of personality (§ 5.1.2), such as cognition, emotion, and character, that shape their behavioral responses.

# 5.1.1 Social Behavior

As Troitzsch et al. [525] stated, the agent society represents a complex system comprising individual and group social activities. Recently, LLM-based agents have exhibited spontaneous social behaviors in environments where cooperation and competition coexist [499], and these emergent behaviors intertwine to shape social interactions [518].

Foundational individual behaviors. Individual behaviors arise through the interplay between internal cognitive processes and external environmental factors. These behaviors form the basis of how agents operate and develop as individuals within society, and can be classified into three core dimensions:
• Input behaviors refer to the absorption of information from the surroundings. This includes perceiving sensory stimuli [120] and storing them as memories [169]. These behaviors lay the groundwork for how an individual understands the external world.

• Internalizing behaviors involve inward cognitive processing within an individual. This category encompasses activities such as planning [125], reasoning [95], reflection [91], and knowledge precipitation [108; 405]. These introspective processes are essential for maturity and self-improvement.
• Output behaviors constitute outward actions and expressions. Actions can range from object manipulation [120] to structure construction [190]; by performing them, agents change the states of their surroundings. In addition, agents can express their opinions and broadcast information to interact with others [405]. By doing so, agents exchange their thoughts and beliefs, influencing the information flow within the environment. A minimal sketch tying these three dimensions together is given after Figure 12.

Figure 12: Overview of Simulated Agent Society. The whole framework is divided into two parts: the Agent and the Environment. We can observe in this figure that: (1) Left: at the individual level, an agent exhibits internalizing behaviors like planning, reasoning, and reflection, and displays intrinsic personality traits involving cognition, emotion, and character. (2) Mid: an agent can form groups with other agents and exhibit group behaviors, such as cooperation. (3) Right: the environment, whether virtual or physical, contains human actors and all available resources; for a single agent, other agents are also part of the environment. (4) Agents have the ability to interact with the environment via perception and action.
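The sketch below is a minimal, hypothetical rendering of the three behavior dimensions above as one agent step: perceive (input), plan over memory (internalizing), then act and broadcast (output). The `query_llm` helper and the environment dictionary are illustrative assumptions, not a description of any cited system.

```python
# A hypothetical perceive-internalize-act step for an individual agent.

from dataclasses import dataclass, field

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

@dataclass
class IndividualAgent:
    name: str
    memory: list[str] = field(default_factory=list)

    def perceive(self, environment: dict) -> str:
        # Input behavior: absorb an observation and store it as a memory.
        observation = environment.get("observation", "")
        self.memory.append(observation)
        return observation

    def internalize(self, observation: str) -> str:
        # Internalizing behavior: plan/reflect over recent memories.
        context = "\n".join(self.memory[-5:])
        return query_llm(f"Recent observations:\n{context}\n"
                         f"Current observation: {observation}\n"
                         f"Decide the next action in one sentence.")

    def act(self, plan: str, environment: dict) -> None:
        # Output behavior: change the environment and broadcast to others.
        environment["last_action"] = plan
        environment.setdefault("messages", []).append((self.name, plan))
```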
Dynamic group behaviors. A group is essentially a gathering of two or more individuals participating in shared activities within a defined social context [526]. The attributes of a group are never static; they evolve through member interactions and environmental influences. This flexibility gives rise to numerous group behaviors, each with a distinctive impact on the larger societal group. The categories of group behaviors include:

• Positive group behaviors are actions that foster unity, collaboration, and collective well-being [22; 109; 171; 403; 406; 407]. A prime example is cooperative teamwork, achieved through brainstorming discussions [171], effective conversations [406], and project management [405]. By sharing insights, resources, and expertise, agents engage in harmonious teamwork and leverage their unique skills to accomplish shared goals. Altruistic contributions are also noteworthy: some LLM-based agents serve as volunteers, willingly offering support to fellow group members and promoting cooperation and mutual aid [410].
• Neutral group behaviors. In human society, personal values vary widely and tend toward individualism and competitiveness. In contrast, LLMs, which are designed with an emphasis on being "helpful, honest, and harmless" [527], often demonstrate a tendency toward neutrality [528]. This alignment with neutral values leads to conformity behaviors, including mimicry, spectating, and reluctance to oppose majorities.

• Negative group behaviors can undermine the effectiveness and coherence of an agent group. Conflict and disagreement arising from heated debates or disputes among agents may lead to internal tensions. Furthermore, recent studies have revealed that agents may exhibit confrontational actions [499] and even resort to destructive behaviors, such as destroying other agents or the environment in pursuit of efficiency gains [410].
# 5.1.2 Personality

Recent advances in LLMs have provided glimpses of human-like intelligence [529]. Just as human personality emerges through socialization, agents also exhibit a form of personality that develops through interactions with the group and the environment [530; 531]. A widely accepted definition of personality refers to the cognitive, emotional, and character traits that shape behaviors [532]. In the subsequent paragraphs, we delve into each facet of personality.

Cognitive abilities. Cognitive abilities generally refer to the mental processes of gaining knowledge and comprehension, including thinking, judging, and problem-solving. Recent studies have started leveraging methods from cognitive psychology to investigate the emerging sociological personalities of LLM-based agents through various lenses [500; 502; 503]. A series of classic experiments from the psychology of judgment and decision-making have been applied to test agent systems [501; 500; 502; 533]. Specifically, LLMs have been examined with the Cognitive Reflection Test (CRT) to assess their capacity for deliberate thinking beyond mere intuition [534; 535]. These studies indicate that LLM-based agents exhibit a level of intelligence that mirrors human cognition in certain respects.

Emotional intelligence. Emotions, distinct from cognitive abilities, involve subjective feelings and mood states such as joy, sadness, fear, and anger. With the increasing potency of LLMs, LLM-based agents are now demonstrating not only sophisticated reasoning and cognitive abilities but also a nuanced understanding of emotions [31]. Recent research has explored the emotional intelligence (EI) of LLMs, including emotion recognition, interpretation, and understanding. Wang et al. [504] found that LLMs align with human emotions and values when evaluated on EI benchmarks. In addition, studies have shown that LLMs can accurately identify user emotions and even exhibit empathy [505; 506]. More advanced agents are also capable of emotion regulation, actively adjusting their emotional responses to provide affective empathy [423] and mental wellness support [507; 536], contributing to the development of empathetic artificial intelligence (EAI). These advances highlight the growing potential of LLMs to exhibit emotional intelligence, a crucial facet of achieving AGI. Bates et al. [537] explored the role of emotion modeling in creating more believable agents; by developing socio-emotional skills and integrating them into agent architectures, LLM-based agents may be able to engage in more naturalistic interactions.

Character portrayal.
While cognition involves mental abilities and emotion relates to subjective experiences, the narrower concept of personality typically pertains to distinctive character patterns. To understand and analyze character in LLMs, researchers have utilized several well-established frameworks, such as the Big Five personality trait measure [508; 538] and the Myers-Briggs Type Indicator (MBTI) [508; 509; 538]. These frameworks provide valuable insights into the emerging character traits exhibited by LLM-based agents. In addition, investigations of potentially harmful dark personality traits underscore the complexity and multifaceted nature of character portrayal in these agents [510]. Recent work has also explored customizable character portrayal in LLM-based agents [511]: by carefully optimizing LLMs, users can align them with desired profiles and shape diverse and relatable agents. One effective approach is prompt engineering, which involves crafting concise summaries that encapsulate desired character traits, interests, or other attributes [22; 517]. These prompts serve as cues for LLM-based agents, directing their responses and behaviors to align with the outlined character portrayal. Furthermore, personality-enriched datasets can be used to train and fine-tune LLM-based agents [539; 540]; through exposure to such datasets, agents gradually internalize and exhibit distinct personality traits.
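To make the prompt-engineering approach concrete, the following is a minimal sketch of building a persona prompt from trait summaries. The trait dimensions, example values, and helper function are illustrative assumptions, not a recipe from the cited works.

```python
# A hypothetical persona prompt builder: desired traits and interests are
# summarized into a system prompt that steers every subsequent LLM call.

def build_persona_prompt(name: str, traits: dict[str, str],
                         interests: list[str]) -> str:
    trait_text = "; ".join(f"{dim}: {level}" for dim, level in traits.items())
    return (f"You are {name}. Personality (Big Five): {trait_text}. "
            f"Interests: {', '.join(interests)}. "
            f"Stay in character in every response.")

persona = build_persona_prompt(
    name="Ada",                                   # illustrative character
    traits={"openness": "high", "conscientiousness": "high",
            "extraversion": "low", "agreeableness": "medium",
            "neuroticism": "low"},
    interests=["mathematics", "chess"],
)
# `persona` would then be passed as the system message of each LLM call,
# so the agent's responses stay aligned with the outlined character.
```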
# 5.2 Environment for Agent Society

In the context of simulation, the whole society consists not only of solitary agents but also of the environment that agents inhabit and in which they sense and act [541]. The environment affects the sensory inputs, action space, and interactive potential of agents; in turn, agents influence the state of the environment through their behaviors and decisions. As shown in Figure 12, for a single agent, the environment refers to other autonomous agents, human actors, and external factors, and it provides the necessary resources and stimuli for agents. In this section, we examine the fundamental characteristics, advantages, and limitations of various environmental paradigms: the text-based environment (§ 5.2.1), the virtual sandbox environment (§ 5.2.2), and the physical environment (§ 5.2.3).

# 5.2.1 Text-based Environment

Since LLMs primarily rely on language as their input and output format, the text-based environment serves as the most natural platform for agents to operate in. It is shaped by natural language descriptions, without direct involvement of other modalities; agents exist in the text world and rely on textual resources to perceive, reason, and act.

In text-based environments, entities and resources can be presented in two main textual forms: natural and structured. Natural text uses descriptive language to convey information, like character dialogue or scene setting. For instance, consider a simple scenario described textually:
"You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here" [512]. Here, object attributes and locations are conveyed purely through plain text. Structured text, on the other hand, follows standardized formats, such as technical documentation and hypertext. Technical documentation uses templates to provide operational details and domain knowledge about tool use; hypertext condenses complex information from sources like web pages [389; 388; 391; 392] or diagrams into a structured format. Structured text thus transforms complex details into accessible references for agents.

The text-based environment provides a flexible framework for creating different text worlds for various goals, and the textual medium enables environments to be easily adapted for tasks like interactive dialogue and text-based games. In interactive communication processes like CAMEL [108], text is the primary medium for describing tasks, introducing roles, and facilitating problem-solving. In text-based games, all environment elements, such as locations, objects, characters, and actions, are portrayed exclusively through textual descriptions, and agents use text commands to execute manipulations like moving or tool use [432; 512; 514; 515]. Additionally, agents can convey emotions and feelings through text, further enriching their capacity for naturalistic communication [513].
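A minimal sketch of an agent acting in such a text world is given below. The environment interface (`reset`/`step`) is modeled loosely on text-game frameworks like TextWorld, but the exact API and prompts shown here are illustrative assumptions.

```python
# A hypothetical loop for an LLM-based agent playing a text adventure:
# observe text, choose a text command, and repeat.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

def play_text_game(env, max_steps: int = 50) -> None:
    observation = env.reset()  # e.g., "You are standing in an open field ..."
    history: list[str] = []
    for _ in range(max_steps):
        prompt = ("You are playing a text adventure game.\n"
                  + "\n".join(history[-10:])          # recent context window
                  + f"\nObservation: {observation}\n"
                    "Reply with one short command, e.g., 'open mailbox'.")
        command = query_llm(prompt).strip()
        history.append(f"Observation: {observation}\nAction: {command}")
        observation, done = env.step(command)          # assumed interface
        if done:
            break
```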
# 5.2.2 Virtual Sandbox Environment

The virtual sandbox environment provides a visualized and extensible platform for the agent society, bridging the gap between simulation and reality. The key features of sandbox environments are:

• Visualization. Unlike the text-based environment, the virtual sandbox displays a panoramic view of the simulated setting. This visual representation can range from a simple 2D graphical interface to fully immersive 3D modeling, depending on the complexity of the simulated society. Multiple elements collectively transform abstract simulations into visible landscapes. For example, in the overhead perspective of Generative Agents [22], a detailed map provides a comprehensive overview of the environment, agent avatars represent each agent's position, enabling real-time tracking of movement and interactions, and expressive emojis symbolize actions and states in an intuitive manner.
• Extensibility. The environment demonstrates a remarkable degree of extensibility, facilitating the construction and deployment of diverse scenarios. At a basic level, agents can manipulate the physical elements within the environment, including the overall design and layout of architecture. For instance, platforms like AgentSims [174] and Generative Agents [22] construct artificial towns with buildings, equipment, and residents in grid-based worlds, while Minecraft provides a blocky, three-dimensional world with infinite terrain for open-ended construction [190; 337; 401]. Beyond physical elements, agent relationships, interactions, rules, and social norms can also be defined; a typical sandbox design [27] employs latent sandbox rules as incentives to guide emergent behaviors, aligning them more closely with human preferences. This extensibility supports iterative prototyping of diverse agent societies.
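As a toy illustration of such a grid-based sandbox, the sketch below tracks agent positions and renders an overhead view. All structures and names are illustrative assumptions, not the API of any platform mentioned above.

```python
# A hypothetical grid-based sandbox: agents occupy cells, move within the
# map's bounds, and are rendered in a simple overhead listing.

from dataclasses import dataclass

@dataclass
class SandboxAgent:
    name: str
    x: int
    y: int
    action: str = "idle"

class GridSandbox:
    def __init__(self, width: int, height: int) -> None:
        self.width, self.height = width, height
        self.agents: list[SandboxAgent] = []

    def add(self, agent: SandboxAgent) -> None:
        # Open membership: agents can join (or leave) at any time.
        self.agents.append(agent)

    def move(self, agent: SandboxAgent, dx: int, dy: int) -> None:
        # Clamp to the map so movement respects the world's boundaries.
        agent.x = max(0, min(self.width - 1, agent.x + dx))
        agent.y = max(0, min(self.height - 1, agent.y + dy))

    def render(self) -> str:
        # Overhead view: one line per agent with position and current action.
        return "\n".join(f"{a.name} at ({a.x}, {a.y}): {a.action}"
                         for a in self.agents)

world = GridSandbox(10, 10)
world.add(SandboxAgent("Ada", 2, 3))
world.move(world.agents[0], dx=1, dy=0)
print(world.render())
```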
# 5.2.3 Physical Environment

As previously discussed, the text-based environment has limited expressiveness for modeling dynamic environments, and while the virtual sandbox environment provides modularized simulations, it lacks authentic embodied experiences. In contrast, the physical environment refers to tangible, real-world surroundings consisting of actual physical objects and spaces. For instance, within a household physical environment [516], tangible surfaces and spaces can be occupied by real-world objects such as plates. This physical reality is significantly more complex, posing additional challenges for LLM-based agents:
• Sensory perception and processing. The physical environment introduces a rich tapestry of sensory inputs from real-world objects, incorporating visual [120; 333], auditory [375; 377], and spatial senses. While this diversity enhances interactivity and sensory immersion, it also introduces the complexity of simultaneous perception: agents must process sensory inputs to interact effectively with their surroundings.

• Motion control. Unlike virtual environments, physical spaces impose realistic constraints on actions through embodiment. Action sequences generated by LLM-based agents should be adaptable to the environment, meaning that the physical environment necessitates executable and grounded motion control [258]. For example, imagine an agent operating a robotic arm in a factory: grasping objects with different textures requires precision tuning and controlled force to prevent damage to items, and the agent must navigate the physical workspace and make real-time adjustments, avoiding obstacles and optimizing the arm's trajectory.

In summary, to interact effectively within tangible spaces, agents must undergo hardware-specific and scenario-specific training to develop adaptive abilities that can transfer from virtual to physical environments.
We will discuss this further in the following section (§ 6.5).

# 5.3 Society Simulation with LLM-based Agents

The concept of a "simulated society" in this section refers to a dynamic system in which agents engage in intricate interactions within a well-defined environment. Recent research on simulated societies has followed two primary lines: exploring the boundaries of the collective intelligence capabilities of LLM-based agents [109; 405; 130; 406; 410], and using them to accelerate discoveries in the social sciences [22; 518; 542]. In addition, there are a number of other noteworthy directions, e.g., using simulated societies to collect synthetic datasets [108; 519; 543] and helping people simulate rare yet difficult interpersonal situations [544; 545]. Building on the foundation of the previous sections (§ 5.1, § 5.2), here we introduce the key properties and mechanism of the agent society (§ 5.3.1), what we can learn from emergent social phenomena (§ 5.3.2), and the potential ethical and social risks involved (§ 5.3.3).

# 5.3.1 Key Properties and Mechanism of Agent Society

Social simulation can be categorized into macro-level and micro-level simulation [518]. In macro-level simulation, also known as system-based simulation, researchers model the overall state of the simulated societal system [546; 547]; in micro-level simulation, also known as agent-based simulation or Multi-Agent Systems (MAS), society is simulated indirectly by modeling individuals [548; 549]. With the development of LLM-based agents, micro-level simulation has recently gained prominence [22; 174].
In this article, we characterize the "agent society" as an open, persistent, situated, and organized framework [521] in which LLM-based agents interact with each other in a defined environment. Each of these attributes plays a pivotal role in shaping the harmonious appearance of the simulated society. In the following paragraphs, we analyze how the simulated society operates by discussing each of these properties:

• Open. One of the defining features of simulated societies lies in their openness, both in terms of their constituent agents and their environmental components. Agents, the primary actors within such societies, have the flexibility to enter or leave the environment without disrupting its operational integrity [550]. This feature extends to the environment itself, which can be expanded by adding or removing entities in the virtual or physical world, along with adaptable resources like tool APIs. Additionally, humans can participate in societies by assuming the role of an agent or by serving as the
"inner voice" guiding these agents [22]. This inherent openness adds another level of complexity to the simulation, blurring the lines between simulation and reality.

• Persistent. We expect persistence and sustainability from the simulated society. While individual agents within the society exercise autonomy in their actions at each time step [22; 518], the overall organizational structure persists through time, to a degree detached from the transient behaviors of individual agents. This persistence creates an environment where agents' decisions and behaviors accumulate, leading to a coherent societal trajectory that develops through time. The system operates independently, contributing to society's stability while accommodating the dynamic nature of its participants.

• Situated. The situated nature of the society emphasizes its existence and operation within a distinct environment. This environment is constructed in advance, artificially or automatically, and agents execute their behaviors and interactions effectively within it. A noteworthy aspect of this attribute is that agents possess an awareness of their spatial context, understanding their location within the environment and the objects within their field of view [22; 190]. This awareness contributes to their ability to interact proactively and contextually.
• Organized. The simulated society operates within a meticulously organized framework, mirroring the systematic structure present in the real world. Just as the physical world adheres to the principles of physics, the simulated society operates within predefined rules and limitations: agents interact with the environment within a limited action space, while objects in the environment transform within a limited state space. All of these rules determine how agents operate, facilitating communication connectivity and information transmission pathways, among other aspects of the simulation [207]. This organizational framework ensures that operations are coherent and comprehensible, ultimately leading to an ever-evolving yet enduring simulation that mirrors the intricacies of real-world systems.

# 5.3.2 Insights from Agent Society

Having explored how the simulated society works, this section delves into the emergent social phenomena within it. In the realm of social science, the pursuit of generalized representations of individuals, groups, and their intricate dynamics has long been a shared objective [551; 552]. The emergence of LLM-based agents allows us to take a more microscopic view of the simulated society, which leads to more discoveries from this new representation.

Organized productive cooperation. Society simulation offers valuable insights into innovative collaboration patterns, which have the potential to enhance real-world management strategies. Research has demonstrated that within simulated societies, the integration of diverse experts introduces a multifaceted dimension of individual intelligence [108; 447]. When dealing with complex tasks, such as software development or consulting, the presence of agents with various backgrounds, abilities, and experiences facilitates creative problem-solving [109; 410]. Furthermore, diversity functions as a system of checks and balances, effectively preventing and rectifying errors through interaction and ultimately improving adaptability to various tasks. Through numerous iterations of interaction and debate among agents, individual errors like hallucination or degeneration of thought (DoT) are corrected by the group [112]. Efficient communication also plays a pivotal role in such large and complex collaborative groups; for example, MetaGPT [405] formulates communication styles with reference to standardized operating procedures (SOPs), validating the effectiveness of empirical methods. Park et al. [22] observed agents working together to organize a Valentine's Day party through spontaneous communication in a simulated town.

Propagation in social networks. Because simulated social systems can model what might happen in the real world, they can serve as a reference for predicting social processes.
Unlike traditional empirical approaches, which rely heavily on time-series data and holistic modeling [553; 554], agent-based simulations offer a unique advantage by providing more interpretable and endogenous perspectives for researchers. Here we focus on their application to modeling propagation in social networks. The first crucial aspect to be explored is the development of interpersonal relationships in simulated societies: for instance, agents who are not initially connected as friends can establish connections through intermediaries [22]. Once a network of relationships is established, attention shifts to the dissemination of information within this social network, along with the attitudes and emotions associated with it. S3 [518] proposes a user-demographic inference module for capturing both the number of people aware of a particular message and the collective sentiment prevailing among the crowd. The same approach extends to modeling cultural transmission [555] and the spread of infectious diseases [520].
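The sketch below is a toy model of message propagation over an agent friendship graph. In an LLM-based simulation, each agent's decision to share would come from querying its LLM in context; here that decision is replaced by a simple probability threshold, purely as an illustrative assumption.

```python
# A toy propagation model: at each step, aware agents may pass the message
# to their neighbors; we track how many agents are aware over time.

import random

def propagate(network: dict[str, list[str]], seeds: set[str],
              steps: int = 5, p_share: float = 0.3) -> list[int]:
    aware = set(seeds)
    counts = [len(aware)]
    for _ in range(steps):
        newly_aware = {
            neighbor
            for agent in aware
            for neighbor in network.get(agent, [])
            if neighbor not in aware and random.random() < p_share
        }
        aware |= newly_aware
        counts.append(len(aware))   # awareness curve over time
    return counts

# Example: a small illustrative friendship graph seeded at agent "a".
graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
print(propagate(graph, seeds={"a"}))
```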
By employing LLM-based agents to model individual behaviors, implementing various intervention strategies, and monitoring population changes over time, these simulations empower researchers to gain deeper insights into the intricate processes that underlie various social phenomena of propagation.

Ethical decision-making and game theory. Simulated societies offer a dynamic platform for investigating intricate decision-making processes, including decisions influenced by ethical and moral principles. Taking the Werewolf game [499; 556] and murder mystery games [557] as examples, researchers explore the capabilities of LLM-based agents when confronted with challenges of deceit, trust, and incomplete information. These complex decision-making scenarios also intersect with game theory [558], where we frequently encounter moral dilemmas pitting individual against collective interests, such as Nash equilibria. Through the modeling of diverse scenarios, researchers acquire valuable insights into how agents prioritize values like honesty, cooperation, and fairness in their actions. In addition, agent simulations not only provide an understanding of existing moral values but also contribute to the development of philosophy by serving as a basis for understanding how these values evolve over time. Ultimately, these insights contribute to the refinement of LLM-based agents, ensuring their alignment with human values and ethical standards [27].

Policy formulation and improvement. The emergence of LLM-based agents has profoundly transformed our approach to studying and comprehending intricate social systems. Despite the interesting facets mentioned above, however, numerous areas remain unexplored, underscoring the potential for investigating diverse phenomena. One of the most promising avenues of investigation in simulated societies involves exploring various economic and political states and their impacts on societal dynamics [559]. Researchers can simulate a wide array of economic and political systems by configuring agents with differing economic preferences or political ideologies. Such in-depth analysis can provide valuable insights for policymakers seeking to foster prosperity and promote societal well-being. As concerns about environmental sustainability grow, we can also simulate scenarios involving resource extraction, pollution, conservation efforts, and policy interventions [560]. These findings can assist in making informed decisions, foreseeing potential repercussions, and formulating policies that maximize positive outcomes while minimizing unintended adverse effects.

# 5.3.3 Ethical and Social Risks in Agent Society

Simulated societies powered by LLM-based agents offer significant inspiration, ranging from industrial engineering to scientific research. However, these simulations also bring a myriad of ethical and social risks that need to be carefully considered and addressed [561].

Unexpected social harm.
Simulated societies carry the risk of generating unexpected social phenomena that may cause considerable public outcry and social harm. These phenomena range from individual-level issues like discrimination, isolation, and bullying to broader concerns such as oppressive slavery and antagonism [562; 563]. Malicious actors may manipulate these simulations for unethical social experiments whose consequences reach beyond the virtual world into reality. Creating these simulated societies is akin to opening Pandora's box, necessitating rigorous ethical guidelines and oversight during their development and use [561]. Otherwise, even minor design or programming errors in these societies can result in unfavorable consequences, ranging from psychological discomfort to physical injury.

Stereotypes and prejudice. Stereotyping and bias pose a long-standing challenge in language modeling, a large part of which lies in the training data [564; 565]. The vast amount of text obtained from the Internet reflects, and sometimes even amplifies, real-world social biases, such as those of gender, religion, and sexuality [566]. Although LLMs have been aligned with human values to mitigate biased outputs, the models still struggle to portray minority groups well due to the long-tail effect of the training data [567; 568; 569]. Consequently, social science research built on LLM-based agents may adopt an overly one-sided focus, as the simulated behaviors of marginalized populations tend to conform to prevailing assumptions [570]. Researchers have started addressing this concern by diversifying training data and adjusting LLMs [571; 572], but there is still a long way to go.
Privacy and security. Given that humans can be members of the agent society, the exchange of private information between users and LLM-based agents raises significant privacy and security concerns [573]. Users might inadvertently disclose sensitive personal information during their interactions, which will be retained in the agent's memory for extended periods [170]. Such situations could lead to unauthorized surveillance, data breaches, and the misuse of personal information, particularly when individuals with malicious intent are involved [574]. To address these risks effectively, it is essential to implement stringent data protection measures, such as differential privacy protocols, regular data purges, and user consent mechanisms [575; 576].

Over-reliance and addictiveness. Another concern in simulated societies is the possibility of users developing excessive emotional attachments to the agents. Despite being aware that these agents are computational entities, users may anthropomorphize them or attach human emotions to them [22; 577].
A notable example is "Sydney", an LLM-powered chatbot developed by Microsoft as part of its Bing search engine. Some users reported unexpected emotional connections with "Sydney" [578], while others expressed dismay when Microsoft cut back its personality, which even resulted in a petition called "FreeSydney"⁵. Hence, to reduce the risk of addiction, it is crucial to emphasize that agents should not be considered substitutes for genuine human connection. Furthermore, it is vital to furnish users with guidance and education on healthy boundaries in their interactions with simulated agents.

# 6 Discussion

# 6.1 Mutual Benefits between LLM Research and Agent Research

With the recent advancement of LLMs, research at the intersection of LLMs and agents has progressed rapidly, fueling the development of both fields. Here, we look forward to some of the benefits and development opportunities that LLM research and agent research provide to each other.

LLM research → agent research.
As mentioned before, AI agents need to be able to perceive the environment, make decisions, and execute appropriate actions [4; 9]. Among these critical steps, understanding the content input to the agent, reasoning, planning, making accurate decisions, and translating them into executable atomic action sequences to achieve the ultimate goal is paramount. Many current endeavors utilize LLMs as the cognitive core of AI agents, and the evolution of these models provides quality assurance for accomplishing this step [22; 114; 115; 410]. With their robust capabilities in language and intent comprehension, reasoning, memory, and even empathy, large language models can excel in decision-making and planning, as demonstrated before. Coupled with pre-trained knowledge, they can create coherent action sequences that can be executed effectively [183; 258; 355]. Additionally, through the mechanism of reflection [169; 178], these language-based models can continuously adjust decisions and optimize execution sequences based on feedback from the current environment, offering a more robust and interpretable controller. With just a task description or demonstration, they can effectively handle previously unseen tasks [24; 106; 264]. Moreover, LLMs can adapt to various languages, cultures, and domains, making them versatile and reducing the need for complex training processes and data collection [31; 132].

In brief, LLMs provide a remarkably powerful foundation model for agent research, opening up numerous novel opportunities when integrated into agent-related studies. For instance, we can explore how to integrate LLMs' efficient decision-making capabilities into the traditional decision frameworks of agents, making it easier to apply agents in domains that demand greater expertise and were previously dominated by human experts, such as legal consulting and medical assistance [408; 410]. We can also investigate leveraging LLMs' planning and reflective abilities to discover more optimal action sequences. Agent research is no longer confined to simplistic simulated environments; it can now expand into more intricate real-world settings, such as path planning for robotic arms or the interaction of an embodied intelligent machine with the tangible world. Furthermore, when facing new tasks, the training paradigm for agents becomes more streamlined and efficient: agents can directly adapt to demonstrations provided in prompts, which are constructed by generating representative trajectories.
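As a concrete illustration of the reflection mechanism described above, the sketch below retries a task while accumulating verbal lessons from environment feedback, in the spirit of Reflexion-style agents [169]. The `query_llm` helper and the environment's `execute` interface are illustrative assumptions.

```python
# A hypothetical reflection loop: act, observe feedback, distill a lesson,
# and retry with the accumulated lessons in the prompt.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

def solve_with_reflection(task: str, env, max_attempts: int = 3) -> str | None:
    reflections: list[str] = []
    for _ in range(max_attempts):
        prompt = f"Task: {task}\n"
        if reflections:
            prompt += ("Lessons from earlier failed attempts:\n- "
                       + "\n- ".join(reflections) + "\n")
        prompt += "Produce a step-by-step plan as the final answer."
        plan = query_llm(prompt)
        success, feedback = env.execute(plan)   # assumed environment interface
        if success:
            return plan
        # Verbal reflection: turn raw feedback into a reusable lesson.
        reflections.append(query_llm(
            f"The plan failed with feedback: {feedback}\n"
            f"State in one sentence what to do differently next time."))
    return None
```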