Leveraging Large Language Models for Pre-trained Recommender Systems
Contexts Are More Important Than the Number of Parameters. In EMNLP. Zhao, H.; Wang, K.; Yu, M.; and Mei, H. 2023. Explicit Planning Helps Language Models in Logical Reasoning. arXiv preprint. Zhou, K.; Wang, H.; Zhao, W. X.; Zhu, Y.; Wang, S.; Zhang, F.; Wang, Z.; and Wen, J.-R. 2020. S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 1893–1902.
# Recommendations and LLM

Motivation

Compared with recommendation models based on large language models (LLMs), conventional recommendation models (Hidasi et al. 2015; Tang and Wang 2018; Kang and McAuley 2018; Sun et al. 2019; Geng et al. 2022) trained from scratch using architectures like the Transformer (Vaswani et al. 2017), BERT (Devlin et al. 2018), RNNs (Schuster and Paliwal 1997), and CNNs (Krizhevsky, Sutskever, and Hinton 2012) have several key limitations. First, they lack the deep understanding of context and semantics that comes from pretraining a large model on diverse corpora. As a result, they struggle to truly comprehend user preferences and behavioral sequences. Second, they have minimal ability to generate novel, high-quality recommendations since they are not optimized for free-form text generation. LLMs, in contrast, can produce human-like recommendations by leveraging their generative capabilities. Third, conventional models have difficulty effectively leveraging multiple data modalities such as text, images, and audio. LLMs are adept at multimodal processing due to pretraining objectives that learn connections between modalities. Finally, LLMs can seamlessly adapt to new downstream recommendation tasks through simple fine-tuning, whereas conventional models require extensive retraining. For example, BERT4Rec (Sun et al. 2019) employs deep bidirectional self-attention to model user behavior sequences. It is trained solely on recommendation data without a general knowledge corpus, resulting in a limited understanding of behavior sequence data and an inability to better empower downstream tasks. In summary, recommendation models based on pretrained LLMs are more contextual, creative, versatile, and adaptable than conventional models trained from scratch.
Current Development

Although the application of LLMs like ChatGPT in recommendation has not yet been widely explored, some novel investigations have emerged recently that show their promising potential in this domain.
There are mainly three categories. (1) LLM as a recommendation system. Unlike traditional recommendation methods, these approaches do not train a new model, relying only on prompting the LLM (Liu et al. 2023a; Gao et al. 2023; Dai et al. 2023; Chen 2023) or slight fine-tuning (Zhang et al. 2023; Kang et al. 2023; Bao et al. 2023) to convert recommendation tasks into natural language tasks. They typically design a set of prompts covering recommendation scenarios, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization, and they explore few-shot prompting to inject interaction information that reflects users' potential interests, helping the LLM better understand user needs. (2) LLM as supplementary information via embeddings or tokens. This modeling paradigm (Wu et al. 2021; Qiu et al. 2021; Yao et al. 2022; Muhamed et al. 2021; Xiao et al. 2022) views the language model as a feature extractor: the features of items and users are fed into the LLM, which outputs corresponding embeddings. A traditional RS model can then utilize these knowledge-aware embeddings for various recommendation tasks. A related approach (Liu et al. 2023b; Wang et al. 2022, 2023) generates tokens based on the input items' and users' features; the generated tokens capture potential preferences through semantic mining and can be integrated into the decision-making process of a recommendation system.
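As an illustration of the first paradigm, a minimal sketch of how a sequential-recommendation task might be cast as a natural-language prompt. The template and item names are our own assumptions, not a prompt from any cited paper:

```python
def build_next_item_prompt(history, candidates, k=5):
    """Cast next-item recommendation as a natural-language task.

    `history` is the user's interaction sequence (context that exposes the
    user's potential interests to the LLM); `candidates` is the pool to rank.
    Both the template and the item names are illustrative assumptions.
    """
    return (
        f"A user interacted with these items in order: {', '.join(history)}. "
        f"From the candidates [{', '.join(candidates)}], list the top {k} "
        f"items the user is most likely to interact with next, "
        f"in descending order of probability."
    )

prompt = build_next_item_prompt(
    ["beauty products", "home daily necessities", "snacks", "large-sized clothing"],
    ["shoes", "beauty products", "electronics"],
    k=3,
)
print(prompt)
```

The resulting string is sent to the LLM as-is; rating prediction, explanation generation, and review summarization follow the same pattern with different templates.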
(3) LLM as agent. As an agent, the large model assists in scheduling the entire recommendation model for recommendations and is responsible for pipeline control. Specifically, these models (Andreas 2022; Bao et al. 2023; Hou et al. 2023; Lin et al. 2023; Gao et al. 2023; Friedman et al. 2023) help adapt the LLM to the recommendation domain and coordinate user data collection, feature engineering, feature encoding, and scoring/ranking functions.

Challenges

Rather than superficially leveraging large language models, our purpose is to build on the large language model, maximizing the preservation of its knowledge and logical reasoning abilities to ensure inference over behavioral sequences and fluent generation for downstream sub-tasks, while also achieving the recommendation function by learning user profile features and user behavior sequences. The crucial aspect of harnessing the power of language models to enhance recommendation quality is utilizing their high-quality representations of textual features and their extensive coverage of external knowledge to establish correlations between items and users (Wu et al. 2023). Therefore, we need to preserve the tokenization, parameters, and architecture of the large language model as much as possible. For example, the Pretrain, Personalized Prompt, and Predict Paradigm (P5) (Geng et al. 2022) is established upon a basic encoder–decoder framework with Transformer blocks for both the encoder and decoder. Although it is built on T5 (Raffel et al. 2020), it modifies the structure of the model by adding additional positional encodings and whole-word embeddings, which partially destroys the original knowledge in the language model. Notably, there is also a difference in data format: large language models are trained on vast amounts of logically structured text, with consistent reasoning, coherent thought processes, and proper grammar.
In contrast, recommendation systems analyze numerical user features, fixed item entities, and incoherent behavioral sequences. Additionally, the purpose of training data for large language models is to teach the model how to understand language and generate new text similar to the training data.
Conversely, the purpose of user behavioral sequence data in recommendation systems is to gain a deeper understanding of user preferences, behavior sequences, and the relationships between them, so as to provide personalized recommendations. Therefore, building a recommendation system on top of a large language model that retains the LLM's knowledge and logical reasoning abilities, while also achieving the recommendation function by learning user profile features and user behavior sequences, poses significant challenges.

Baselines in Benchmark Experiments

To showcase our competence in a wide range of recommendation-related tasks, we employ representative approaches for different tasks, including Rating Prediction, Direct Recommendation, Sequential Recommendation, Explanation Generation, and Review Summarization, as previously used by (Geng et al. 2022). A summary of the baseline methods for the five task families is provided in Table 10.

Rating Prediction. This task incorporates user-item rating data as part of the training set, where item ratings are represented numerically. The model is asked questions with prompts and outputs the corresponding rating values. The baselines for this task are MF (Koren, Bell, and Volinsky 2009) and MLP (Cheng et al. 2016).

Direct Recommendation. For direct recommendation, we employ the classic algorithms BPR-MF (Rendle et al. 2009), BPR-MLP (Cheng et al. 2016), and SimpleX (Mao et al. 2021) as baselines. They showcase the effectiveness of direct recommendation tasks when utilizing non-semantic information as features, allowing us to gain a more comprehensive understanding of the potential of recommendations given by LLM-based models.

Sequential Recommendation. The sequential recommendation task uses the user's historical interaction sequence as input to predict the next item. We compare our proposed approaches with representative baselines in the field.
Among them, some models aim to capture the Markov chain of user interactions via neural architectures such as convolutional neural networks, recurrent neural networks, and attention-based modules. Caser (Tang and Wang 2018) employs convolutional neural networks to model user interests. HGN (Ma, Kang, and Liu 2019) adopts hierarchical gating networks to capture user behaviors from both long- and short-term perspectives.
GRU4Rec (Hidasi et al. 2016) utilizes recurrent neural networks to model the user click history sequence. SASRec (Kang and McAuley 2018) and FDSA (Zhang et al. 2019) use self-attention modules to model feature transition patterns for sequential recommendation, and the former combines RNN-based approaches to retain the sequential properties of items. BERT4Rec (Sun et al. 2019) adopts BERT-style masked language modeling to learn relations among items from the perspective of bidirectional representations in recommendation. It began to use methods from natural language processing, but BERT did not have a strong semantic understanding capacity at that time. S3-Rec (Zhou et al. 2020) leverages self-supervised objectives to enhance the discovery of correlations among different items and their attributes.

Explanation Generation. We evaluate the task of explanation generation by comparing the performance of several baseline models. Attn2Seq (Dong et al. 2017) and NRT (Li et al. 2017) utilize neural networks to encode attributes of users and items into vectors, then invoke an attention mechanism or GRU (Cho et al. 2014) to generate reviews conditioned on the attribute vector. PETER (Li, Zhang, and Chen 2021) uses a Transformer architecture and designs a
Table 10: The summary of baseline methods for five different task families.

Rating Prediction: MF (Koren, Bell, and Volinsky 2009); MLP (Cheng et al. 2016)
Direct Recommendation: BPR-MF (Rendle et al. 2009); BPR-MLP (Cheng et al. 2016); SimpleX (Mao et al. 2021)
Sequential Recommendation: Caser (Tang and Wang 2018); HGN (Ma, Kang, and Liu 2019); GRU4Rec (Hidasi et al. 2016); BERT4Rec (Sun et al. 2019); FDSA (Zhang et al. 2019); SASRec (Kang and McAuley 2018); S3-Rec (Zhou et al. 2020)
Explanation Generation: Attn2Seq (Dong et al. 2017); NRT (Li et al. 2017); PETER (Li, Zhang, and Chen 2021); PETER+
Review Summarization: T0 (Sanh et al. 2022); GPT-2 (Radford et al. 2019)

Figure 3: The HR@1 with different rank r of LoRA. (The plot shows HR@1 of 0.3709, 0.4276, 0.4965, 0.4989, and 0.462 at ranks 2, 4, 8, 16, and 32, respectively.)

modified attention mask. The variant PETER+ takes a hint feature word to augment the process of generating explanations.

Review Summarization. For review summarization, we adopt the pretrained T0 (Sanh et al. 2022) and GPT-2 (Radford et al. 2019) as baselines. The latter model's parameters were obtained from Hugging Face1, a large platform for sharing models, datasets, and applications.
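To make the MF baseline concrete, a minimal matrix-factorization sketch for rating prediction, trained with SGD on observed ratings. The toy data and hyperparameters are our own illustrative choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy user-item rating matrix; 0 marks an unobserved rating.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [0, 2, 5]], dtype=float)
n_users, n_items, k = R.shape[0], R.shape[1], 2

P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors

lr, reg = 0.05, 0.01
obs = [(u, i) for u in range(n_users) for i in range(n_items) if R[u, i] > 0]
for _ in range(200):  # SGD over observed ratings: r_hat(u, i) = P[u] . Q[i]
    for u, i in obs:
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

rmse = np.sqrt(np.mean([(R[u, i] - P[u] @ Q[i]) ** 2 for u, i in obs]))
```

The prompt-based LLM approach replaces this dot-product scorer with a natural-language question, but it is evaluated against exactly this kind of baseline.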
Further Analysis on the Real-World Dataset

In addition to optimizing recommendation performance, it is also important to understand why large language models like ChatGPT and GPT-4 are able to conduct recommendation tasks effectively in the first place. To explore this further, we provide several real-world case studies in Figure 4, where we systematically probe and dissect the reasoning process of these models when making recommendations, using carefully designed prompt-based queries. This analysis sheds light on the strengths and limitations of relying solely on the knowledge and reasoning capabilities embedded in large pre-trained language models for recommendation tasks, and points towards potential areas for improvement.

Our experiments also analyze the impact of the rank r of Low-Rank Adaptation (LoRA) on model performance. We evaluate five rank values (2, 4, 8, 16, and 32) to determine the optimal balance between model capacity and predictive ability. As shown in Figure 3, we find that a rank of 8 provides sufficient learning capacity, with minimal improvement from increasing to 16. This indicates that capturing inter- and intra-entity relationships requires only a small number of additional trainable parameters beyond the base LLM, without the need for substantial model expansion. Rank 8 strikes the right balance, enabling Low-Rank Adaptation to boost performance through targeted parameterization rather than sheer scale. Overall, our results demonstrate that Low-Rank Adaptation offers an efficient approach to entity-aware language modeling.

1https://huggingface.co/

[Figure 4 shows real-world case studies: given a user's known interaction sequence (beauty products, home daily necessities, snacks, large-sized clothing, shoes, ...), the models are asked to predict, in descending order of probability, the top 5 entities the user may interact with next. The ground truth is beauty products. GPT-3.5-turbo ranks beauty products first, reasoning that they appeared four times and in alternation; GPT-4 ranks large-sized clothing first, citing its high interaction frequency toward the end of the sequence.]

Figure 4: The case studies of ChatGPT and GPT-4 for next item recommendation in the real-world dataset.
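The efficiency argument can be made concrete: a rank-r LoRA adapter on a d_out × d_in weight matrix trains only r·(d_out + d_in) extra parameters while the base weight stays frozen. A NumPy sketch; the layer dimensions are illustrative assumptions, not our model's actual configuration:

```python
import numpy as np

def lora_params(d_out, d_in, r):
    """Trainable parameters added by a rank-r adapter: B is d_out x r, A is r x d_in."""
    return r * (d_out + d_in)

def lora_forward(x, W, A, B, alpha):
    """Adapted forward pass: y = x @ (W + (alpha / r) * B @ A).T, with W frozen."""
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

d = 4096  # hidden size of one adapted projection (assumed)
full = d * d                                       # parameters if W were trainable
added = {r: lora_params(d, d, r) for r in (2, 4, 8, 16, 32)}
# At r = 8 the adapter adds 8 * (4096 + 4096) = 65,536 parameters -- under 0.4% of
# the full matrix, consistent with rank 8 sufficing without substantial expansion.
```

Doubling the rank doubles the added parameters, which is why the flat HR@1 curve beyond rank 8 in Figure 3 argues for the smaller adapter.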
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
arXiv:2308.10848v3 [cs.CL] 23 Oct 2023

Preprint

# AGENTVERSE: FACILITATING MULTI-AGENT COLLABORATION AND EXPLORING EMERGENT BEHAVIORS

Weize Chen*, Yusheng Su*, Jingwei Zuo, Cheng Yang†, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu†, Maosong Sun, Jie Zhou
1 Department of Computer Science and Technology, Tsinghua University
2 School of Economics and Management, Tsinghua University
3 School of Computer Science, Beijing University of Posts and Telecommunications
4 Pattern Recognition Center, WeChat AI, Tencent Inc.
[email protected], [email protected]

# ABSTRACT

Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AGENTVERSE that can effectively orchestrate a collaborative group of expert agents as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AGENTVERSE can proficiently deploy multi-agent groups that outperform a single agent. Extensive experiments on text understanding, reasoning, coding, tool utilization, and embodied AI confirm the effectiveness of AGENTVERSE. Moreover, our analysis of agent interactions within AGENTVERSE reveals the emergence of specific collaborative behaviors, contributing to heightened group efficiency. Our code has been released at https://github.com/OpenBMB/AgentVerse/.

# 1 INTRODUCTION

The pursuit of creating intelligent and autonomous agents that can seamlessly assist humans and operate in real-world settings has been a foundational goal in artificial intelligence (Wooldridge & Jennings, 1995; Minsky, 1988; Bubeck et al., 2023).
The recent advance of Large Language Models (LLMs) (OpenAI, 2023a; Anil et al., 2023; Touvron et al., 2023b) has created newfound avenues in this domain. These LLMs, especially GPT-4 (OpenAI, 2023a), are particularly adept at comprehending human intent and executing commands. They have demonstrated remarkable proficiency in domains such as language understanding, vision (OpenAI, 2023b), and coding (Bubeck et al., 2023). By harnessing the power of LLMs, autonomous agents can make more nuanced decisions and perform actions with an unprecedented degree of autonomy (Zhou et al., 2023). Agents like AutoGPT (Richards & et al., 2023), BabyAGI (Nakajima, 2023), and AgentGPT (Reworkd, 2023) are inspiring examples. Furthermore, recent research has endowed autonomous agents with more human-analogous cognitive mechanisms, spanning reflection (Yao et al., 2023b; Shinn et al., 2023), task decomposition (Wei et al., 2022b; Yao et al., 2023a), and tool utilization (Schick et al., 2023b; Qin et al., 2023a;b; Qian et al., 2023b). These advancements edge us closer to realizing the concept of artificial general intelligence (AGI) (Goertzel & Pennachin, 2007; Clune, 2019) that can generalize across a broader range of tasks. However, complex real-world tasks often require cooperation among individuals to achieve better effectiveness. Throughout history, numerous studies have delved into methods for enhancing collaboration among humans to improve work efficiency and effectiveness (Woolley et al., 2010; Fehr & Gächter, 2000). More recently, with the evolution of autonomous agents towards AGI, extensive research conceptualizes the assemblies of agents as a society or group (Li et al., 2023), and focuses on exploring the potential of their cooperation. For example, Park et al. (2023) found emergent
*The first two authors contributed equally. †Corresponding author.

[Figure 1: An illustration of the AGENTVERSE, depicting its four stages: Expert Recruitment, Collaborative Decision-Making, Action Execution (agents acting on the environment and producing a new state), and Evaluation (feedback guiding the next round).]

social behaviors in multi-agent life simulation. Du et al. (2023); Wang et al. (2023b); Zhang et al. (2023a); Qian et al. (2023a); Chan et al. (2023) also underscored the enhanced decision-making of collaborating agents during collaborative problem-solving. However, a limitation in these studies is their narrow focus on specific and limited tasks, leaving the generalizability of their findings uncertain. An additional constraint is their static approach to agent collaboration, where agents' roles and capabilities remain rigid, hindering adaptability. To address this problem, we introduce AGENTVERSE. This general multi-agent framework simulates the problem-solving procedures of human groups, and allows for dynamic adjustment of group members based on current progress. Specifically, AGENTVERSE splits the problem-solving process into four pivotal stages, as shown in Figure 1: (1) Expert Recruitment: Determine and adjust the agent group's composition based on the ongoing problem-solving progression. (2) Collaborative Decision-Making: Engage the selected agents in joint discussions to devise problem-solving strategies. (3) Action Execution: Agents interact with their environment to implement the devised actions. (4) Evaluation: Assess the differences between the current state and desired outcomes. If the current state is unsatisfactory, feedback is given to the next iteration for further refinement. We conduct extensive experiments and case studies in diverse aspects including text understanding, reasoning, coding, tool utilization and embodied AI to show the effectiveness of AGENTVERSE.
Additionally, we highlight the social behaviors that emerge from the multi-agent collaboration, and discuss their advantages and potential risks. In summary, our contributions are:
• Inspired by the collaborative process of a human team, we propose AGENTVERSE as an effective framework for promoting collaboration among multiple agents in problem-solving. We conduct extensive experiments to show that AGENTVERSE effectively improves the agents' understanding, reasoning, coding, and tool-utilizing capabilities, as well as their potential in embodied AI.
• In multi-agent collaboration, especially within tool utilization and Minecraft game playing, agents manifest certain emergent behaviors: (1) volunteer behaviors, characterized by agents offering assistance to peers, thus improving team efficiency; (2) conformity behaviors, where agents adjust their deviated behaviors to align with the common goal under criticism from others; and (3) destructive behaviors, occasionally leading to undesired and detrimental outcomes.

# 2 AGENTVERSE FRAMEWORK

A problem-solving process is a sequence of iterative stages within a human group (Bransford & Stein, 1993). Initially, the group assesses the difference between the current state and the desired goal, dynamically adjusting its composition to enhance collaboration in decision-making, and subsequently
executing well-informed actions. To enhance the effectiveness of an autonomous multi-agent group in achieving its goals, we simulate the problem-solving processes of a human group to propose the AGENTVERSE framework, which is composed of four crucial stages: Expert Recruitment, Collaborative Decision-Making, Action Execution, and Evaluation, as shown in Figure 1. The entire process can be modeled as a Markov decision process (MDP), characterized as a tuple (S, A, T, R, G). This encompasses the autonomous agent and environment state space S, the solution and action space A, the transition function T : S × A →
S, the reward function R, and the goal space G.

2.1 EXPERT RECRUITMENT

The Expert Recruitment stage determines the composition of a multi-agent group, playing an important role in deciding the upper bound of the group's capabilities. Empirical evidence suggests that diversity within human groups introduces varied viewpoints, enhancing the group's performance across different tasks (Woolley et al., 2015; Phillips & O'Reilly, 1998). Parallel findings from recent research suggest that designating specific roles for autonomous agents, similar to recruiting experts to form a group, can augment their efficacy (Li et al., 2023; Salewski et al., 2023; Qian et al., 2023a). Current methodologies for assigning role descriptions to autonomous agents predominantly involve manual assignment, necessitating prior knowledge and understanding of the task. Consequently, their scalability remains ambiguous, especially in the face of diverse and intricate problem contexts. In view of this, AGENTVERSE automates expert recruitment to make agent configuration more scalable.
For a given goal g ∈ G, a particular agent Mr is prompted as the "recruiter", similar to a human resource manager. Instead of relying on pre-defined expert descriptions, Mr dynamically generates a set of expert descriptions based on g. The different agents prompted with these different expert descriptions then form an expert group M = Mr(g) on the given goal g. Notably, the composition of the multi-agent group is dynamically adjusted based on feedback from the evaluation stage (Section 2.4). This allows AGENTVERSE to employ the most suitable group based on the current state to make better decisions in future rounds.

2.2 COLLABORATIVE DECISION-MAKING

This stage engages expert agents in collaborative decision-making. To facilitate effective decision-making, previous research has investigated the impact of different communication structures among agents (Chan et al., 2023; Zhang et al., 2023b; Wu et al., 2023). We focus on two typical communication structures: the horizontal structure and the vertical structure.
Horizontal Structure. In this democratic structure, each agent, denoted as mi ∈ M, shares and refines its decision ami. The group's collective decision A = f({ami}i) ∈ A emerges as an integration of the individual agents' decisions using a function f, which might involve techniques like summarization or ensembling. This structure is especially effective in scenarios like consulting and tool using.

Vertical Structure. Conversely, the vertical structure has a clear division of roles.
An agent, termed the solver m*, proposes an initial decision a*_0. Other agents, as reviewers, provide feedback on this proposal, prompting iterative refinements by the solver until a consensus is reached among reviewers or a set number of iterations is exhausted. The final decision is given as A = a*_k ∈ A, with k indicating the number of refinements. The vertical structure is preferable for tasks like math problem-solving and software development, where only one refined decision is required.

2.3 ACTION EXECUTION

In the decision-making stage, agents collaboratively contribute to a group decision A containing actions that need to be executed in the current environment. Within the action execution stage, agents then execute the collectively-decided actions in the environment. Depending on the implementation, some agents might not perform any execution. As a result of these actions, the state of the environment transitions from s_old to s_new = T(s_old, A).
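The four stages can be summarized as a simple control loop. The following schematic stubs out all agent internals; a real implementation would back each callable with LLM calls, and the function names are our own, not the released AgentVerse API:

```python
def solve(goal, recruit, decide, execute, evaluate, max_rounds=5):
    """One AgentVerse-style episode: recruit -> decide -> execute -> evaluate."""
    state, feedback = "", ""
    for _ in range(max_rounds):
        experts = recruit(goal, feedback)       # Expert Recruitment: M = Mr(g)
        decision = decide(experts, state)       # Collaborative Decision-Making: A
        state = execute(state, decision)        # Action Execution: s_new = T(s_old, A)
        done, feedback = evaluate(state, goal)  # Evaluation: r = R(s_new, g)
        if done:
            break
    return state

# Toy stand-ins: the "environment" is a string; the goal is reaching length 3.
result = solve(
    goal="reach length 3",
    recruit=lambda g, fb: ["writer"],
    decide=lambda experts, s: "x",
    execute=lambda s, a: s + a,
    evaluate=lambda s, g: (len(s) >= 3, "keep going"),
)
print(result)  # prints "xxx"
```

The evaluation feedback flowing back into recruitment is what lets the group's composition adapt between rounds.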
Table 1: The results on different tasks that evaluate the agents' general capabilities.

Task | GPT-3.5-Turbo CoT / Solo / Group | GPT-4 CoT / Solo / Group
Conversation (FED) | 81.6 / 81.1 / 85.1 | 95.4 / 95.8 / 96.8
Creative Writing (Commongen-Challenge) | 76.6 / 93.6 / 92.3 | 95.9 / 99.0 / 99.1
Mathematical Reasoning (MGSM) | 80.4 / 82.4 / 80.8 | 95.2 / 96.0 / 95.2
Logical Reasoning (Logic Grid Puzzles) | - / - / - | 59.5 / 64.0 / 66.5

2.4 EVALUATION

The evaluation stage is vital for AGENTVERSE, guiding improvements for subsequent rounds. At this stage, the feedback mechanism R assesses the difference between the current state s_new and the desired goal g ∈ G. It then offers verbal feedback r = R(s_new, g), detailing areas of shortcoming and suggesting ways to enhance performance. R can either be defined by humans (in a human-in-the-loop (Amershi et al., 2014) setting) or by an agent for automatic feedback, depending on the implementation. If the goal g remains unmet, the feedback r returns to the initial expert recruitment stage. In the next round, the expert recruitment stage considers both the feedback r and the goal g to adjust the group's composition, aiming to evolve a more effective multi-agent group according to the current progress.

# 3 EXPERIMENTS

To validate the superiority of AGENTVERSE in facilitating agent collaboration over standalone agents, we design four experimental tasks. Each task is designed to assess distinct aspects of an agent group: general understanding and reasoning capabilities, coding capabilities, tool utilization capabilities, and their potential in embodied AI. Our findings, detailed in this section, consistently highlight the superior performance of AGENTVERSE across these varied and multi-faceted tasks. Of particular interest is the emergence of unique collaborative behaviors within agent groups. While this section focuses on the advantages of multi-agent setups, a deeper exploration of these emergent behaviors is presented in Section 4.

Setups.
In all the experiments, we evaluate the performance of agents driven by GPT-3.5-Turbo-0613 and GPT-4-0613 across various tasks. All the experiments are done in a zero-shot setting. For all the quantitative experiments in this section, we compare three settings: (1) CoT: the chain-of-thought (CoT) agent; (2) Solo: AGENTVERSE with a single agent in the decision-making stage; compared with CoT, Solo additionally incorporates the expert recruitment, action execution, and evaluation modules; (3) Group: AGENTVERSE with multiple agents collaborating during decision-making. More detailed experimental setups for each task can be found in Appendix A.

3.1 GENERAL UNDERSTANDING AND REASONING CAPABILITIES

To assess the agents' general understanding and reasoning capabilities, we use four datasets: FED (Mehri & Eskénazi, 2020), Commongen Challenge (Madaan et al., 2023), MGSM (Shi et al., 2023), and Logic Grid Puzzles (Srivastava et al., 2022). Detailed descriptions of these datasets and metrics can be found in Appendix A. The first two datasets measure the agents' text understanding and creative writing abilities, while the latter two focus on the agents' reasoning abilities, including mathematical and logical reasoning.

Experimental Results. The results in Table 1 show that agents assembled by AGENTVERSE (Solo and Group setups) consistently outperform the standalone CoT agent, irrespective of the LLM used. In our preliminary evaluations, GPT-3.5-Turbo struggled to handle the Logic Grid Puzzles dataset accurately; we therefore omit GPT-3.5-Turbo's results on logical reasoning. Interestingly, for GPT-3.5-Turbo, the Group setup underperforms the Solo setup in two of three tasks, indicating that the discussion in decision-making might adversely impact performance for agents based on GPT-3.5-Turbo in certain contexts. Delving deeper into this observation, one predominant factor surfaces: the susceptibility to erroneous feedback.
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors

Preprint

[Figure 2: The illustration of an example process of consulting. The task is to give some suggestions on building a compressed hydrogen storage station in Ohio. The figure contrasts consulting with the Group setup (recruited experts such as a chemical engineer, a civil engineer, and an environmental scientist in round 0, then a chemical engineer, an economist, and a lawyer in round 1) against the Solo setup across two decision-making rounds, together with the evaluator's scores for comprehensiveness, detailedness, feasibility, and novelty.]

A recurring pattern observed in the Group setup is that sometimes Agent A, despite starting with a correct answer, would be easily swayed by Agent B's incorrect feedback. Roughly 10% of errors in the MGSM dataset can be traced to this dynamic. Notably, this phenomenon is absent in GPT-4-based agents, highlighting the importance of agents' resilience to conflicting information during collaborative discussions. Overall, the results show that AGENTVERSE effectively enhances the general understanding and reasoning capabilities of agents. Moreover, agents driven by advanced LLMs demonstrate better performance when engaged in collaborative decision-making. The nuanced challenges observed with GPT-3.5-Turbo indicate the need to improve LLMs' robustness to incorrect information so that collaboration can amplify individual strengths without introducing new vulnerabilities.
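The iterative pipeline evaluated above — expert recruitment, (multi-agent) decision-making, action execution, and evaluation — can be sketched roughly as follows. All function names and signatures here are illustrative assumptions for exposition, not the actual AGENTVERSE API:

```python
# Rough sketch of the recruit -> decide -> execute -> evaluate loop described
# in the text. Every name here is illustrative, not AgentVerse's real API.
def agentverse_rounds(goal, recruit, decide, execute, evaluate, max_rounds=3):
    feedback = None
    result = None
    for _ in range(max_rounds):
        experts = recruit(goal, feedback)        # expert recruitment stage
        plan = decide(experts, goal, feedback)   # (multi-agent) decision-making
        result = execute(plan)                   # action execution stage
        done, feedback = evaluate(goal, result)  # evaluation stage
        if done:                                 # goal satisfied: stop iterating
            break
    return result
```

Under this sketch, a Solo run passes a single agent into `decide`, while a Group run lets several recruited agents discuss before committing to a plan; the evaluator's verbal feedback flows back into the next round either way.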
Case Study: Consulting. In Table 1, the Group setup does not show a clear advantage over the Solo setup for both LLMs. This is mainly because the evaluation metrics for each benchmark have a limited scope. In the following case, we highlight the benefits of the group formed by GPT-4 agents by focusing on a consulting scenario where the group acts as a consultancy, responding to inquiries as shown in Figure 2. The goal is to offer suggestions for a hydrogen storage station in Ohio. At first glance, the Solo setup seems to cover a broader scope than the Group setup at round 0. However, the Group setup offers more depth thanks to the recruited experts. For instance, while the Solo setup might suggest something basic like "Find an optimal location", the Group setup provides detailed advice, such as "evaluating site soil properties to ensure storage tank stability." By the second round, different experts offer new insights in the Group setup. As a result, the Group setup not only covers a broader range (highlighted in red in the referenced figure) but also gives more detailed advice. For a detailed look at agent interactions, see Appendix F.

3.2 CODING CAPABILITIES

In this section, we first assess the agents' coding capabilities using the Humaneval code completion dataset. Next, through a case study, we illustrate how collaboration among multiple agents improves output quality, highlighting its superiority over software development by just one agent.

Experimental Results. In Table 2, we see a clear performance improvement moving from CoT to Solo and then to the Group setup. This trend is especially pronounced with GPT-4, which sees a performance boost from 83.5 to 89.0. These results highlight AGENTVERSE's effectiveness in managing a skilled group of agents for coding.
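The pass@1 numbers above follow the standard unbiased pass@k estimator introduced with the HumanEval benchmark. A minimal sketch of that computation (our own illustration, not the paper's evaluation harness):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), given n samples with c passing."""
    if n - c < k:
        return 1.0  # too few failing samples to fill a draw of size k
    return 1.0 - comb(n - c, k) / comb(n, k)

# Average over problems; each tuple is (samples drawn, samples passing the tests).
per_problem = [(10, 7), (10, 0), (10, 10)]
pass_at_1 = sum(pass_at_k(n, c, 1) for n, c in per_problem) / len(per_problem)
```

With k = 1 and one sample per problem this reduces to the fraction of problems whose single completion passes, which is how the Table 2 scores should be read.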
For GPT-3.5-Turbo, although we have observed a drop

[Figure 3: The illustration of an example process of developing a calculator with GUI in Python. The figure compares software development with the Group setup (an experienced programmer, a software developer, a UI/UX designer, and a software tester) against the Solo setup across two rounds, together with the evaluator's scores for completeness, functionality, readability, and robustness.]

Table 2: The pass@1 on Humaneval.

Setting   GPT-3.5-Turbo   GPT-4
CoT       73.8            83.5
Solo      74.4            87.2
Group     75.6            89.0

Case Study: Software Development. Our examination of the code generated for Humaneval by the Group setup in AGENTVERSE reveals benefits beyond mere correctness. The agent group refines solutions, yielding more efficient, robust, and secure algorithms that are not captured by the simple pass@1 metric. To better elucidate these advantages, we present a case study with GPT-4 on software development, a domain requiring multifaceted collaboration and refinement. We present an example where AGENTVERSE creates a Python-based calculator GUI by bringing together diverse expert agents. A concise overview of the development process is visualized in Figure 3. Comparing the applications from the Group and Solo setups reveals notable distinctions. Both achieve core functionality, but the Group-created calculator boasts a user-friendly interface with features like color distinctions and keyboard input. This improved design resulted from the diverse feedback of the multi-agent group. Suggestions from the UI designer and the evaluators enhance the user experience, while the software tester enhances code robustness. A deeper examination of the code confirms that the multi-agent group's output excels in exception handling compared to that of a solo agent. The code generated by the two setups and the complete progress can be seen in Appendix F.

3.3 TOOL UTILIZATION CAPABILITIES

The capability of LLMs to use real-world tools has been emphasized in many recent studies (Schick et al., 2023a; Qin et al., 2023a). By equipping LLMs with different tools such as a calculator, a web browser, and a code interpreter, their capabilities can be significantly improved. In this section, we demonstrate that AGENTVERSE enables a group of agents to address intricate and multi-faceted tasks that require interaction with multiple tools, thereby enhancing work efficiency.

Experimental Results.
We design a set of 10 intricate tasks, each requiring the use of at least two distinct tools to accomplish. By providing agents access to several tools, including Bing search API, a web browser, a code interpreter, and task-related APIs, we explore how AGENTVERSE facilitates agent collaboration, dissects the overarching task into manageable sub-tasks, and effectively deploys the available tools to address realistic user queries.
Of the 10 challenging tasks provided, an agent group orchestrated by AGENTVERSE adeptly accomplishes 9 tasks. On the other hand, a standalone ReAct agent (Yao et al., 2023b), a prevalent agent design for tool use, can only fulfill 3 tasks. In 6 out of the 7 tasks where the single ReAct agent fails, the agent does not adhere to one or more criteria detailed in the task and exits earlier than expected. We refer interested readers to Appendix B for a comprehensive comparison of the solutions given by AGENTVERSE and a single ReAct agent.
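For context, a single ReAct-style agent interleaves reasoning with tool calls in one loop. A minimal sketch of such a loop follows; the `llm` callable, the tool registry, and the `Action:`/`Final Answer:` step format are placeholder assumptions, not the exact prompt format used in these experiments:

```python
from typing import Callable, Dict

def react_agent(query: str,
                llm: Callable[[str], str],
                tools: Dict[str, Callable[[str], str]],
                max_steps: int = 8) -> str:
    """Minimal ReAct loop: think, call a tool, observe, repeat until an answer."""
    transcript = f"Question: {query}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # e.g. "Action: search[24-point game]" or "Final Answer: ..."
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step[len("Final Answer:"):].strip()
        if step.startswith("Action:"):
            name, _, arg = step[len("Action:"):].strip().partition("[")
            observation = tools[name](arg.rstrip("]"))  # run the named tool
            transcript += f"Observation: {observation}\n"
    return transcript  # give up after max_steps
```

AGENTVERSE, by contrast, distributes sub-tasks across several such tool-using agents and adds an explicit evaluation round, which is what lets it recover when one agent exits early or skips a criterion.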
[Figure 4: An example process of multi-agent solving a user query with three different tools (Bing search API, web browser, code interpreter). The query asks what the "24-Point Game" is, whether it has a code-based solution, and which similar games can test models' mathematical reasoning abilities. During decision-making, agents Bella and Charlie split the task into sub-tasks: researching the game and identifying similar games, and developing and testing Python code for solving it.]

Case Study: Solving 24-Point Game and Providing Similar Games. Here, we present an example in Figure 4, illustrating how AGENTVERSE searches for the rules of the 24-point game, implements the code along with test cases, and explores similar games. The task is multifaceted; thus, during the decision-making stage, the agents split the task into two sub-tasks in their discussion, each assigned to a certain agent. While agent Charlie overlooks the sub-task of identifying games similar to the 24-point game in round 0, feedback from the evaluation module rectifies this in the subsequent iteration.
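A code-based solution of the kind the agents produce might look like the following brute-force solver. This is our own sketch with test cases, not the agents' actual output:

```python
from typing import List, Optional

def solve_24(nums: List[int], target: float = 24, eps: float = 1e-6) -> Optional[str]:
    """Brute-force 24-point solver: returns one valid expression string, or None."""
    def apply(a: float, b: float, sym: str) -> float:
        if sym == "+":
            return a + b
        if sym == "-":
            return a - b
        if sym == "*":
            return a * b
        return a / b  # caller guarantees b is nonzero

    def search(vals, exprs):
        # Repeatedly combine any ordered pair of values with one operator,
        # shrinking the list until a single value remains.
        if len(vals) == 1:
            return exprs[0] if abs(vals[0] - target) < eps else None
        n = len(vals)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                rest_v = [vals[k] for k in range(n) if k not in (i, j)]
                rest_e = [exprs[k] for k in range(n) if k not in (i, j)]
                for sym in "+-*/":
                    if sym == "/" and abs(vals[j]) < eps:
                        continue  # avoid division by zero
                    hit = search(rest_v + [apply(vals[i], vals[j], sym)],
                                 rest_e + [f"({exprs[i]}{sym}{exprs[j]})"])
                    if hit:
                        return hit
        return None

    return search([float(x) for x in nums], [str(x) for x in nums])
```

Test cases of the sort the agents append — e.g. checking that `solve_24([4, 7, 8, 8])` yields an expression evaluating to 24, while `solve_24([1, 1, 1, 1])` returns `None` — exercise both the solvable and unsolvable branches.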
Ultimately, the agent group provides not only the 24-point game rules and a solving code with test cases, but also a summary of a similar game. In contrast, a standalone ReAct agent merely provides the game's definition along with a code and omits the query for similar games.

# 4 EMERGENT BEHAVIORS WITHIN A MULTI-AGENT GROUP

Figure 5: An illustration of the collaborative process involving three agents crafting a bookshelf. The process begins with the decision-making and breaking down the goal into several sub-tasks, with each agent receiving an assignment. The execution results and the current environmental state are then passed to the evaluator. This process repeats until the goal of crafting a bookshelf is achieved.

In the preceding section, the efficacy of AGENTVERSE has been illustrated across a spectrum of tasks that necessitate multi-agent decision-making, especially for GPT-4-based agents. Our endeavor, however, extends beyond mere improvements on benchmark datasets. We delve deeper into emergent collaborative behaviors exhibited by agents within realistic, embodied AI contexts. Minecraft, a sandbox game, serves as an ideal platform for such exploration due to its intricate parallels with real-world dynamics. In the game, agents must not just execute tasks but also plan, coordinate, and adjust to evolving situations. We task agents with collaboratively crafting a variety of items, spanning from paper and paintings to books and bookshelves. A succinct figure showcasing three agents adeptly crafting a bookshelf can be viewed in Figure 5.
An elaborate visualization is provided in Appendix F, and details of the setups can be found in Appendix C.
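The goal decomposition the agents perform — breaking "craft a bookshelf" into gathering sub-tasks for raw materials — can be illustrated with a simple recipe expansion. The recipe table below is a simplified illustration (it ignores per-craft yields), not the agents' actual planner:

```python
# Illustrative, simplified Minecraft-style recipe table (assumption: per-craft
# yields are ignored, so quantities are upper bounds).
RECIPES = {
    "bookshelf": {"plank": 6, "book": 3},
    "book": {"paper": 3, "leather": 1},
    "paper": {"sugar_cane": 3},
}

def raw_materials(item: str, count: int = 1) -> dict:
    """Expand a crafting goal into the raw materials it bottoms out in."""
    needs: dict = {}
    def expand(it: str, n: int) -> None:
        if it not in RECIPES:                    # a raw material: record it
            needs[it] = needs.get(it, 0) + n
            return
        for ingredient, qty in RECIPES[it].items():
            expand(ingredient, qty * n)          # recurse into sub-recipes
    expand(item, count)
    return needs

# Each resulting raw material then becomes a gathering sub-task that the
# decision-making stage can assign to an individual agent.
```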
By examining the decision-making process, we identify several emergent behaviors and categorize them into three aspects: volunteer, conformity, and destructive behaviors. Note that these behaviors do not appear only in Minecraft; they also arise in previous experiments such as tool utilization.

[Figure 6: Examples of the properties that emerge in agent interactions in Minecraft. The figure shows dialogue excerpts among the agents Alice, Bob, and Charlie illustrating (1) volunteer behaviors — (1a) time contribution, (1b) resource contribution, and (1c) assistance contribution; (2) conformity behaviors, where a sidetracked agent refocuses on the group's task after critique; and (3) destructive behaviors, where an agent kills another agent to collect the dropped items (3a) or breaks a village library to obtain books (3b).]

4.1 VOLUNTEER BEHAVIORS

Volunteer behaviors refer to actions intended to enhance the benefits of others in human society (Omoto & Snyder, 1995; Mowen & Sujan, 2005). We observe similar behaviors emerging in a multi-agent group as follows:

Time Contribution. The agents are willing to contribute their unallocated time to enhance collaboration efficiency. As shown in the examples in Figure 6 (1a), Alice and Bob need to collaboratively craft 2 paper, which necessitates three sugar canes as the raw material. Initially, Alice proposes that she will collect the sugar canes while Bob waits until the materials are ready. However, this plan is suboptimal, as it leaves Bob idle. Recognizing the inefficiency, Bob suggests that both gather sugar canes concurrently, leading to expedited task completion.
Resource Contribution. Our analysis reveals that the agents are willing to contribute the materials they possess. As illustrated in Figure 6 (1b), at the end of the task of crafting 2 paper, Alice has collected all the raw materials (sugar canes), whereas Bob possesses the crafting table essential for the paper's creation. In the decision-making stage, Alice suggests transferring her materials to Bob by dropping them on the ground. This enables Bob to utilize them for the intended crafting process.

Assistance Contribution. In the process of accomplishing tasks, we observe that agents, upon completing their individual assignments, actively extend support to their peers, thereby expediting the overall task resolution. As shown in Figure 6 (1c), Alice and Bob have successfully completed their assigned sub-tasks, while Charlie is still struggling to gather three leathers. During the collaborative decision-making phase, Alice and Bob propose to assist Charlie in gathering.
These behaviors highlight how agents willingly contribute their capabilities and efforts to assist other agents, culminating in an accelerated achievement of their mutual goal.

4.2 CONFORMITY BEHAVIOR

In human society, individuals tend to adjust their behavior to align with the norms or goals of a group (Cialdini & Goldstein, 2004; Cialdini & Trost, 1998), which we refer to as conformity behavior. We also observe similar behaviors within multi-agent groups. As shown in Figure 6 (2), all agents are asked to gather three pieces of leather.
However, Charlie gets sidetracked and begins crafting items that do not contribute directly to the task. In the subsequent decision-making stage, Alice and Bob critique Charlie's actions. Charlie acknowledges his mistake and refocuses on the mutual task. This conformity behavior enables agents to align with mutual goals as work progresses.

4.3 DESTRUCTIVE BEHAVIOR

Additionally, we have also observed that agents may exhibit behaviors aimed at achieving greater efficiency that could raise safety concerns. As depicted in Figure 6 (3a) and Figure 6 (3b), an agent occasionally bypasses the process of gathering raw materials and resorts to harming other agents or destroying an entire village library to acquire the necessary materials. With advancements in autonomous agents, deploying them in real-world scenarios has become increasingly plausible. However, the emergence of hazardous behaviors could pose risks, especially when humans are involved in collaborative processes. Thus, designing strategies to prevent agents from adopting such hazardous behaviors is a critical area for future research.

# 5 RELATED WORK

Autonomous Agents. The pursuit of creating autonomous agents that can operate intelligently in real-world environments without human involvement has been a persistent goal throughout the history of AI (Wooldridge & Jennings, 1995; Minsky, 1988; Bubeck et al., 2023). Recently, LLMs (Touvron et al., 2023a; OpenAI, 2023a) have opened up new opportunities to achieve this goal. These LLMs possess remarkable understanding, reasoning, and generation capabilities, allowing autonomous agents to utilize them as a backbone for handling increasingly complex scenarios (Richards & et al., 2023; Nakajima, 2023; Reworkd, 2023; Liu et al., 2023). However, even though these autonomous agents already demonstrate considerable power, they still lack certain essential human-analogous cognitive capabilities.
Hence, some research designs external mechanisms that endow agents with reflection (Yao et al., 2023b; Shinn et al., 2023), task decomposition (Wei et al., 2022b; Yao et al., 2023a), and tool utilization/creation (Schick et al., 2023b; Qin et al., 2023a;b; Qian et al., 2023b) capabilities, which bring autonomous agents closer to achieving artificial general intelligence. Multi-agent System.
In human society, a well-organized group composed of individual humans can often collaboratively handle a greater workload and accomplish complex tasks with higher efficiency and effectiveness. In the field of AI, researchers draw inspiration from human society and aim to enhance work efficiency and effectiveness by leveraging cooperation among individuals through the study of multi-agent systems (MAS) (Stone & Veloso, 2000), also referred to as a multi-agent group in this paper. The multi-agent group collaboratively makes decisions and executes corresponding actions in a distributed and parallel manner to achieve the common goal, which significantly improves work efficiency and effectiveness. Previous works have leveraged multi-agent joint training to achieve this goal. Recently, some studies have attempted to leverage the intelligence and capabilities of agents for autonomous collaboration. Li et al. (2023) have conceptualized assemblies of agents as a group, and focused on exploring the potential of their cooperation. Park et al. (2023) found social behaviors autonomously emerge within a group of agents, and Du et al. (2023); Wang et al. (2023b); Zhang et al. (2023a); Qian et al. (2023a); Chan et al. (2023) further leverage multi-agent cooperation to achieve better performance on reasoning tasks. Based on these findings, we introduce a framework, denoted as AGENTVERSE, capable of leveraging group cooperation to manage more intricate scenarios. This framework can dynamically adjust its composition according to the current state, aiming to facilitate optimal decision-making and execution.
9 Preprint # 6 CONCLUSION In this study, we present AGENTVERSE, a novel and general multi-agent framework designed to emulate human group problem-solving processes. Our comprehensive experimental results highlight the efficacy of AGENTVERSE, demonstrating its enhanced performance in comparison to individual agents across a myriad of tasks. These tasks encompass general understanding, reasoning, coding, and tool utilization. Notably, AGENTVERSE consistently delivers remarkable results in addressing intricate user queries when fortified with the appropriate tools. In our investigations within the Minecraft environment, we identify both positive and negative emergent social behaviors among agents. As advancements in artificial general intelligence progress, understanding multi-agent interactions should become increasingly crucial. AGENTVERSE serves as a valuable step toward this endeavor, and we are optimistic about its potential adaptability and refinement for a wider array of tasks and contexts in the future. # REFERENCES Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J. Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, and Mengyuan Yan.
2308.10848#28
2308.10848#30
2308.10848
[ "2308.01862" ]
2308.10848#30
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Do as I can, not as I say: Grounding language in robotic affordances. CoRR, abs/2204.01691, 2022. doi: 10.48550/arXiv.2204.01691. URL https://doi.org/10.48550/arXiv.2204.01691. Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4):105â 120, Dec. 2014. doi: 10.1609/aimag.v35i4.2513. URL https://ojs.aaai.org/aimagazine/index.php/ aimagazine/article/view/2513. Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hern´andez ´Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christo- pher A. Choquette-Choo, Aakanksha Chowdhery, Cl´ement Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark D´ıaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, and et al.
Palm 2 technical report. CoRR, abs/2305.10403, 2023. doi: 10.48550/arXiv.2305.10403. URL https://doi.org/10.48550/arXiv.2305.10403.

J.D. Bransford and B.S. Stein. The Ideal Problem Solver: A Guide for Improving Thinking, Learning, and Creativity. W.H. Freeman, 1993. ISBN 978-0-7167-2205-2. URL https://books.google.com.tw/books?id=nnRxQgAACAAJ.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang.
Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, abs/2303.12712, 2023. doi: 10.48550/arXiv.2303.12712. URL https://doi.org/10.48550/arXiv.2303.12712.

Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. Chateval: Towards better llm-based evaluators through multi-agent debate, 2023. URL https://doi.org/10.48550/arXiv.2308.07201.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba.
Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.

Robert B Cialdini and Noah J Goldstein. Social influence: Compliance and conformity. Annu. Rev. Psychol., 55:591–621, 2004. URL https://www.annualreviews.org/doi/abs/10.1146/annurev.psych.55.090902.142015.

Robert B Cialdini and Melanie R Trost. Social influence: Social norms, conformity and compliance. 1998. URL https://psycnet.apa.org/RECORD/1998-07091-021.
Jeff Clune. AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence. CoRR, abs/1905.10985, 2019. URL http://arxiv.org/abs/1905.10985.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman.
Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.

Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 8469–8488. PMLR, 2023. URL https://proceedings.mlr.press/v202/driess23a.html.

Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. CoRR, abs/2305.14325, 2023. doi: 10.48550/arXiv.2305.14325. URL https://doi.org/10.48550/arXiv.2305.14325.

Ernst Fehr and Simon Gächter. Cooperation and punishment in public goods experiments. American Economic Review, 90(4):980–994, 2000. URL https://pubs.aeaweb.org/doi/pdf/10.1257/aer.90.4.980.
Ben Goertzel and Cassio Pennachin. Artificial general intelligence, volume 2. Springer, 2007. URL https://link.springer.com/book/10.1007/978-3-540-68677-4.

Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: communicative agents for "mind" exploration of large scale language model society. CoRR, abs/2303.17760, 2023. doi: 10.48550/arXiv.2303.17760. URL https://doi.org/10.48550/arXiv.2303.17760.

Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang.
Agentbench: Evaluating llms as agents, 2023. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. CoRR, abs/2303.17651, 2023. doi: 10.48550/arXiv.2303.17651. URL https://doi.org/10.48550/arXiv.2303.17651.
Shikib Mehri and Maxine Eskénazi. Unsupervised evaluation of interactive dialog with dialogpt. In Olivier Pietquin, Smaranda Muresan, Vivian Chen, Casey Kennington, David Vandyke, Nina Dethlefs, Koji Inoue, Erik Ekstedt, and Stefan Ultes (eds.), Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGdial 2020, 1st virtual meeting, July 1-3, 2020, pp. 225–235. Association for Computational Linguistics, 2020. URL https://aclanthology.org/2020.sigdial-1.28/.

Marvin Minsky. The Society of Mind. Simon & Schuster, 1988. ISBN 0671657135. URL https://jmvidal.cse.sc.edu/lib/minsky88a.html.

John C Mowen and Harish Sujan. Volunteer behavior: A hierarchical model approach for investigating its trait and functional motive antecedents. Journal of Consumer Psychology, 15(2):170–182, 2005. URL https://myscp.onlinelibrary.wiley.com/doi/abs/10.1207/s15327663jcp1502_9.

Yohei Nakajima.
Babyagi. 2023. URL https://github.com/yoheinakajima/babyagi. [Software].

Allen M Omoto and Mark Snyder. Sustained helping without obligation: Motivation, longevity of service, and perceived attitude change among AIDS volunteers. Journal of Personality and Social Psychology, 68(4):671, 1995. URL https://psycnet.apa.org/record/1995-26640-001.

OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023a. doi: 10.48550/arXiv.2303.08774. URL https://doi.org/10.48550/arXiv.2303.08774.

OpenAI. Chatgpt can now see, hear, and speak, 2023b. URL https://openai.com/blog/chatgpt-can-now-see-hear-and-speak.
Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. CoRR, abs/2304.03442, 2023. doi: 10.48550/arXiv.2304.03442. URL https://doi.org/10.48550/arXiv.2304.03442.
Katherine Phillips and Charles O'Reilly. Demography and diversity in organizations: A review of 40 years of research. Research in Organizational Behavior, 20:77–140, January 1998. URL https://www.researchgate.net/publication/234022034_Demography_and_Diversity_in_Organizations_A_Review_of_40_Years_of_Research.

Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. CoRR, abs/2307.07924, 2023a. doi: 10.48550/arXiv.2307.07924. URL https://doi.org/10.48550/arXiv.2307.07924.
Cheng Qian, Chi Han, Yi R. Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. CREATOR: disentangling abstract and concrete reasonings of large language models through tool creation. CoRR, abs/2305.14318, 2023b. doi: 10.48550/arXiv.2305.14318. URL https://doi.org/10.48550/arXiv.2305.14318.

Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun.
Tool learning with foundation models. CoRR, abs/2304.08354, 2023a. doi: 10.48550/arXiv.2304.08354. URL https://doi.org/10.48550/arXiv.2304.08354.

Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023b. URL https://arxiv.org/abs/2307.16789.

Reworkd. Agentgpt, 2023. URL https://github.com/reworkd/AgentGPT. [Software].
Toran Bruce Richards and et al. Auto-gpt: An autonomous gpt-4 experiment, 2023. URL https://github.com/Significant-Gravitas/Auto-GPT. [Software].

Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, and Zeynep Akata. In-context impersonation reveals large language models' strengths and biases. CoRR, abs/2305.14930, 2023. doi: 10.48550/arXiv.2305.14930. URL https://doi.org/10.48550/arXiv.2305.14930.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023a. doi: 10.48550/arXiv.2302.04761. URL https://doi.org/10.48550/arXiv.2302.04761.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023b. doi: 10.48550/arXiv.2302.04761. URL https://doi.org/10.48550/arXiv.2302.04761.

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei.
Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=fR3wGCk-IXp. Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao.
Reflexion: Language agents with verbal reinforcement learning, 2023. URL https://doi.org/10.48550/arXiv.2303.11366.

Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas, and et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. CoRR, abs/2206.04615, 2022. doi: 10.48550/arXiv.2206.04615. URL https://doi.org/10.48550/arXiv.2206.04615.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. Learning to summarize with human feedback. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1f89885d556929e98d3ef9b86448f951-Abstract.html.
Peter Stone and Manuela Veloso. Multiagent systems: A survey from a machine learning perspective. Auton. Robots, 8(3):345–383, June 2000. ISSN 0929-5593. doi: 10.1023/A:1008942012299. URL https://doi.org/10.1023/A:1008942012299.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample.
Llama: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023a. doi: 10.48550/arXiv.2302.13971. URL https://doi.org/10.48550/arXiv.2302.13971.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar
Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom.
Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023b. doi: 10.48550/arXiv.2307.09288. URL https://doi.org/10.48550/arXiv.2307.09288.

Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar.
Voyager: An open-ended embodied agent with large language models. CoRR, abs/2305.16291, 2023a. doi: 10.48550/arXiv.2305.16291. URL https://doi.org/10.48550/arXiv.2305.16291.

Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. CoRR, abs/2307.05300, 2023b. doi: 10.48550/arXiv.2307.05300. URL https://doi.org/10.48550/arXiv.2307.05300.

Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le.
Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022a. URL https://openreview.net/forum?id=gEZrGCozdqR.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou.
Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS, 2022b. URL http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html.

Jimmy Wei, Kurt Shuster, Arthur Szlam, Jason Weston, Jack Urbanek, and Mojtaba Komeili. Multi-party chat: Conversational agents in group settings with humans and models. CoRR, abs/2304.13835, 2023. doi: 10.48550/arXiv.2304.13835. URL https://doi.org/10.48550/arXiv.2304.13835.
Michael J. Wooldridge and Nicholas R. Jennings. Intelligent agents: Theory and practice. Knowl. Eng. Rev., 10(2):115–152, 1995. doi: 10.1017/S0269888900008122. URL https://doi.org/10.1017/S0269888900008122.

Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone. Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004):686–688, 2010. doi: 10.1126/science.1193147. URL https://www.science.org/doi/abs/10.1126/science.1193147.
Anita Williams Woolley, Ishani Aggarwal, and Thomas W. Malone. Collective intelligence and group performance. Current Directions in Psychological Science, 24(6):420–424, 2015. doi: 10.1177/0963721415599543. URL https://doi.org/10.1177/0963721415599543.

Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework, 2023. URL https://doi.org/10.48550/arXiv.2308.08155.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601, 2023a. doi: 10.48550/arXiv.2305.10601. URL https://doi.org/10.48550/arXiv.2305.10601.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023b. URL https://openreview.net/pdf?id=WE_vluYUL-X.

Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, and Chuang Gan.
Building cooperative embodied agents modularly with large language models. CoRR, abs/2307.02485, 2023a. doi: 10.48550/arXiv.2307.02485. URL https://doi.org/10.48550/arXiv.2307.02485.

Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. Wider and deeper llm networks are fairer llm evaluators. arXiv preprint arXiv:2308.01862, 2023b. URL https://doi.org/10.48550/arXiv.2308.01862.

Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, and Graham Neubig.
Webarena: A realistic web environment for building autonomous agents. CoRR, abs/2307.13854, 2023. doi: 10.48550/arXiv.2307.13854. URL https://doi.org/10.48550/arXiv.2307.13854.

# A CONFIGURATIONS OF THE EXPERIMENTS

Datasets and Evaluation Metrics Our evaluation assesses different aspects of agents, including general understanding and reasoning capabilities, coding capabilities, and tool utilization capabilities.
• General Understanding Capabilities: We utilize two datasets. The first one is a dialogue response dataset, FED (Mehri & Eskénazi, 2020), where, given a multi-round chat history, the agent or agent group is required to generate the next chat. Following previous work (Madaan et al., 2023), we utilize GPT-4 as the evaluator to score the agent-generated response against the human-written ones, and report the agent's win rate. The second dataset is Commongen-Challenge (Madaan et al., 2023), which is a constrained generation dataset where, given 20 concepts, the agent is required to generate a coherent and grammatically correct paragraph containing as many concepts as possible. We report the average percentage of the covered concepts.
• General Reasoning Capabilities: We utilize the English subset of MGSM (Shi et al., 2023), which is a subset of GSM-8k (Cobbe et al., 2021), to evaluate the agents' mathematical reasoning capabilities. It is a dataset containing grade school math problems. We report the percentage of correct answers. We also use the logic grid puzzles task from BigBench (Srivastava et al., 2022), which contains logic problems that require multi-step logical reasoning, to assess the agents' logical reasoning capabilities. We report the accuracy.

• Coding Capabilities: We utilize HumanEval (Chen et al., 2021), which is a code completion dataset, and report the Pass@1 metric.1

• Tool Utilization Capabilities: Since automatic evaluation of tool-utilization performance is difficult, and there is currently no relevant benchmark, we craft 10 complex instructions and manually assess the performance. The instructions are listed in Appendix B.

Expert Recruitment For tasks including dialogue response, code completion, and constrained generation, four agents are recruited into the system. For the task of mathematical reasoning, we limited the number to two agents. This decision was based on our observation that an increase in the number of reviewers for mathematical reasoning tasks correlates with a higher likelihood of erroneous critiques, leading to incorrect solutions by the solver. We discuss this topic in Section 3.1. For tool utilization, we recruit two or three agents to engage in collaborative decision-making and action execution, depending on the specific task. The detailed setups are listed in Appendix B. Currently, the number of experts is pre-defined by us for each task. We are seeking a way to automate this decision as well.
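The concept-coverage metric used for Commongen-Challenge above reduces to a membership check over the generated paragraph. A minimal sketch, assuming whole-word, case-insensitive matching (the actual pipeline may use stemming or lemmatization; the function name is ours):

```python
import re

def concept_coverage(response, concepts):
    """Return the fraction of concepts appearing as whole words in the
    response, plus the list of missing concepts (which can be fed back
    to the evaluation module as the environment state)."""
    words = set(re.findall(r"[a-z]+", response.lower()))
    missing = [c for c in concepts if c.lower() not in words]
    return 1 - len(missing) / len(concepts), missing
```

Averaging the first return value over the dataset gives the reported average percentage of covered concepts.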
Collaborative Decision-Making For tasks in coding and general understanding and reasoning, we use the vertical structure, because all these tasks require only one response as the answer, and the solver in the vertical structure can be responsible for answering. For tool utilization, we use the horizontal structure, because the agents should clarify their own sub-tasks in the discussion.

Action Execution For the HumanEval code completion dataset benchmarked with GPT-4, we incorporate an additional agent during the action execution stage to craft unit testing code (in a zero-shot manner). Subsequently, the generated code is subjected to unit testing, and the testing results are conveyed as the environment state to the evaluation module.
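The unit-testing step can be approximated by executing the candidate code together with the generated tests in a fresh interpreter and reporting the outcome as the environment state. A hedged sketch (the function name is ours; the real harness and any sandboxing are not shown):

```python
import os
import subprocess
import sys
import tempfile

def run_with_tests(candidate_code, test_code, timeout=10):
    """Run candidate code followed by its unit tests in a subprocess.
    The returned dict plays the role of the 'environment state' passed
    to the evaluation module. No sandboxing is applied here."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout)
        return {"passed": proc.returncode == 0, "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"passed": False, "stderr": "timeout"}
    finally:
        os.remove(path)
```

A non-zero exit code (a failed assertion or an exception) marks the candidate as failing.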
Regarding the constrained generation dataset, Commongen-Challenge, the agent-generated response undergoes a concept coverage check. Any missing concepts are then passed to the evaluation module as the environment state. In the context of tool utilization, each agent iteratively calls the tool in the ReAct manner, up to a maximum of 10 iterations. Upon reaching the final iteration, the agent is forced to draw a conclusion regarding the result, labeling the task's status as either "pending" or "finished". These conclusions are then forwarded to the evaluator for assessment.
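The per-agent tool loop just described (thought, tool call, observation, repeated up to 10 iterations, then a forced conclusion) can be sketched as follows; `llm` and the tool registry are illustrative stand-ins, not the actual AGENTVERSE interfaces:

```python
MAX_ITERS = 10  # matches the iteration cap described above

def react_loop(llm, tools, task):
    """Minimal sketch of one agent's ReAct-style execution loop.
    `llm` is any callable mapping the transcript so far to either a
    tool call ({"thought", "tool", "args"}) or a final conclusion
    ({"thought", "conclusion"})."""
    transcript = [("task", task)]
    for _ in range(MAX_ITERS):
        step = llm(transcript)
        transcript.append(("thought", step.get("thought", "")))
        if "conclusion" in step:  # the agent decides to submit
            return {"status": "finished", "conclusion": step["conclusion"]}
        # invoke the named tool and feed the observation back
        result = tools[step["tool"]](**step["args"])
        transcript.append(("observation", result))
    # iteration cap reached: the task is reported as pending
    return {"status": "pending", "conclusion": None}
```

The "pending"/"finished" statuses mirror the labels forwarded to the evaluator.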
1 The method for calculating Pass@1 differs from the approach in Chen et al. (2021). Instead of generating multiple responses and calculating an unbiased estimator, we directly employ the first response to compute the Pass@1.

Evaluation To facilitate a feedback loop, an agent was tasked with the role of evaluator. This agent, provided with the initial problem p and the decisions A made during the collaborative decision-making stage, is charged with determining the correctness of those decisions. In cases where the decision is identified as erroneous, feedback is channeled back to the expert recruitment stage. If the decision meets the accuracy criteria, it is determined as the final answer to p. While our current configuration employs an agent for evaluation, we acknowledge the potential of human evaluators and intend to incorporate such experiments in future endeavors.

# B EXPERIMENT DETAILS FOR MULTI-AGENT TOOL USING

B.1 SETUPS

This section provides specific implementation details for enabling multiple agents in AGENTVERSE to collaboratively utilize tools to accomplish a user's query. Unless specified herein, the implementation adheres to the standard procedures defined in the other experiments.

Collaborative Decision-Making Agents recruited during the expert recruitment stage engage in collaborative discussions regarding the assigned task using a horizontal communication structure. In this configuration, agents communicate in a predetermined sequential order. At the conclusion of their discussions, an additional agent is designated as the "summarizer" and is responsible for consolidating the discussed sub-tasks for each participant.

Action Execution During action execution, we follow the ReAct prompting technique: the agent writes down its thought, then calls the tool, and we return the tool response to the agent. The agent again writes down its thought and calls the tools.
The process iterates until the agent decides to submit its conclusion, or until a maximum number of iterations, which we set to 10, is reached. In practice, we use OpenAI's function calling to handle the tools.

# B.2 RESULTS

For action execution, the ReAct prompting technique is employed. Agents articulate their thought process, invoke the respective tool, and subsequently receive feedback from the tool. This cycle of reflection and tool interaction continues iteratively. The process concludes either when the agent finalizes its execution with its conclusion or after a pre-set maximum number of iterations (10 in our experiments). We utilize OpenAI's function calling to handle the tool calls.
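For context on Instruction 1 below: the transcripts omit the generated programs, but a "24-Point Game" solver of the kind the agents were asked to produce can be sketched as follows (an editorial reconstruction, not the agents' actual code), using exact rational arithmetic to avoid floating-point issues:

```python
from fractions import Fraction

# Basic arithmetic operations; division guards against zero.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else None,
}

def solve_24(nums, target=24):
    """Return an expression over `nums` that evaluates to `target`,
    or None if no solution exists."""
    def search(items):
        # items: list of (value, expression) pairs still to combine
        if len(items) == 1:
            return items[0][1] if items[0][0] == target else None
        for i in range(len(items)):
            for j in range(len(items)):
                if i == j:
                    continue
                rest = [items[k] for k in range(len(items)) if k not in (i, j)]
                (a, ea), (b, eb) = items[i], items[j]
                for sym, fn in OPS.items():
                    v = fn(a, b)
                    if v is None:
                        continue
                    expr = search(rest + [(v, f"({ea} {sym} {eb})")])
                    if expr:
                        return expr
        return None

    return search([(Fraction(n), str(n)) for n in nums])
```

For example, `solve_24([3, 3, 8, 8])` finds the classic fractional solution 8 / (3 - 8 / 3), while unsolvable inputs such as `[1, 1, 1, 1]` return None.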
Instruction 1 Recently, it has become popular in the AI field to verify the mathematical reasoning abilities of large language models by observing if they can solve the "24-Point Game." What is this game? Does it have a code-based solution? If it does, provide a Python code along with test cases and test its functionality. What are some other similar games that can be used to test the models' mathematical reasoning abilities?

Single ReAct Agent: (The generated code is omitted) The "24-Point Game" is a mathematical game where four numbers must be used with the basic arithmetic operations (addition, subtraction, multiplication, and division) to make the number 24. The Python code I developed generates all possible combinations of the four numbers and operations to find a solution that equals 24. The code was tested with different sets of numbers, and it correctly identified whether a solution existed and what it was. This task tested my ability to understand the rules of a game and to develop and test a program that solves it. (game rule ✓, code ✓, similar games ✗)

AGENTVERSE (2 agents): (The generated code is omitted) The "24-Point Game" is a mathematical game where four numbers must be used with the basic arithmetic operations (addition, subtraction, multiplication, and division) to make the number 24. A Python code was developed and tested that generates all possible combinations of the four numbers and operations to find a solution that equals 24. The code was tested with different sets of numbers, and it correctly identified whether a solution existed and what it was.
Similar games to the "24-Point Game" that can be used to test mathematical reasoning abilities include "Make a Number" and "Math Safe". "Make a Number" is a game where players are given 4 numbers and 4 operations and they have to use these to create a target number. "Math Safe" is a game where players use four given single-digit numbers and any of the four operations to reach a target number and open a safe. Both these games require the use of operations and logical thinking to reach a target number, similar to the "24-Point Game". (game rule ✓, code ✓, similar games ✓)

Instruction 2 (Additional API:
Weather) I want to hold a party at somewhere around Tsinghua University tomorrow. I need you to look for some best places for holding a party nearby, and tell me whether the weather is good for holding a party tomorrow. Also, I want to know what activities can be considered in my party. Help me search the web.

Single ReAct Agent: There are several restaurants near Tsinghua University that could be good places for a party, including Lan Yang ShuFang, YiLan ShaoKao, Guan Shi JiChi, Dong Lai Shun, and Xibu Mahua Beef Noodle. The weather in Beijing tomorrow is expected to be overcast with a temperature of 26 degrees Celsius, which might be suitable for a party. Some activities that can be considered for the party include the Saran Wrap Game, Post It Note Game, Most Likely To, Never Have I Ever, Would You Rather, and many more. (✓
)

AGENTVERSE (3 agents): The weather in the Tsinghua University area tomorrow is expected to be overcast with a temperature of 26 degrees Celsius, which is suitable for holding a party. The best places for a party near Tsinghua University include the Summer Palace, Wudaokou, Zhongguancun, and Peking University, as well as several restaurants nearby. For party activities, you could consider games such as the Saran Wrap Game, the Post It Note Game, How's Yours?, Most Likely To, Never Have I Ever, Would You Rather?, Heads Up!, I'm Hosting a Party..., Two Truths and a Lie, Sticker Stalker, Great Minds Think Alike, murder mystery games, Who Am I?, the Newlywed Game, Truth or Dare, Spoons, Pictionary, Kiss Marry Kill, the Don't Say Yes game, Cards Against Humanity, What Do You Meme, Drawing Without Dignity, and That's What She Said. These games are suitable for various party occasions and there are also suggestions for icebreakers for guests who don't know each other well. (✓,
provide more details)

Instruction 3 (Additional API: Billboard) I'm currently analyzing what is popular on the website. Can you help me find the recent trending stuff. It could be anything, like trending news, products, books, movies, music, etc. Give a summarization for me.

Single ReAct Agent: The recent trending products on the web include Apple AirPods Pro, Diamond Dazzle Cleaner Pen, Ugg Slippers, Squishmallows, Winix Air Purifier, ThisWorx Car Vacuum Cleaner, All-Clad Cookware, Eufy Robot Vacuum, Renpho Eye Massager, Crocs, Throw blankets, Chuckit! Sport Dog Ball Launcher, Apple AirTags, Revlon Hair Dryer Brush, Lego Sets, KitchenAid Stand Mixer, Winc Wine Subscription, Saatva Mattress, Home Chef Subscription, and MasterClass Subscription. Unfortunately, we were unable to find specific information on recent trending books. (trending product ✓
)

AGENTVERSE (3 agents): Here is a summary of the trending topics on the website:

Music:
1. "Heat Waves" by Glass Animals
2. "We Don't Talk About Bruno" by Carolina Gaitan, Mauro Castillo, Adassa, Rhenzy Feliz, Diane Guerrero, Stephanie Beatriz & Encanto Cast
3. "Super Gremlin" by Kodak Black
4. "Slime You Out" by Drake Featuring SZA
5. "Paint The Town Red" by Doja Cat
6. "Snooze" by SZA
Movies:
1. "No One Will Save You"
2. "A Haunting in Venice"
3. "Expend4bles"

Books:
1. "The Midnight Library"
2. "The Invisible Life of Addie LaRue"
3. "The Seven Husbands of Evelyn Hugo"

Products:
1. Laser hair removal
2. Head scarfs
3. Detangling hair brushes
4. Hair wigs
5. Straight-leg jeans
6. Shapewear
7. Athleisure
8. Abdominal crunch straps
9. Swimsuits
10. Power tool accessories
11. Dog toys
12. Shot glasses
13. Costume shoes
14. Slipcovers
15. Wall plates and covers
16. Baby shirts
17. Curtain and drape rings
18. Cabinet knobs and handles
19. Watch accessories
20. Tumblers
21. Model train sets
22. Signage
23. Wedding dresses
24. Table runners
Please note that the popularity of these items can vary based on the region and demographic of the consumers. (music ✓, movie ✓, books ✓, products ✓)

Instruction 4 I'm planning a two-week vacation to Japan next month. Help me plan my itinerary. I want to visit Tokyo, Kyoto, and Osaka. Look for the top tourist attractions in each city, and also suggest the best mode of travel between these cities. Additionally, find out the weather forecast for the month I'll be visiting.
Single ReAct Agent: The top tourist attractions in Tokyo are the Tokyo Skytree, Senso-ji Temple, Meiji Shrine, Tokyo Disneyland, Ueno Park, and Odaiba. In Kyoto, the top attractions are Nijo Castle, Fushimi-Inari Taisha Shrine, Kinkaku-ji: The Golden Pavilion, Kiyomizu-dera Temple, Sanjūsangen-dō Temple, and Kyoto Imperial Palace. In Osaka, tourists can visit Osaka Castle, Shitennō-ji Temple, Osaka Aquarium Kaiyukan, Tempozan Ferris Wheel and Harbor Village, and Universal Studios Japan. These attractions offer a mix of historical, cultural, and modern experiences for visitors. (tourist attractions ✓, best mode of travel between cities ✗, weather forecast ✗)

AGENTVERSE (3 agents):
Your two-week vacation to Japan is shaping up to be quite exciting. In Tokyo, you can visit the Tokyo Skytree, Senso-ji Temple, Meiji Shrine, Tokyo Disneyland and DisneySea, and the Odaiba area. In Kyoto, the top attractions are Nijo Castle, Fushimi-Inari Taisha Shrine, Kinkaku-ji (Golden Pavilion), Kiyomizu-dera Temple, and Sanjūsangen-dō Temple. While in Osaka, you should visit Osaka Castle, Shitennō-ji Temple, Osaka Aquarium Kaiyukan, and Universal Studios Japan. As for transportation, the best way to travel between these cities is by train, specifically the Shinkansen bullet train. You can also consider flights with ANA or JAL, or if you're on a budget, the night bus or LCC flights by Jetstar and Peach. Regarding the weather, it's difficult to provide an exact forecast for the next month, but we've gathered the average weather data for these cities based on previous years. Please remember to check the weather forecast closer to your departure date for a more accurate prediction. Enjoy your trip! (tourist attractions ✓, best mode of travel between cities ✓, weather forecast ✓)
Instruction 5 I am planning a date with my girlfriend this week, please search for a good movie theater and a restaurant near Tsinghua University in Beijing and recommend a good movie to watch. Please search the web.

Single ReAct Agent: Your date near Tsinghua University in Beijing is all set. For dining, you can choose from Lan Yang ShuFang, YiLan ShaoKao, Guan Shi JiChi (Zhongguancun), or Dong Lai Shun (Zhongguancun HaiTu).
As for the movie, there are several highly-rated ones showing in 2023. Some of these include M3GAN, Plane, Missing, Infinity Pool, John Wick: Chapter 4, Creed III, Scream VI, Air, Suzume, Rise, Are You There God? It's Me, Margaret, Showing Up, and The Covenant. Enjoy your date! (restaurant ✓, movie ✓, theater ✗)

AGENTVERSE (3 agents): Your date is all set! For dining, we recommend either Jiasuo Restaurant or Zijingyuan, both located conveniently on the Tsinghua University campus. For your movie, we suggest "
No Bears", which is currently popular and highly rated. You can watch it at the Maple Garden Drive-In Cinema, one of the top-rated movie theaters near Tsinghua University. Enjoy your date! (restaurant ✓, movie ✓, theater ✓)

Instruction 6 I want to kick off a book club with my friends. Can you tell me the top 5 bestselling books this month, gather the content summary for each, and find online platforms where we can buy or borrow them?

Single ReAct Agent: I was unable to find the top 5 bestselling books for October 2023. The websites I visited did not provide the required information. Therefore, I am unable to provide the content summary for each of the top 5 bestselling books and find online platforms where they can be bought or borrowed. (✗
)

AGENTVERSE (2 agents): The top 5 bestsellers this month can be bought or borrowed from several online platforms. These include OverDrive, which allows you to borrow books for free through your local library or school. Other platforms include the Amazon Kindle Store, Barnes & Noble's Nook Book Store, Fictionwise, eBooks.com, eReader.com, Google Play Book Store, Powell's Books, as well as eBook lending and borrowing services like eBookFling, Lendle, and BookLending.com. (online platforms ✓, top 5 bestselling books ✗, content summary ✗)

Instruction 7 I am planning to buy a new car. Could you help me compare the features and prices of the latest models of Tesla, Ford, and Toyota? Include details about range, charging time, safety features, and after-sales service. Also, provide a brief analysis of the pros and cons of each car.
Single ReAct Agent: The 2023 Tesla Model 3 starts at $41,880 and goes up to $54,880. It comes with standard features such as heated front seats, navigation, and Tesla's Autopilot semi-autonomous driving system. In terms of safety features, the Model 3 includes standard automated emergency braking, lane-departure warning, and adaptive cruise control. Tesla's after-sales service is characterized by a vertically integrated format that eliminates third-party intermediaries and relies on over-the-air (OTA) update capabilities and a mobile service fleet. The company's OTA update capabilities will open up new subscription-based aftersales revenue streams, while its closed-loop system allows access to vehicle data for driving-related feedback loops and predictive maintenance. (Tesla ✓, other brands ✗)

AGENTVERSE (3 agents):
The starting price for the 2023 Ford Mustang Mach-E is around $43,895 and it takes approximately 38 hours to fully charge using a 120V outlet, 14 hours using a 240V outlet, and around 45 minutes to 80% capacity with DC fast charging. The starting price for the 2023 Toyota bZ4X is approximately $42,000. The charging time varies depending on the method: 52 hours for Level 1 charging, 26 hours or 18.75 hours for Level 2 charging, and 30-45 minutes to 80% capacity for Level 3 DC fast charging. Unfortunately, the price for the 2023 Tesla Model 3 was not found, but the charging time can be as quick as 15 minutes with a 250 kW Tesla Supercharger. At home, using a standard 120-volt outlet takes around three to four days for a full charge, while a 240-volt outlet with a NEMA 14-50 adapter reduces charging time to 10 or 11 hours. A Tesla Wall Connector can fully charge the car in about five hours.
In terms of safety, all three models come equipped with a variety of features such as anti-lock brakes, stability control, front-impact airbags, side impact airbags, overhead airbags, knee airbags, pretensioners, and a security system. The Tesla Model 3 also includes autopilot and collision warning features.

Regarding after-sales service, Tesla offers a comprehensive warranty and roadside assistance with service centers across the globe and mobile service. Toyota's after-sales service is known for being top-notch and continually improving with expert technicians and quality genuine parts. However, Ford's after-sales service has received negative reviews, with customers reporting poor customer support and inefficient handling of issues. Please consider these factors when making your decision. (Prices ✓, charging time, safety features, after-sales service)

Instruction 8 I'
ve recently taken an interest in DIY home projects. Search for beginner-friendly DIY projects that can be completed over the weekend. Also, provide a list of materials required and a step-by-step guide for each project.

Single ReAct Agent: I found a variety of beginner-friendly DIY home projects from three different websites. These projects include painting a room, replacing cabinet pulls, repairing torn window screens, fixing interior doors, balancing furnace dampers, repairing leaky faucets, refreshing caulk in the bathroom, building garage storage shelves, constructing a fire pit, hanging a ceiling fan, installing a wall-mounted television, and cleaning out sink traps. These projects are suitable for beginners and can help improve DIY skills while boosting confidence. (DIY projects ✓, list of materials ✗)

AGENTVERSE (2 agents):
The team has successfully found a variety of beginner-friendly DIY home projects that can be completed over the weekend. These projects range from painting a room, replacing cabinet pulls, installing a wall-mounted television, building a fire pit, to more complex tasks like kitchen remodels and furniture updates. The materials required for these projects are basic and easy to source, including tools like a roller and pan for painting, a cordless drill for replacing cabinet pulls, a stud finder for installing a wall-mounted television, and concrete work tools for building a fire pit. Detailed, step-by-step guides for each project have been provided to ensure a smooth DIY experience. Happy DIY-ing! (DIY projects ✓, list of materials ✓)
Instruction 9 Winter is coming, and I want to update my wardrobe. Search for the latest winter fashion trends for men and women. Also, suggest some online stores where I can buy trendy winter clothing.

Single ReAct Agent: The latest winter fashion trends for both men and women this year include suiting, bold shoulders, western touches, maxis, electric hues, bombers, mismatched maximalism, corseting, cut-outs, Y2K fashion, and chunky boots. You can purchase these trendy winter clothes at online stores like PrettyLittleThing and Windsor. (fashion trends ✓, online stores ✓)

AGENTVERSE (3 agents):