Dataset columns: doi (string), chunk-id (int64), chunk (string), id (string), title (string), summary (string), source (string), authors (string), categories (string), comment (string), journal_ref (string), primary_category (string), published (string, YYYYMMDD), updated (string, YYYYMMDD), references (list).
2308.10837
37
[Unrecoverable table fragment: a block of bare metric values (roughly 0.0059 to 0.0700, consistent with HR/NDCG scores) whose method and column labels were lost in extraction; the original table cannot be reconstructed from this chunk alone.]
2308.10837#37
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
37
Figure 6: Examples of the properties that emerge in the agent interactions in Minecraft. 4.1 VOLUNTEER BEHAVIORS Volunteer behaviors refer to actions intended to enhance the benefits of others in human society (Omoto & Snyder, 1995; Mowen & Sujan, 2005). We observe similar behaviors emerging in a multi-agent group as follows: Time Contribution. The agents are willing to contribute their unallocated time to enhance collaboration efficiency. As shown in the examples in Figure 6 (1a), Alice and Bob need to collaboratively craft 2 paper, which necessitates three sugar canes as the raw material. Initially, Alice proposes that she will collect the sugar canes while Bob waits until the materials are ready. However, this plan is suboptimal, as it leaves Bob idle. Recognizing this inefficiency, Bob suggests that both gather sugar canes concurrently, leading to expedited task completion. Resource Contribution. Our analysis reveals that the agents are willing to contribute the materials they possess. As illustrated in Figure 6 (1b), at the end of the task of crafting 2 paper, Alice has collected all the raw materials (sugar canes), whereas Bob possesses the crafting table essential for the paper's creation. In the decision-making stage, Alice suggests transferring her materials to Bob by dropping them on the ground. This enables Bob to utilize them for the intended crafting process.
2308.10848#37
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
38
Assistance Contribution. In the process of accomplishing tasks, we observe that agents, upon completing their individual assignments, actively extend support to their peers, thereby expediting the overall task resolution. As shown in Figure 6 (1c), Alice and Bob have successfully completed their assigned sub-tasks, while Charlie is still struggling to gather three pieces of leather. During the collaborative decision-making phase, Alice and Bob propose to assist Charlie in gathering. These behaviors highlight how agents willingly contribute their capabilities and efforts to assist other agents, culminating in an accelerated achievement of their mutual goal. 4.2 CONFORMITY BEHAVIOR In human society, individuals tend to adjust their behavior to align with the norms or goals of a group (Cialdini & Goldstein, 2004; Cialdini & Trost, 1998), which we refer to as conformity behavior. We also observe similar behaviors within multi-agent groups. As shown in Figure 6 (2), all agents are asked to gather three pieces of leather. However, Charlie gets sidetracked and begins crafting items that do not contribute directly to the task. In the subsequent decision-making stage, Alice and Bob critique Charlie's actions. Charlie acknowledges his mistake and refocuses on the mutual task. This conformity behavior enables agents to align with mutual goals as work progresses. 4.3 DESTRUCTIVE BEHAVIOR
2308.10848#38
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
39
4.3 DESTRUCTIVE BEHAVIOR We have also observed that agents may exhibit behaviors aimed at achieving greater efficiency that raise safety concerns. As depicted in Figure 6 (3a) and Figure 6 (3b), an agent occasionally bypasses the procedure of gathering raw materials and resorts to harming other agents or destroying an entire village library to acquire the necessary materials. With advancements in autonomous agents, deploying them in real-world scenarios has become increasingly plausible. However, the emergence of hazardous behaviors could pose risks, especially when humans are involved in collaborative processes. Thus, designing strategies to prevent agents from adopting such hazardous behaviors is a critical area for future research. # 5 RELATED WORK
2308.10848#39
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
40
Performance on explanation generation (%), reconstructed from a column-major extraction; "BLUE4" in the extracted header is a typo for BLEU4. The final Toys ROUGEL column is cut off at this chunk boundary and is completed here from the overlapping next chunk.

w/o hints        Sports                               Beauty                               Toys
Methods     BLEU4   ROUGE1   ROUGE2  ROUGEL     BLEU4   ROUGE1   ROUGE2  ROUGEL     BLEU4   ROUGE1   ROUGE2  ROUGEL
Attn2Seq    0.5305  12.2800  1.2107   9.1312    0.7889  12.6590  1.6820   9.7481    1.6238  13.2245  2.9942  10.7398
NRT         0.4793  11.0723  1.1304   7.6674    0.8295  12.7815  1.8543   9.9477    1.9084  13.5231  3.6708  11.1867
PETER       0.7112  12.8944  1.3283   9.8635    1.1541  14.8497  2.1413  11.4143    1.9861  14.2716  3.6718  11.7010
P5          1.0407  14.1589  2.1220  10.6096    0.9742  16.4530  1.8858  11.8765    2.3185  15.3474  3.7209  12.1312
RecSysLLM   1.2673  16.7132  2.8980  13.0104    1.5230  19.0032  3.0422  14.7471    2.9923  16.7823  4.8372  15.0231
2308.10837#40
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
40
# 5 RELATED WORK Autonomous Agents. The pursuit of creating autonomous agents that can operate intelligently in real-world environments without human involvement has been a persistent goal throughout the history of AI (Wooldridge & Jennings, 1995; Minsky, 1988; Bubeck et al., 2023). Recently, LLMs (Touvron et al., 2023a; OpenAI, 2023a) have opened up new opportunities to achieve this goal. These LLMs possess remarkable understanding, reasoning, and generation capabilities, allowing autonomous agents to utilize them as a backbone for handling increasingly complex scenarios (Richards & et al., 2023; Nakajima, 2023; Reworkd, 2023; Liu et al., 2023). However, even though these autonomous agents already demonstrate considerable power, they still lack certain essential human-analogous cognitive capabilities. Hence, some research designs external mechanisms that endow agents with reflection (Yao et al., 2023b; Shinn et al., 2023), task decomposition (Wei et al., 2022b; Yao et al., 2023a), and tool utilization/creation (Schick et al., 2023b; Qin et al., 2023a;b; Qian et al., 2023b) capabilities, which bring autonomous agents closer to achieving artificial general intelligence.
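To make these mechanisms concrete, here is a minimal, illustrative sketch of an LLM-backed agent loop combining task decomposition, tool utilization, and reflection. The `llm` and `tools` callables are hypothetical stand-ins for illustration only, not an API from any of the cited works:

```python
from typing import Callable, Dict, List

def run_agent(goal: str, llm: Callable[[str], str],
              tools: Dict[str, Callable[[str], str]], max_steps: int = 5) -> str:
    """Illustrative agent loop: decompose the goal, act with tools, reflect."""
    # Task decomposition: ask the LLM to break the goal into sub-tasks.
    subtasks: List[str] = llm(f"Decompose into numbered sub-tasks: {goal}").splitlines()
    notes: List[str] = []
    for task in subtasks[:max_steps]:
        # Tool utilization: let the LLM pick a tool and an argument for it.
        choice = llm(f"Task: {task}\nTools: {list(tools)}\nReply as 'tool|arg'.")
        name, _, arg = choice.partition("|")
        result = tools.get(name.strip(), lambda a: "no-op")(arg.strip())
        # Reflection: critique the outcome and record the lesson for later steps.
        notes.append(llm(f"Task: {task}\nResult: {result}\nWhat should change next step?"))
    return llm(f"Summarize the outcome of '{goal}' given notes: {notes}")
```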
2308.10848#40
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
41
(Continuation of the explanation-generation table.) The chunk opens with the overlap tail of the w/o hints block: Toys ROUGE1 13.5231 / 14.2716 / 15.3474 / 16.7823 (NRT / PETER / P5 / RecSysLLM), Toys ROUGE2 2.9942 / 3.6708 / 3.6718 / 3.7209 / 4.8372 and Toys ROUGEL 10.7398 / 11.1867 / 11.7010 / 12.1312 / 15.0231 (Attn2Seq / NRT / PETER / P5 / RecSysLLM). The w/ hints block follows; the second, unlabeled P5/RecSysLLM row pair follows the extraction order and, by analogy with the caption of Table 6, likely corresponds to the shaded unseen-prompt (zero-shot) setting. The final Toys ROUGEL column is cut off at the chunk boundary; only the PETER+ value survives.

w/ hints         Sports                               Beauty                               Toys
Methods     BLEU4   ROUGE1   ROUGE2  ROUGEL     BLEU4   ROUGE1   ROUGE2  ROUGEL     BLEU4   ROUGE1    ROUGE2  ROUGEL
PETER+      2.4627  24.1181  5.1937  18.4105    3.2606  25.5541  5.9668  19.7168    4.7919  28.3083   9.4520  22.7017
P5          1.4689  23.5476  5.3926  17.5852    1.8765  25.1183  6.0764  19.4488    3.8933  27.9916   9.5896  [cut]
RecSysLLM   3.7232  30.1129  5.0232  20.0020    4.8232  26.9832  6.2382  21.4842    5.9323  29.3232   9.4234  [cut]
P5          1.4303  23.3810  5.3239  17.4913    1.9031  25.1763  6.1980  19.5188    3.5861  28.1369   9.7562  [cut]
RecSysLLM   3.9842  30.2913  5.8923  20.3821    5.0021  27.3854  6.7281  22.7439    6.2912  30.2948  10.0329  [cut]
2308.10837#41
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
41
Multi-agent System. In human society, a well-organized group composed of individual humans can often collaboratively handle a greater workload and accomplish complex tasks with higher efficiency and effectiveness. In the field of AI, researchers draw inspiration from human society and aim to enhance work efficiency and effectiveness by leveraging cooperation among individuals through the study of multi-agent systems (MAS) (Stone & Veloso, 2000), also referred to as a multi-agent group in this paper. The multi-agent group collaboratively makes decisions and executes corresponding actions in a distributed and parallel manner to achieve the common goal, which significantly improves work efficiency and effectiveness. Previous works have leveraged multi-agent joint training to achieve this goal. Recently, some studies have attempted to leverage the intelligence and capabilities of agents for autonomous collaboration. Li et al. (2023) have conceptualized assemblies of agents as a group, and focused on exploring the potential of their cooperation. Park et al. (2023) found social behaviors autonomously emerge within a group of agents, and Du et al. (2023); Wang et al. (2023b); Zhang et al. (2023a); Qian et al.
2308.10848#41
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
42
within a group of agents, and Du et al. (2023); Wang et al. (2023b); Zhang et al. (2023a); Qian et al. (2023a); Chan et al. (2023) further leverage multi-agent cooperation to achieve better performance on reasoning tasks. Based on these findings, we introduce a framework, denoted as AGENTVERSE, capable of leveraging group cooperation to manage more intricate scenarios. This framework can dynamically adjust its composition according to the current state, aiming to facilitate optimal decision-making and execution.
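Read as pseudocode, the iterate-and-adjust process described above might look like the following sketch. The stage names (recruit a composition for the current state, decide collaboratively, execute, evaluate) are paraphrases of the description in this excerpt, and every function signature here is a hypothetical assumption, not the AGENTVERSE API:

```python
def agentverse_round(goal, state: dict, recruit, decide, execute, evaluate,
                     max_rounds: int = 3) -> dict:
    """Sketch of a dynamically re-composed multi-agent round loop."""
    for _ in range(max_rounds):
        agents = recruit(goal, state)         # composition adapts to the current state
        plan = decide(agents, goal, state)    # collaborative decision-making
        state = execute(agents, plan, state)  # distributed action execution
        done, feedback = evaluate(goal, state)
        if done:
            break
        state = {**state, "feedback": feedback}  # feed evaluation into the next round
    return state
```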
2308.10848#42
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
43
Table 6: Performance on review summarization (%). The shaded rows in the original denote tests on unseen prompts in a zero-shot manner. Reconstructed from a column-major extraction; "BLUE2" in the extracted header is a typo for BLEU2.

                 Sports                               Beauty                              Toys
Methods     BLEU2   ROUGE1   ROUGE2  ROUGEL     BLEU2   ROUGE1  ROUGE2  ROUGEL     BLEU2   ROUGE1   ROUGE2  ROUGEL
T0          2.1581   2.2695  0.5694   1.6221    1.2871  1.2750  0.3904   0.9592    2.2296   2.4671  0.6482   1.8424
GPT-2       0.7779   4.4534  1.0033   1.9236    0.5879  3.3844  0.6756   1.3956    0.6221   3.7149  0.6629   1.4813
P5          2.6910  12.0314  3.2921  10.7274    1.9325  8.2909  1.4321   7.4000    1.7833   8.7222  1.3210   7.6134
RecSysLLM   4.2823  14.8343  4.3984  12.4833    3.3821  9.8103  2.8543  10.4003    4.0320  12.2932  3.2943  10.4092
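As a reference point, the BLEU-2 and ROUGE variants reported in this table can be computed with standard packages. A minimal sketch assuming the nltk and rouge-score libraries (not the paper's own evaluation code), with a toy reference/candidate pair:

```python
from nltk.translate.bleu_score import sentence_bleu  # pip install nltk
from rouge_score import rouge_scorer                 # pip install rouge-score

reference = "great shoes for running"
candidate = "good shoes for running"

# BLEU-2: uniform weights over 1-grams and 2-grams of the tokenized texts.
bleu2 = sentence_bleu([reference.split()], candidate.split(), weights=(0.5, 0.5))

# ROUGE-1/2/L F-measures between reference and candidate.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = {name: score.fmeasure for name, score in scorer.score(reference, candidate).items()}

print(bleu2, rouge)
```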
2308.10837#43
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
43
# 6 CONCLUSION In this study, we present AGENTVERSE, a novel and general multi-agent framework designed to emulate human group problem-solving processes. Our comprehensive experimental results highlight the efficacy of AGENTVERSE, demonstrating its enhanced performance in comparison to individual agents across a myriad of tasks. These tasks encompass general understanding, reasoning, coding, and tool utilization. Notably, AGENTVERSE consistently delivers remarkable results in addressing intricate user queries when fortified with the appropriate tools. In our investigations within the Minecraft environment, we identify both positive and negative emergent social behaviors among agents. As advancements in artificial general intelligence progress, understanding multi-agent interactions should become increasingly crucial. AGENTVERSE serves as a valuable step toward this endeavor, and we are optimistic about its potential adaptability and refinement for a wider array of tasks and contexts in the future. # REFERENCES
2308.10848#43
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
44
# REFERENCES Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J. Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, and Mengyuan Yan. Do as I can, not as I say: Grounding language in robotic affordances. CoRR, abs/2204.01691, 2022. doi: 10.48550/arXiv.2204.01691. URL https://doi.org/10.48550/arXiv.2204.01691.
2308.10848#44
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
45
Performance comparison (apparently the paper's candidate/direct-recommendation results), reconstructed from a column-major extraction. Rows follow the extraction order; the second, unlabeled P5/RecSysLLM pair likely corresponds to the unseen-prompt setting. The Beauty block is cut off at the chunk boundary after NDCG@5 (its last value, 0.1097, and the remaining Beauty and Toys columns appear in the overlapping next chunk). The Beauty HR@1 value 0.6072 is as extracted (sic); it is inconsistent with its neighbors and may be 0.0672 in the original.

                 Sports                                       Beauty
Methods     HR@1    HR@5    NDCG@5  HR@10   NDCG@10     HR@1          HR@5    NDCG@5
BPR-MF      0.0314  0.1404  0.0848  0.2563  0.1220      0.0311        0.1426  0.0857
BPR-MLP     0.0351  0.1520  0.0927  0.2671  0.1296      0.0317        0.1392  0.0848
SimpleX     0.0331  0.2362  0.1505  0.3290  0.1800      0.0325        0.2247  0.1441
P5          0.0641  0.1794  0.1229  0.2598  0.1488      0.0588        0.1573  0.1089
RecSysLLM   0.0654  0.2008  0.1438  0.2984  0.1692      0.0618        0.1612  0.1110
P5          0.0726  0.1955  0.1355  0.2802  0.1627      0.0608        0.1564  0.1096
RecSysLLM   0.0892  0.2029  0.1502  0.3001  0.1703      0.6072 (sic)  0.1502  0.1097
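For clarity, HR@K and NDCG@K as used in this table are standard top-K ranking metrics; with a single ground-truth item per user they reduce to the following. A minimal sketch, not the paper's evaluation code:

```python
import math

def hr_ndcg_at_k(ranked_items, true_item, k):
    """HR@K is 1 if the true item appears in the top-K; NDCG@K discounts by rank."""
    top_k = ranked_items[:k]
    if true_item not in top_k:
        return 0.0, 0.0
    rank = top_k.index(true_item) + 1        # 1-based position in the ranking
    return 1.0, 1.0 / math.log2(rank + 1)    # ideal DCG is 1 for one relevant item

# Example: true item ranked 3rd -> HR@5 = 1.0, NDCG@5 = 0.5.
# Reported scores are these values averaged over all test users.
print(hr_ndcg_at_k(["a", "b", "c", "d", "e"], "c", 5))
```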
2308.10837#45
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10837
46
(Continuation of the recommendation table; metrics HR@1 / HR@5 / NDCG@5 / HR@10 / NDCG@10 and method order BPR-MF / BPR-MLP / SimpleX / P5 / RecSysLLM / P5 / RecSysLLM as in the previous chunk.) The chunk opens with overlap from the Beauty block (the HR@5 tail 0.1573 / 0.1612 / 0.1564 / 0.1502 and the full NDCG@5 column ending 0.1097), then completes Beauty and gives the full Toys block:

            Beauty (cont.)        Toys
Methods     HR@10   NDCG@10     HR@1    HR@5    NDCG@5  HR@10   NDCG@10
BPR-MF      0.2573  0.1224      0.0233  0.1066  0.0641  0.2003  0.0940
BPR-MLP     0.2542  0.1215      0.0252  0.1142  0.0688  0.2077  0.0988
SimpleX     0.3090  0.1711      0.0268  0.1958  0.1244  0.2662  0.1469
P5          0.2325  0.1330      0.0386  0.1122  0.0756  0.1807  0.0975
RecSysLLM   0.2209  0.1302      0.0370  0.1301  0.0808  0.1902  0.0998
P5          0.2300  0.1332      0.0389  0.1147  0.0767  0.1863  0.0997
RecSysLLM   0.2317  0.1302      0.0327  0.1423  0.0825  0.1926  0.1028
2308.10837#46
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
46
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernández Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag,
2308.10848#46
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
47
IDs based on their order of occurrence in the dataset. This type of simplistic representation cannot capture semantic information about the items. In contrast, our RecSysLLM model represents all items as text strings. The textual representation enables our large language model to understand and capture nuanced interrelationships between items much more effectively. We believe this is the primary reason why our model outperformed P5 in most cases. The textual representation in our model empowers it to ingest semantic details and identify meaningful connections that cannot be derived from IDs alone. # Applications in a real-world dataset Dataset The data used in this work was collected from Alipay, a mobile payment platform in China. We extracted user behavior logs, including bills, search queries, and page visits for several recommendation tasks. Each user sequence consists of the user's 500 most recent interactions, spanning over one year of history for some users. The user sequences are used to model evolving user interests and capture both long- and short-term preferences. The training set contains 200,000 sequences, and the test set contains 10,000 sequences. The large-scale real-world dataset enables the modeling of complex user behavior and preferences for various recommendation tasks. The hierarchical categories and sequential interactions provide rich signals for understanding user interests.
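A sketch of how such fixed-length user sequences are typically assembled from raw behavior logs. The column names and toy data below are illustrative assumptions, not the Alipay schema:

```python
import pandas as pd

# Assumed log schema: one row per user interaction.
logs = pd.DataFrame({
    "user_id":   [1, 1, 2, 1],
    "timestamp": [100, 101, 50, 102],
    "event":     ["bill", "search", "page_visit", "bill"],
})

MAX_LEN = 500  # keep each user's 500 most recent interactions

sequences = (
    logs.sort_values("timestamp")              # chronological order within each user
        .groupby("user_id")["event"]
        .apply(lambda s: list(s)[-MAX_LEN:])   # truncate to the most recent events
)
print(sequences.to_dict())  # {1: ['bill', 'search', 'bill'], 2: ['page_visit']}
```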
2308.10837#47
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10837
48
Implementation Details Our RecSysLLM model for Chinese language tasks leverages the powerful ChatGLM-6B (Du et al. 2021) model as a foundation. ChatGLM-6B is an open-source bilingual language model with 6.2 billion parameters, trained on a trillion-token corpus composed primarily of Chinese text with some English. The model architecture is based on the General Language Model (GLM) framework. Our approach builds on this pre-trained ChatGLM-6B foundation by utilizing LoRA to adapt the model to our specific recommender system tasks. We set the rank of LoRA to 8, an appropriate value chosen via an ablation study.
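A minimal sketch of this setup using the Hugging Face peft library. Only the rank (r=8) comes from the text above; the target_modules choice and the other hyperparameters are illustrative assumptions, not values from the paper:

```python
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "THUDM/chatglm-6b"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModel.from_pretrained(base, trust_remote_code=True)

# Rank-8 LoRA adapters, matching the ablation-chosen setting described above.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # the LoRA rank from the paper
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.1,                     # assumed dropout
    target_modules=["query_key_value"],   # ChatGLM's fused attention projection
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```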
2308.10837#48
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
48
J.D. Bransford and B.S. Stein. The Ideal Problem Solver: A Guide for Improving Thinking, Learning, and Creativity. W.H. Freeman, 1993. ISBN 978-0-7167-2205-2. URL https://books.google.com.tw/books?id=nnRxQgAACAAJ. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, abs/2303.12712, 2023. doi: 10.48550/arXiv.2303.12712. URL https://doi.org/10.48550/arXiv.2303.12712.
2308.10848#48
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
49
Sequential Recommendation. Task Description. In this section, we conduct two sequential recommendation tasks to evaluate the performance of our model, i.e., next-item prediction and candidate recommendation. For next-item prediction, the model directly predicts the next item a user will interact with based on their historical interactions and profiles. For candidate recommendation, given a user's interaction history, profiles, and a list of candidate items where only one is positive, the model chooses the correct next item. We have benchmarked our model on the Amazon Sports, Beauty, and Toys datasets and demonstrated superior recommendation capabilities compared to other baseline recommender systems. Here, we compare our RecSysLLM to the powerful generative models ChatGPT and the recently announced GPT-4. We also compare our method against a basic fine-tuning approach of ChatGLM on our recommendation tasks. This allows us to analyze the improvements gained by our specialized techniques that are tailored for recommendation systems based on LLMs. By evaluating against a simple fine-tuning baseline, we can quantify the benefits of our proposed approach and demonstrate that our architectural choices and training methodology confer meaningful advantages on recommendation performance compared to just fine-tuning a large language model out-of-the-box.
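To illustrate the difference between the two task formats, here is a hedged sketch of how prompts for them might be constructed. The templates are invented for illustration; the paper's actual prompt templates are not shown in this excerpt:

```python
def next_item_prompt(history):
    """Open-ended next-item prediction: the model must generate the item."""
    items = ", ".join(history)
    return f"A user interacted with these items in order: {items}. Predict the next item."

def candidate_rec_prompt(history, candidates):
    """Candidate recommendation: the model must choose from a closed list."""
    items = ", ".join(history)
    opts = "; ".join(f"({i}) {c}" for i, c in enumerate(candidates, 1))
    return (f"A user interacted with these items in order: {items}. "
            f"Which of the following will they interact with next? {opts}")

print(next_item_prompt(["running shoes", "water bottle"]))
print(candidate_rec_prompt(["running shoes"], ["socks", "laptop", "yoga mat"]))
```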
2308.10837#49
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
49
Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. Chateval: Towards better llm-based evaluators through multi-agent debate, 2023. URL https://doi.org/10.48550/arXiv.2308.07201. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan,
2308.10848#49
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
50
Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.
2308.10848#50
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
51
is a surprising result, as we expected the larger GPT-4 model to achieve superior performance compared to ChatGPT on this recommendation task due to its greater parameter size and pretraining scale. However, GPT-4 did not exhibit particularly strong results and was not decisively superior to ChatGPT. There are several potential explanations for why GPT-4 underperformed expectations on next-item prediction. First, the dataset and evaluation methodology used for this task may not have fully exercised GPT-4's strengths in areas like few-shot learning and knowledge recall. Second, GPT-4's more powerful generative capabilities may have caused it to diverge too far from the tight distributions of the recommendation data. There could be a mismatch between GPT-4's broad natural language generation skills and the specialized prediction required by the recommender system task. In summary, our specialized RecSysLLM demonstrates that simply utilizing a larger pre-trained language model is not the only path to improved recommendation performance. The model architecture and pretraining objectives also play a vital role. By designing a model specifically for recommendation, focusing the
2308.10837#51
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLMs). However, effectively integrating LLMs' commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
51
Robert B Cialdini and Noah J Goldstein. Social influence: Compliance and conformity. Annu. Rev. Psychol., 55:591–621, 2004. URL https://www.annualreviews.org/doi/abs/10.1146/annurev.psych.55.090902.142015. Robert B Cialdini and Melanie R Trost. Social influence: Social norms, conformity and compliance. 1998. URL https://psycnet.apa.org/RECORD/1998-07091-021. Jeff Clune. Ai-gas: Ai-generating algorithms, an alternate paradigm for producing general artificial intelligence. CoRR, abs/1905.10985, 2019. URL http://arxiv.org/abs/1905.10985. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.
2308.10848#51
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework, AgentVerse, that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
52
recommendation performance. The model architecture and pretraining objectives also play a vital role. By designing a model specifically for recommendation, focusing the pretraining on recommendation data, and tightly bounding the final fine-tuning, our RecSysLLM is able to match or exceed the performance of even much larger general language models like GPT-4 for next-item prediction. These results highlight the importance of specialized model design in addition to scale for advancing recommendation systems.
2308.10837#52
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
52
Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 8469–8488. PMLR, 2023. URL https://proceedings.mlr.press/v202/driess23a.html.
2308.10848#52
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
53
Candidate Recommendation. For candidate recommendation in Table 9, our RecSysLLM consistently outperforms both ChatGPT and the naive ChatGLM fine-tuning across metrics. This demonstrates the effectiveness of our specialized approach for this task. In contrast to the next-item results, this time GPT-4 achieves the overall best performance on candidate recommendation. In candidate recommendation, given a user’s interaction history, profile, and a list of candidate items where only one is the ground-truth next interaction, the model must choose the correct item from the candidates. With a constrained set of options provided, GPT-4 is able to give full play to its powerful reasoning and deduction capabilities. The limited choice set prevents GPT-4’s generative tendencies from leading it astray. As a result, GPT-4 is able to leverage its scale and pretraining to achieve the best overall performance on candidate recommendation. In summary, by providing GPT-4 with a focused set of candidates, we can elicit its strengths in logical reasoning while avoiding over-generation. This allows GPT-4 to achieve state-of-the-art results on candidate
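The constrained-choice protocol described in this chunk lends itself to a small evaluation sketch. The Python below is a minimal, hypothetical illustration, not the paper's pipeline: the prompt template, item IDs, and the `query_model` stub are all assumptions introduced here.

```python
# Hypothetical sketch of the constrained candidate-recommendation protocol:
# prompt wording, item IDs, and query_model are illustrative assumptions,
# not the paper's actual implementation.

def build_prompt(history, candidates):
    """Format a single constrained-choice query for a language model."""
    lines = ["A user interacted with these items, in order:"]
    lines += [f"  {i + 1}. {item}" for i, item in enumerate(history)]
    lines.append("Which ONE of the following candidates comes next?")
    lines += [f"  - {c}" for c in candidates]
    lines.append("Answer with the candidate name only.")
    return "\n".join(lines)

def query_model(prompt, candidates):
    """Stand-in for an LLM call; a real system would send `prompt` to the
    model and parse its reply. Here we pick the first candidate so the
    sketch runs end to end."""
    return candidates[0]

def hit_at_1(history, candidates, ground_truth):
    """HR@1 for one example: did the model's single pick match?"""
    choice = query_model(build_prompt(history, candidates), candidates)
    return float(choice.strip() == ground_truth)

history = ["item_12", "item_87", "item_33"]      # hypothetical interaction log
candidates = ["item_55", "item_90", "item_41"]   # exactly one is the true next item
print(hit_at_1(history, candidates, ground_truth="item_55"))  # 1.0 with this stub
```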
2308.10837#53
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
53
Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. CoRR, abs/2305.14325, 2023. doi: 10.48550/arXiv.2305.14325. URL https://doi.org/10.48550/arXiv.2305.14325. Ernst Fehr and Simon Gächter. Cooperation and punishment in public goods experiments. American Economic Review, 90(4):980–994, 2000. URL https://pubs.aeaweb.org/doi/pdf/10.1257/aer.90.4.980. Ben Goertzel and Cassio Pennachin. Artificial general intelligence, volume 2. Springer, 2007. URL https://link.springer.com/book/10.1007/978-3-540-68677-4.
2308.10848#53
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
54
we can elicit its strengths in logical reasoning while avoiding over-generation. This allows GPT-4 to achieve state-of-the-art results on candidate recommendation, showcasing the benefits of its scale and pretraining. Our specialized RecSysLLM still exceeds ChatGPT and the fine-tuned ChatGLM on this task, demonstrating the value of recommendation-specific modeling. But these results highlight how large generative LMs like GPT-4 can excel given the right setup.
2308.10837#54
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
54
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: communicative agents for "mind" exploration of large scale language model society. CoRR, abs/2303.17760, 2023. doi: 10.48550/arXiv.2303.17760. URL https://doi.org/10.48550/arXiv.2303.17760. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. Agentbench: Evaluating llms as agents, 2023.
2308.10848#54
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
55
Conclusion The focus of this paper is to design a novel paradigm of pretraining recommendation models based on large language models. We introduce a novel mask mechanism, span order, and positional encoding to inject inter- and intra-entity

Table 8: Performance on next item recommendation.
Methods       HR@5    NDCG@5  HR@10   NDCG@10
ChatGPT       0.4326  0.3208  0.5110  0.3465
GPT-4         0.3846  0.2890  0.4674  0.3159
ChatGLM+SFT   0.2654  0.2091  0.3729  0.2513
RecSysLLM     0.3805  0.3072  0.4756  0.4091

Table 9: Performance on candidate recommendation task.
Methods       HR@1    HR@5    NDCG@5  HR@10   NDCG@10
ChatGPT       0.3786  0.5550  0.4715  0.6424  0.5001
GPT-4         0.7079  0.8154  0.7671  0.8560  0.7804
ChatGLM+SFT   0.2984  0.7012  0.6826  0.7621  0.7038
RecSysLLM     0.4965  0.7435  0.7032  0.7728  0.7237
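For reference, the HR@K and NDCG@K columns in these tables follow the standard single-ground-truth definitions. The short sketch below computes both for one ranked list using the usual formulas; it is an illustration, not code from the paper.

```python
import math

def hr_at_k(ranked_items, ground_truth, k):
    """Hit Ratio@K: 1.0 iff the true next item appears in the top K."""
    return float(ground_truth in ranked_items[:k])

def ndcg_at_k(ranked_items, ground_truth, k):
    """NDCG@K with a single relevant item: 1/log2(rank + 1) if the true
    item is ranked within the top K, else 0. The ideal DCG is 1 in this
    setting, so no extra normalization is needed."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == ground_truth:
            return 1.0 / math.log2(rank + 1)
    return 0.0

ranked = ["item_9", "item_4", "item_7", "item_2", "item_5"]  # model's top-5
print(hr_at_k(ranked, "item_7", 5))    # 1.0 (true item is in the top 5)
print(ndcg_at_k(ranked, "item_7", 5))  # 0.5 (= 1 / log2(3 + 1))
```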
2308.10837#55
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
55
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. CoRR, abs/2303.17651, 2023. doi: 10.48550/arXiv.2303.17651. URL https://doi.org/10.48550/arXiv.2303.17651.
2308.10848#55
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
56
knowledge into the LLM. Although our method follows the architecture of generative language models (GLM) to some extent, the core ideas of special designs for entities in recommendation tasks can be extended to other large language models. The experiments conducted on public and industrial datasets demonstrate the effectiveness and potential of our proposed model on recommendation systems and related applications. The results show improvements over strong baselines, indicating that encoding entity relationships during pretraining can meaningfully improve downstream performance. While we validate our approach on a select set of datasets, further experiments on a wider range of tasks would better reveal the strengths and limitations of the method. In particular, evaluating the approach across a more diverse set of domains could shed light on how robust the learned representations are. Additionally, from the perspective of causal inference (Yao et al. 2021; Chu et al. 2023), there are likely further improvements to be made in terms of how semantic connections between entities are captured and injected into the model.
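To make the mask-and-position idea concrete, here is a minimal, hypothetical sketch of GLM-style blank infilling applied to an entity span. The token names, the entity-level span choice, and the exact 2D position scheme are assumptions for illustration, not the paper's implementation.

```python
MASK, SOS = "[MASK]", "[SOS]"

def entity_span_infilling(tokens, span):
    """Sketch of GLM-style autoregressive blank infilling on an entity
    span: the span is replaced by a single [MASK] in the source, then
    appended (prefixed by [SOS]) as the generation target. pos1 indexes
    positions in the corrupted source, with all target tokens sharing
    the mask's position; pos2 counts offsets inside the masked span and
    is 0 elsewhere. This mirrors GLM's 2D positional encoding only
    approximately -- an illustrative assumption."""
    start, end = span
    source = tokens[:start] + [MASK] + tokens[end:]
    target = tokens[start:end]
    sequence = source + [SOS] + target
    pos1 = list(range(len(source))) + [start] * (1 + len(target))
    pos2 = [0] * len(source) + list(range(1, len(target) + 2))
    return sequence, pos1, pos2

# Mask the item entity "red running shoes" in a toy interaction record.
tokens = ["user_7", "bought", "red", "running", "shoes"]
seq, pos1, pos2 = entity_span_infilling(tokens, span=(2, 5))
print(seq)   # ['user_7', 'bought', '[MASK]', '[SOS]', 'red', 'running', 'shoes']
print(pos1)  # [0, 1, 2, 2, 2, 2, 2]
print(pos2)  # [0, 0, 0, 1, 2, 3, 4]
```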
2308.10837#56
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
56
Shikib Mehri and Maxine Eskénazi. Unsupervised evaluation of interactive dialog with dialogpt. In Olivier Pietquin, Smaranda Muresan, Vivian Chen, Casey Kennington, David Vandyke, Nina Dethlefs, Koji Inoue, Erik Ekstedt, and Stefan Ultes (eds.), Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGdial 2020, 1st virtual meeting, July 1-3, 2020, pp. 225–235. Association for Computational Linguistics, 2020. URL https://aclanthology.org/2020.sigdial-1.28/. Marvin Minsky. The Society of Mind. Simon & Schuster, 1988. ISBN 0671657135. URL https://jmvidal.cse.sc.edu/lib/minsky88a.html. John C Mowen and Harish Sujan. Volunteer behavior: A hierarchical model approach for investigating its trait and functional motive antecedents. Journal of consumer psychology, 15(2):170–182, 2005. URL https://myscp.onlinelibrary.wiley.com/doi/abs/10.1207/s15327663jcp1502_9.
2308.10848#56
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
57
References Andreas, J. 2022. Language models as agent models. arXiv preprint arXiv:2212.01681. Bao, K.; Zhang, J.; Zhang, Y.; Wang, W.; Feng, F.; and He, X. 2023. Tallrec: An effective and efficient tuning framework to align large language model with recommendation. arXiv preprint arXiv:2305.00447. Bodon, F.; and Rónyai, L. 2003. Trie: an alternative data structure for data mining algorithms. Mathematical and Computer Modelling, 38(7-9): 739–751. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877–1901. Chen, Z. 2023. PALR: Personalization Aware LLMs for Recommendation. arXiv preprint arXiv:2305.07622. Cheng, H.-T.; Koc, L.; Harmsen, J.;
2308.10837#57
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
57
Yohei Nakajima. Babyagi. 2023. URL https://github.com/yoheinakajima/babyagi. [Software]. Allen M Omoto and Mark Snyder. Sustained helping without obligation: motivation, longevity of service, and perceived attitude change among AIDS volunteers. Journal of personality and social psychology, 68(4):671, 1995. URL https://psycnet.apa.org/record/1995-26640-001. OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023a. doi: 10.48550/arXiv.2303.08774. URL https://doi.org/10.48550/arXiv.2303.08774. OpenAI. Chatgpt can now see, hear, and speak, 2023b. URL https://openai.com/blog/chatgpt-can-now-see-hear-and-speak.
2308.10848#57
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
58
Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. CoRR, abs/2304.03442, 2023. doi: 10.48550/arXiv.2304.03442. URL https://doi.org/10.48550/arXiv.2304.03442. Katherine Phillips and Charles O’Reilly. Demography and diversity in organizations: A review of 40 years of research. Research in Organizational Behavior, 20:77–140, 01 1998. URL https://www.researchgate.net/publication/234022034_Demography_and_Diversity_in_Organizations_A_Review_of_40_Years_of_Research. Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. CoRR, abs/2307.07924, 2023a. doi: 10.48550/arXiv.2307.07924. URL https://doi.org/10.48550/arXiv.2307.07924.
2308.10848#58
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
59
Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1724–1734. Chu, Z.; Ding, H.; Zeng, G.; Huang, Y.; Yan, T.; Kang, Y.; and Li, S. 2022. Hierarchical capsule prediction network for marketing campaigns effect. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 3043–3051. Chu, Z.; Huang, J.; Li, R.; Chu, W.; and Li, S. 2023. Causal effect estimation: Recent advances, challenges, and opportunities. arXiv preprint arXiv:2302.00848. Dai, S.; Shao, N.; Zhao, H.; Yu, W.; Si, Z.; Xu, C.; Sun, Z.; Zhang, X.; and Xu, J. 2023. Uncovering
2308.10837#59
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10837
60
Zhao, H.; Yu, W.; Si, Z.; Xu, C.; Sun, Z.; Zhang, X.; and Xu, J. 2023. Uncovering ChatGPT’s Capabilities in Recommender Systems. arXiv preprint arXiv:2305.02182. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Dong, L.; Huang, S.; Wei, F.; Lapata, M.; Zhou, M.; and Xu, K. 2017. Learning to generate product reviews from attributes. In EACL. Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; and Tang, J. 2021. Glm: General language model pre-training with autoregressive blank infilling. arXiv preprint arXiv:2103.10360. Friedman, L.; Ahuja, S.; Allen, D.; Tan, T.; Sidahmed, H.; Long, C.; Xie, J.; Schubiner, G.; Patel, A.; Lara, H.;
2308.10837#60
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
60
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. Tool learning with foundation models. CoRR, abs/2304.08354, 2023a. doi: 10.48550/arXiv.2304.08354. URL https://doi.org/10.48550/arXiv.2304.08354.
2308.10848#60
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
61
Allen, D.; Tan, T.; Sidahmed, H.; Long, C.; Xie, J.; Schubiner, G.; Patel, A.; Lara, H.; et al. 2023. Leveraging Large Language Models in Conversational Recommender Systems. arXiv preprint arXiv:2305.07961. Gao, Y.; Sheng, T.; Xiang, Y.; Xiong, Y.; Wang, H.; and Zhang, J. 2023. Chat-rec: Towards interactive and explainable llms-augmented recommender system. arXiv preprint arXiv:2303.14524. Geng, S.; Liu, S.; Fu, Z.; Ge, Y.; and Zhang, Y. 2022. Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5). In Proceedings of the 16th ACM Conference on Recommender Systems, 299–315. Gu, J.; Zhao, H.; Xu, H.; Nie, L.; Mei, H.; and Yin, W. 2023. Robustness of Learning from Task Instructions. In Findings of ACL. Hidasi, B.; Karatzoglou,
2308.10837#61
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
61
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023b. URL https://arxiv.org/abs/2307.16789. Reworkd. Agentgpt, 2023. URL https://github.com/reworkd/AgentGPT. [Software]. Toran Bruce Richards et al. Auto-gpt: An autonomous gpt-4 experiment, 2023. URL https://github.com/Significant-Gravitas/Auto-GPT. [Software]. Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, and Zeynep Akata. In-context impersonation reveals large language models’ strengths and biases. CoRR, abs/2305.14930, 2023. doi: 10.48550/arXiv.2305.14930. URL https://doi.org/10.48550/arXiv.2305.14930.
2308.10848#61
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
62
H.; and Yin, W. 2023. Robustness of Learning from Task Instructions. In Findings of ACL. Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939. Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2016. Session-based Recommendations with Recurrent Neural Networks. In ICLR. Hou, Y.; Zhang, J.; Lin, Z.; Lu, H.; Xie, R.; McAuley, J.; and Zhao, W. X. 2023. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845.
2308.10837#62
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
62
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023a. doi: 10.48550/arXiv.2302.04761. URL https://doi.org/10.48550/arXiv.2302.04761. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023b. doi: 10.48550/arXiv.2302.04761. URL https://doi.org/10.48550/arXiv.2302.04761.
2308.10848#62
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
63
Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Hui, B.; Zhang, L.; Zhou, X.; Wen, X.; and Nian, Y. 2022. Personalized recommendation system based on knowledge embedding and historical behavior. Applied Intelligence, 1–13. Jiang, C.; Xue, S.; Zhang, J.; Liu, L.; Zhu, Z.; and Hao, H. 2022. Learning Large-scale Universal User Representation with Sparse Mixture of Experts. Kang, W.-C.; and McAuley, J. 2018. Self-attentive sequential recommendation. In 2018 IEEE international conference on data mining (ICDM), 197–206. IEEE. Kang, W.-C.; Ni, J.; Mehta, N.; Sathiamoorthy, M.; Hong, L.; Chi, E.; and Cheng, D. Z. 2023. Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction.
2308.10837#63
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
63
Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=fR3wGCk-IXp. Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023. URL https://doi.org/10.48550/arXiv.2303.11366.
2308.10848#63
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
64
L.; Chi, E.; and Cheng, D. Z. 2023. Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction. arXiv preprint arXiv:2305.06474. Koren, Y.; Bell, R.; and Volinsky, C. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8): 30–37. Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25. Li, L.; Zhang, Y.; and Chen, L. 2021. Personalized Transformer for Explainable Recommendation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 4947–4957. Li, P.; Wang, Z.; Ren, Z.; Bing, L.; and Lam, W. 2017. Neural rating regression with abstractive tips generation for recommendation. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information
2308.10837#64
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
64
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin
2308.10848#64
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
65
Neural rating regression with abstractive tips generation for recommendation. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, 345–354. Li, S.; and Zhao, H. 2021. A survey on representation learning for user modeling. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, 4997–5003. Lin, J.; Dai, X.; Xi, Y.; Liu, W.; Chen, B.; Li, X.; Zhu, C.; Guo, H.; Yu, Y.; Tang, R.; et al. 2023. How Can Recommender Systems Benefit from Large Language Models: A Survey. arXiv preprint arXiv:2306.05817. Liu, J.; Liu, C.; Lv, R.; Zhou, K.; and Zhang, Y. 2023a. Is ChatGPT a good recommender? A preliminary study. arXiv preprint arXiv:2304.10149. Liu, Q.; Chen, N.; Sakai, T.; and Wu, X.-M. 2023b. A First Look at LLM-Powered Generative News Recommendation. arXiv
2308.10837#65
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10837
66
N.; Sakai, T.; and Wu, X.-M. 2023b. A First Look at LLM-Powered Generative News Recommendation. arXiv preprint arXiv:2305.06566. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Ma, C.; Kang, P.; and Liu, X. 2019. Hierarchical gating networks for sequential recommendation. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 825–833.
2308.10837#66
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
66
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. Learning to summarize with human feedback. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1f89885d556929e98d3ef9b86448f951-Abstract.html. Peter Stone and Manuela Veloso. Multiagent systems: A survey from a machine learning perspective. Auton. Robots, 8(3):345–383, jun 2000. ISSN 0929-5593. doi: 10.1023/A:1008942012299. URL https://doi.org/10.1023/A:1008942012299.
2308.10848#66
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
67
Mao, K.; Zhu, J.; Wang, J.; Dai, Q.; Dong, Z.; Xiao, X.; and He, X. 2021. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 1243–1252. Muhamed, A.; Keivanloo, I.; Perera, S.; Mracek, J.; Xu, Y.; Cui, Q.; Rajagopalan, S.; Zeng, B.; and Chilimbi, T. 2021. CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models. In NeurIPS Efficient Natural Language and Speech Processing Workshop. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744. Qiu, Z.; Wu, X.; Gao, J.; and Fan, W. 2021.
2308.10837#67
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
67
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023a. doi: 10.48550/arXiv.2302.13971. URL https://doi.org/10.48550/arXiv.2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar
2308.10848#67
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
68
Information Processing Systems, 35: 27730–27744. Qiu, Z.; Wu, X.; Gao, J.; and Fan, W. 2021. U-BERT: Pre-training user representations for improved recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 4320–4327. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. ???? Improving language understanding by generative pre-training. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I.; et al. 2019. Language models are unsupervised multitask learners. OpenAI blog. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1): 5485–5551. Rasley, J.; Rajbhandari, S.; Ruwase, O.; and He, Y. 2020. Deepspeed: System optimizations enable training
2308.10837#68
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
68
Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023b. doi: 10.48550/arXiv.2307.09288. URL https://doi.org/10.48550/arXiv.2307.09288.
2308.10848#68
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
69
5485–5551. Rasley, J.; Rajbhandari, S.; Ruwase, O.; and He, Y. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 3505–3506. Rendle, S.; Freudenthaler, C.; Gantner, Z.; and Schmidt-Thieme, L. 2009. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI ’09, 452–461. Arlington, Virginia, USA: AUAI Press. ISBN 9780974903958. Sanh, V.; Webson, A.; Raffel, C.; Bach, S.; Sutawika, L.; Alyafeai, Z.; Chaffin, A.; Stiegler, A.; Raja, A.; Dey, M.; Bari, M. S.; Xu, C.; Thakker, U.; Sharma, S. S.; Szczechla, E.; Kim, T.; Chhablani, G.; Nayak, N.; Datta, D.;
2308.10837#69
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
69
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. CoRR, abs/2305.16291, 2023a. doi: 10.48550/arXiv.2305.16291. URL https://doi.org/10.48550/arXiv.2305.16291. Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. CoRR, abs/2307.05300, 2023b. doi: 10.48550/arXiv.2307.05300. URL https://doi.org/10.48550/arXiv.2307.05300.
2308.10848#69
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
70
U.; Sharma, S. S.; Szczechla, E.; Kim, T.; Chhablani, G.; Nayak, N.; Datta, D.; Chang, J.; Jiang, M. T.-J.; Wang, H.; Manica, M.; Shen, S.; Yong, Z. X.; Pandey, H.; Bawden, R.; Wang, T.; Neeraj, T.; Rozen, J.; Sharma, A.; Santilli, A.; Fevry, T.; Fries, J. A.; Teehan, R.; Scao, T. L.; Biderman, S.; Gao, L.; Wolf, T.; and Rush, A. M. 2022. Multitask Prompted Training Enables Zero-Shot Task Generalization. In International Conference on Learning Representations. Schuster, M.; and Paliwal, K. K. 1997. Bidirectional recurrent neural networks. IEEE transactions on Signal Processing, 45(11): 2673–2681. Sheu, H.-S.; Chu, Z.; Qi, D.; and Li, S. 2021. Knowledge-guided article embedding refinement for session-based news
2308.10837#70
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
70
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022a. URL https://openreview.net/forum?id=gEZrGCozdqR. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS, 2022b. URL http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html.
2308.10848#70
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
71
recommendation. IEEE Transactions on Neural Networks and Learning Systems, 33(12): 7921–7927. Shi, X.; Xue, S.; Wang, K.; Zhou, F.; Zhang, J. Y.; Zhou, J.; Tan, C.; and Mei, H. 2023. Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning. arXiv preprint arXiv:2305.16646. Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM international conference on information and knowledge management, 1441–1450. Tang, J.; and Wang, K. 2018. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the eleventh ACM international conference on web search and data mining, 565–573. Tsai, C. F.; Zhou, X.; Liu, S. S.; Li, J.; Yu, M.; and Mei, H. 2023. Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions.
2308.10837#71
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
71
Jimmy Wei, Kurt Shuster, Arthur Szlam, Jason Weston, Jack Urbanek, and Mojtaba Komeili. Multi-party chat: Conversational agents in group settings with humans and models. CoRR, abs/2304.13835, 2023. doi: 10.48550/arXiv.2304.13835. URL https://doi.org/10.48550/arXiv.2304.13835. Michael J. Wooldridge and Nicholas R. Jennings. Intelligent agents: theory and practice. Knowl. Eng. Rev., 10(2):115–152, 1995. doi: 10.1017/S0269888900008122. URL https://doi.org/10.1017/S0269888900008122. Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone. Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004):686–688, 2010. doi: 10.1126/science.1193147. URL https://www.science.org/doi/abs/10.1126/science.1193147.
2308.10848#71
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
72
Li, J.; Yu, M.; and Mei, H. 2023. Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions. arXiv preprint. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, W.; Lin, X.; Feng, F.; He, X.; and Chua, T.-S. 2023. Generative recommendation: Towards next-generation recommender paradigm. arXiv preprint arXiv:2304.03516. Wang, X.; Zhou, K.; Wen, J.-R.; and Zhao, W. X. 2022. Towards unified conversational recommender systems via knowledge-enhanced prompt learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1929–1937. Wu, C.; Wu, F.; Qi, T.; and Huang, Y. 2021. Empowering news
2308.10837#72
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
72
Anita Williams Woolley, Ishani Aggarwal, and Thomas W. Malone. Collective intelligence and group performance. Current Directions in Psychological Science, 24(6):420–424, 2015. doi: 10.1177/0963721415599543. URL https://doi.org/10.1177/0963721415599543. Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework, 2023. URL https://doi.org/10.48550/arXiv.2308.08155. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601, 2023a. doi: 10.48550/arXiv.2305.10601. URL https://doi.org/10.48550/arXiv.2305.10601.
2308.10848#72
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
73
Data Mining, 1929–1937. Wu, C.; Wu, F.; Qi, T.; and Huang, Y. 2021. Empowering news recommendation with pre-trained language models. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1652–1656. Wu, L.; Zheng, Z.; Qiu, Z.; Wang, H.; Gu, H.; Shen, T.; Qin, C.; Zhu, C.; Zhu, H.; Liu, Q.; et al. 2023. A Survey on Large Language Models for Recommendation. arXiv preprint arXiv:2305.19860. Xiao, S.; Liu, Z.; Shao, Y.; Di, T.; Middha, B.; Wu, F.; and Xie, X. 2022. Training large-scale news recommenders with pretrained language models in the loop. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4215–4225. Xie, S.; Qiu, J.; Pasad, A.; Du, L.; Qu, Q.; and Mei, H. 2022. Hidden State Variability of Pretrained Language Mod-
2308.10837#73
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
73
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023b. URL https://openreview.net/pdf?id=WE_vluYUL-X. Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models. CoRR, abs/2307.02485, 2023a. doi: 10.48550/arXiv.2307.02485. URL https://doi.org/10.48550/arXiv.2307.02485.
2308.10848#73
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
74
Qiu, J.; Pasad, A.; Du, L.; Qu, Q.; and Mei, H. 2022. Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning. In Findings of EMNLP. Xue, S.; Shi, X.; Chu, Z.; Wang, Y.; Zhou, F.; Hao, H.; Jiang, C.; Pan, C.; Xu, Y.; Zhang, J. Y.; Wen, Q.; Zhou, J.; and Mei, H. 2023. EasyTPP: Towards Open Benchmarking the Temporal Point Processes. Xue, S.; Shi, X.; Hao, H.; Ma, L.; Zhang, J.; Wang, S.; and Wang, S. 2021. A Graph Regularized Point Process Model For Event Propagation Sequence. In 2021 International Joint Conference on Neural Networks (IJCNN), 1–7.
2308.10837#74
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
74
Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. Wider and deeper llm networks are fairer llm evaluators. arXiv preprint arXiv:2308.01862, 2023b. URL https://doi.org/10.48550/arXiv.2308.01862. Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, and Graham Neubig. Webarena: A realistic web environment for building autonomous agents. CoRR, abs/2307.13854, 2023. doi: 10.48550/arXiv.2307.13854. URL https://doi.org/10.48550/arXiv.2307.13854. # A CONFIGURATIONS OF THE EXPERIMENTS Datasets and Evaluation Metrics Our evaluation assesses different aspects of agents, including general understanding and reasoning capabilities, coding capabilities, and tool utilization capabilities.
2308.10848#74
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
75
Xue, S.; Shi, X.; Zhang, Y. J.; and Mei, H. 2022. HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences. In Advances in Neural Information Processing Systems. Yao, L.; Chu, Z.; Li, S.; Li, Y.; Gao, J.; and Zhang, A. 2021. A survey on causal inference. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(5): 1–46. Yao, S.; Tan, J.; Chen, X.; Zhang, J.; Zeng, X.; and Yang, K. 2022. ReprBERT: Distilling BERT to an Efficient Representation-Based Relevance Model for E-Commerce. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4363–4371. Yoneda, T.; Fang, J.; Li, P.; Zhang, H.; Jiang, T.; Lin, S.; Picker, B.; Yunis, D.; Mei, H.; and Walter, M. R. 2023. Statler: State-Maintaining Language Models for Embodied Reasoning. arXiv
2308.10837#75
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
75
Datasets and Evaluation Metrics Our evaluation assesses different aspects of agents, including general understanding and reasoning capabilities, coding capabilities, and tool utilization capabilities. • General Understanding Capabilities: We utilize two datasets. The first one is a dialogue response dataset, FED (Mehri & Eskénazi, 2020), where given a multi-round chat history, the agent or agent group is required to generate the next chat. Following previous work (Madaan et al., 2023), we utilize GPT-4 as the evaluator to score the agent-generated response against the human-written ones, and report the agent’s win rate. The second dataset is Commongen-Challenge (Madaan et al., 2023), which is a constrained generation dataset where, given 20 concepts, the agent is required to generate a coherent and grammatically correct paragraph containing as many concepts as possible. We report the average percentage of the covered concepts.
2308.10848#75
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
76
D.; Mei, H.; and Walter, M. R. 2023. Statler: State-Maintaining Language Models for Embodied Reasoning. arXiv preprint. Yu, Z.; Lian, J.; Mahmoody, A.; Liu, G.; and Xie, X. 2019. Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation. In IJCAI, 4213–4219. Zhang, J.; Xie, R.; Hou, Y.; Zhao, W. X.; Lin, L.; and Wen, J.-R. 2023. Recommendation as instruction following: A large language model empowered recommendation approach. arXiv preprint arXiv:2305.07001. Zhang, T.; Zhao, P.; Liu, Y.; Sheng, V. S.; Xu, J.; Wang, D.; Liu, G.; and Zhou, X. 2019. Feature-level Deeper Self-Attention Network for Sequential Recommendation. In IJCAI, 4320–4326. Zhao, H.; Tan, H.; and Mei, H. 2022. Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters. In EMNLP. Zhao, H.; Wang,
2308.10837#76
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
76
• General Reasoning Capabilities: We utilize the English subset of MGSM (Shi et al., 2023), which is a subset of GSM-8k (Cobbe et al., 2021), to evaluate the agents’ mathematical reasoning capabilities. It is a dataset containing grade school math problems. We report the percentage of correct answers. We also use the logic grid puzzles task from BigBench (Srivastava et al., 2022), which contains logic problems that require multi-step logical reasoning, to assess the agents’ logical reasoning capabilities. We report the accuracy. • Coding Capabilities: We utilize Humaneval (Chen et al., 2021), which is a code completion dataset, and report the Pass@1 metric. • Tool Utilization Capabilities: Since automatic evaluation of tool utilization performance is difficult, and there is currently no relevant benchmark, we craft 10 complex instructions and manually assess the performance. The instructions are listed in Appendix B.
2308.10848#76
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AgentVerse framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AgentVerse will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
77
Tan, H.; and Mei, H. 2022. Adapter: Contexts Are More Important Than the Number of Parameters. In EMNLP. Zhao, H.; Wang, K.; Yu, M.; and Mei, H. 2023. Explicit Planning Helps Language Models in Logical Reasoning. arXiv preprint. Zhou, K.; Wang, H.; Zhao, W. X.; Zhu, Y.; Wang, S.; Zhang, F.; Wang, Z.; and Wen, J.-R. 2020. S3-rec: Self-supervised learning for sequential recommendation with mutual infor- mation maximization. In Proceedings of the 29th ACM In- ternational Conference on Information & Knowledge Man- agement, 1893–1902.
2308.10837#77
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
77
Expert Recruitment. For tasks including dialogue response, code completion, and constrained generation, four agents are recruited into the system. For the task of mathematical reasoning, we limit the number to two agents. This decision is based on our observation that an increase in the number of reviewers for mathematical reasoning tasks correlates with a higher likelihood of erroneous critiques, leading to incorrect solutions by the solver; we discuss this topic in Section 3.1. For tool utilization, we recruit two or three agents to engage in collaborative decision-making and action execution, depending on the specific task. The detailed setups are listed in Appendix B. Currently, the number of experts is pre-defined by us for each task (a configuration sketch follows this passage); we are seeking a way to automate this decision as well.
Collaborative Decision-Making. For tasks in coding and general understanding and reasoning, we use the vertical structure, because these tasks require only one response as the answer, and the solver in the vertical structure can be responsible for answering. For tool utilization, we use the horizontal structure, because the agents need to clarify their own sub-tasks in the discussion.
Action Execution. For the HumanEval code completion dataset benchmarked with GPT-4, we incorporate an additional agent during the action execution stage to craft unit-testing code (in a zero-shot manner). Subsequently, the generated code is subjected to unit testing, and the testing results are conveyed as the environment state to the evaluation module.
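The per-task choices above (expert count and decision-making structure) amount to a small configuration table. A minimal sketch with illustrative names that are not from the released code; for tool utilization the count is two or three depending on the task, and three is used here only as an example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskConfig:
    n_experts: int   # agents recruited during expert recruitment
    structure: str   # "vertical" (one solver answers) or "horizontal" (open discussion)

TASK_CONFIGS = {
    "dialogue_response":      TaskConfig(4, "vertical"),
    "code_completion":        TaskConfig(4, "vertical"),
    "constrained_generation": TaskConfig(4, "vertical"),
    "math_reasoning":         TaskConfig(2, "vertical"),    # more reviewers hurt here
    "tool_utilization":       TaskConfig(3, "horizontal"),  # 2-3 depending on the task
}
```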
2308.10848#77
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
78
Regarding the constrained generation dataset, CommonGen-Challenge, the agent-generated response undergoes a concept coverage check. Any missing concepts are then passed to the evaluation module as the environment state. In the context of tool utilization, each agent iteratively calls the tool in the ReAct manner, up to a maximum of 10 iterations. Upon reaching the final iteration, the agent is forced to draw a conclusion regarding the result, labeling the task's status as either "pending" or "finished". These conclusions are then forwarded to the evaluator for assessment.
¹The method for calculating Pass@1 differs from the approach in Chen et al. (2021). Instead of generating multiple responses and calculating an unbiased estimator, we directly employ the first response to compute the Pass@1 (a minimal sketch of this computation is given below).
Evaluation. To facilitate a feedback loop, an agent is tasked with the role of evaluator. This agent, provided with the initial problem p and the decisions A made during the collaborative decision-making stage, is charged with determining the correctness of those decisions. In cases where the decision is identified as erroneous, feedback is channeled back to the expert recruitment stage. If the decision meets the accuracy criteria, it is taken as the final answer to p. While our current configuration employs an agent for evaluation, we acknowledge the potential of human evaluators and intend to incorporate such experiments in future endeavors.
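Under the footnote's convention, Pass@1 is simply the fraction of problems whose single first completion passes the unit tests. A minimal sketch, with `first_completion` and `passes_unit_tests` as hypothetical stand-ins for the generation and testing steps:

```python
from typing import Callable, List

def pass_at_1_first_response(
    problems: List[str],
    first_completion: Callable[[str], str],         # one completion per problem
    passes_unit_tests: Callable[[str, str], bool],  # (problem, code) -> all tests pass?
) -> float:
    """First-response Pass@1, as opposed to the unbiased multi-sample
    estimator of Chen et al. (2021)."""
    n_passed = sum(
        passes_unit_tests(problem, first_completion(problem))
        for problem in problems
    )
    return n_passed / len(problems)
```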
2308.10848#78
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
79
Motivation
Compared with recommendation models based on large language models (LLMs), conventional recommendation models (Hidasi et al. 2015; Tang and Wang 2018; Kang and McAuley 2018; Sun et al. 2019; Geng et al. 2022) trained from scratch using architectures like Transformer (Vaswani et al. 2017), BERT (Devlin et al. 2018), RNN (Schuster and Paliwal 1997), and CNN (Krizhevsky, Sutskever, and Hinton 2012) have several key limitations. First, they lack the deep understanding of context and semantics that comes from pretraining a large model on diverse corpora. As a result, they struggle to truly comprehend user preferences and behavioral sequences. Second, they have minimal ability to generate novel, high-quality recommendations, since they are not optimized for free-form text generation. LLMs, in contrast, can produce human-like recommendations by leveraging their generative capabilities. Third, conventional models have difficulty effectively leveraging multiple data modalities like text, images, audio, etc. LLMs are adept at multimodal processing due to pretraining objectives that learn connections between modalities. Finally,
2308.10837#79
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
79
B EXPERIMENT DETAILS FOR MULTI-AGENT TOOL USE
B.1 SETUPS
This section provides specific implementation details for enabling multiple agents in AGENTVERSE to collaboratively utilize tools to accomplish a user's query. Unless specified herein, the implementation adheres to the standard procedures defined in the other experiments.
Collaborative Decision-Making. Agents recruited during the Expert Recruitment stage engage in collaborative discussions regarding the assigned task using a horizontal communication structure. In this configuration, agents communicate in a predetermined sequential order. At the conclusion of their discussions, an additional agent is designated as the "summarizer" and is responsible for consolidating the discussed sub-tasks for each participant.
Action Execution. During action execution, we follow the ReAct prompting technique: each agent writes down its thought, then calls a tool, and we return the tool's response to the agent. The agent again writes down a thought and calls tools, and the process iterates until the agent decides to submit its conclusion or a maximum number of iterations, which we set to 10, is reached. In practice, we use OpenAI's function calling to handle the tools; a minimal sketch of this loop follows.
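A minimal sketch of the iterative loop just described; `llm_step` (the model call, e.g., via OpenAI function calling) and `call_tool` (the tool backend) are hypothetical placeholders, not the released implementation:

```python
MAX_ITERATIONS = 10  # iteration cap used in the experiments

def react_execute(task: str, llm_step, call_tool) -> str:
    """Iterate thought -> tool call -> observation until a conclusion is submitted.

    llm_step(history) returns either {"conclusion": str} or
    {"thought": str, "tool": str, "args": dict}; call_tool(tool, args)
    returns the tool's textual response.
    """
    history = [f"Task: {task}"]
    for _ in range(MAX_ITERATIONS):
        step = llm_step(history)
        if "conclusion" in step:              # the agent submits its answer
            return step["conclusion"]
        history.append(f"Thought: {step['thought']}")
        observation = call_tool(step["tool"], step["args"])
        history.append(f"Observation: {observation}")
    # Iteration budget exhausted: force a conclusion on what has been gathered.
    return llm_step(history + ["Draw a final conclusion now."])["conclusion"]
```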
2308.10848#79
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
80
multiple data modalities like text, images, audio, etc. LLMs are adept at multimodal processing due to pretraining objectives that learn connections between modalities. Finally, LLMs can seamlessly adapt to new downstream recommendation tasks through simple fine-tuning, whereas conventional models require extensive retraining. For example, BERT4Rec (Sun et al. 2019) employs deep bidirectional self-attention to model user behavior sequences. They are trained solely based on the recommendation data without the general knowledge corpus, resulting in a limited understanding and reasoning of behavior sequence data and an inability to em- power downstream tasks better. In summary, recommenda- tion models based on pretrained LLMs are more contextual, creative, versatile, and adaptable compared to conventional models trained from scratch.
2308.10837#80
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
80
B.2 RESULTS
For action execution, the ReAct prompting technique is employed. Agents articulate their thought process, invoke the respective tool, and subsequently receive feedback from the tool. This cycle of reflection and tool interaction continues iteratively. The process concludes either when the agent finalizes its execution with its conclusion or when a pre-set maximum number of iterations (10 in our experiments) is reached. We utilize OpenAI's function calling to handle the tool calls.
Instruction 1: Recently, it has become popular in the AI field to verify the mathematical reasoning abilities of large language models by observing if they can solve the ”24-Point Game.” What is this game? Does it have a code-based solution? If it does, provide Python code along with test cases and test its functionality. What are some other similar games that can be used to test the models’ mathematical reasoning abilities?
2308.10848#80
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
81
Current Development
Although the application of LLMs like ChatGPT in recommendation has not been widely explored yet, some novel investigations have emerged recently that show their promising potential in this domain. There are mainly three categories.
(1) LLM as a recommendation system. Unlike traditional recommendation methods, these approaches do not retrain a new model, relying only on prompting the LLM (Liu et al. 2023a; Gao et al. 2023; Dai et al. 2023; Chen 2023) or slight fine-tuning (Zhang et al. 2023; Kang et al. 2023; Bao et al. 2023) to convert recommendation tasks into natural language tasks. They typically design a set of prompts for recommendation scenarios, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization. They explore the use of few-shot prompting to inject interaction information that reflects users' potential interests, helping the LLM better understand user needs and interests.
(2) LLM as supplementary information via embeddings or tokens. This modeling paradigm (Wu et al. 2021; Qiu et al. 2021; Yao et al. 2022; Muhamed et al. 2021; Xiao et al. 2022) views the language model as a feature extractor,
2308.10837#81
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
81
Single ReAct Agent: (The generated code is omitted) The ’24-Point Game’ is a mathematical game where four numbers must be used with the basic arithmetic operations (addition, subtraction, multiplication, and division) to make the number 24. The Python code I developed generates all possible combinations of the four numbers and operations to find a solution that equals 24. The code was tested with different sets of numbers, and it correctly identified whether a solution existed and what it was. This task tested my ability to understand the rules of a game and to develop and test a program that solves it. (game rule ✔, code ✔, similar games ✖)
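The generated code is omitted in the log above; a brute-force solver of the kind the agent describes (enumerating number orderings, operators, and parenthesizations) might look like this sketch, with test cases:

```python
from itertools import permutations, product

def solve_24(nums, target=24, eps=1e-6):
    """Return one expression over the four numbers that evaluates to 24, or None."""
    ops = ['+', '-', '*', '/']
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(ops, repeat=3):
            # The five parenthesizations cover every binary evaluation
            # order over four operands.
            exprs = [
                f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                f"({a}{o1}{b}){o2}({c}{o3}{d})",
                f"({a}{o1}({b}{o2}{c})){o3}{d}",
                f"{a}{o1}(({b}{o2}{c}){o3}{d})",
                f"{a}{o1}({b}{o2}({c}{o3}{d}))",
            ]
            for expr in exprs:
                try:
                    if abs(eval(expr) - target) < eps:
                        return expr
                except ZeroDivisionError:
                    continue
    return None

# Test cases
assert solve_24([4, 7, 8, 8]) is not None   # e.g. (7 - 8/8) * 4 = 24
assert solve_24([1, 1, 1, 1]) is None       # no solution exists
print(solve_24([3, 3, 8, 8]))               # e.g. 8/(3 - 8/3) = 24
```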
2308.10848#81
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
82
which feeds the features of items and users into LLMs and outputs corresponding embeddings. A traditional RS model can utilize these knowledge-aware embeddings for various recommendation tasks. This approach (Liu et al. 2023b; Wang et al. 2022, 2023) generates tokens based on the inputted items' and users' features. The generated tokens capture potential preferences through semantic mining, which can be integrated into the decision-making process of a recommendation system.
(3) LLM as Agent. As an agent, the large model assists in scheduling the entire recommendation model for recommendations and is responsible for pipeline control. Specifically, these models (Andreas 2022; Bao et al. 2023; Hou et al. 2023; Lin et al. 2023; Gao et al. 2023; Friedman et al. 2023) help to adapt the LLM to the recommendation domain and coordinate user data collection, feature engineering, feature encoding, and the scoring/ranking function.
2308.10837#82
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
82
AGENTVERSE (2 agents): (The generated code is omitted) The ”24-Point Game” is a mathematical game where four numbers must be used with the basic arithmetic operations (addition, subtraction, multiplication, and division) to make the number 24. A Python code was developed and tested that generates all possible combinations of the four numbers and operations to find a solution that equals 24. The code was tested with different sets of numbers, and it correctly identified whether a solution existed and what it was. Similar games to the ’24-Point Game’ that can be used to test mathematical reasoning abilities include ’Make a Number’ and ’Math Safe’. ’Make a Number’ is a game where players are given 4 numbers and 4 operations and they have to use these to create a target number. ’Math Safe’ is a game where players use four given single-digit numbers and any of the four operations to reach a target number and open a safe. Both these games require the use of operations and logical thinking to reach a target number, similar to the ’24-Point Game’. (game rule ✔, code ✔, similar games ✔)
2308.10848#82
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
83
Challenges
Compared to superficially leveraging large language models, our purpose is to build on the large language model, maximizing the preservation of the knowledge and logical reasoning abilities of the original large language model to ensure sound inference over behavioral sequences and fluent generation for downstream sub-tasks, while also achieving the recommendation function by learning user profile features and user behavior sequences. The crucial aspect of harnessing the power of language models to enhance recommendation quality is the utilization of their high-quality representations of textual features and their extensive coverage of external knowledge to establish correlations between items and users (Wu et al. 2023). Therefore, we need to preserve the tokenization, parameters, and architecture of the large language model as much as possible. For example, the Pretrain, Personalized Prompt, and Predict Paradigm (P5) (Geng et al. 2022) is established upon a basic encoder-decoder framework with Transformer blocks to build both the encoder and decoder. Although it is built on T5 (Raffel et al. 2020), it modifies the structure of the model by adding additional positional encodings and whole-word embeddings, which partially destroys the original knowledge in the language model.
2308.10837#83
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
83
Instruction 2 (Additional API: Weather): I want to hold a party somewhere around Tsinghua University tomorrow. I need you to look for some of the best places for holding a party nearby, and tell me whether the weather is good for holding a party tomorrow. Also, I want to know what activities can be considered for my party. Help me search the web.
Single ReAct Agent: There are several restaurants near Tsinghua University that could be good places for a party, including Lan Yang ShuFang, YiLan ShaoKao, Guan Shi JiChi, Dong Lai Shun, and Xibu Mahua Beef Noodle. The weather in Beijing tomorrow is expected to be overcast with a temperature of 26 degrees Celsius, which might be suitable for a party. Some activities that can be considered for the party include the Saran Wrap Game, the Post It Note Game, Most Likely To, Never Have I Ever, Would You Rather, and many more. (✔)
2308.10848#83
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
84
Notably, there is a difference in the format of the data. Large language models are trained on vast amounts of logically structured text, with consistent reasoning, logical thought processes, and proper grammar. In contrast, recommendation systems analyze digital user features, fixed item entities, and incoherent behavioral sequences. Additionally, the purpose of training data for large language models is to teach the model how to understand language and generate new text that is similar to the training data. Conversely, the purpose of user behavioral sequence data in recommendation systems is to gain a deeper understanding of user preferences, behavior sequences, and the relationships between them, so as to provide personalized recommendations. Therefore, building a recommendation system on top of a large language model that retains the LLM's knowledge and logical reasoning abilities, while also achieving the recommendation function by learning user profile features and user behavior sequences, poses significant challenges.
2308.10837#84
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
84
AGENTVERSE (3 agents): The weather in the Tsinghua University area tomorrow is expected to be overcast with a temperature of 26 degrees Celsius, which is suitable for holding a party. The best places for a party near Tsinghua University include the Summer Palace, Wudaokou, Zhongguancun, and Peking University, as well as several restaurants nearby. For party activities, you could consider games such as the Saran Wrap Game, the Post It Note Game, How’s Yours?, Most Likely To, Never Have I Ever, Would You Rather?, Heads Up!, I’m Hosting a Party..., Two Truths and a Lie, Sticker Stalker, Great Minds Think Alike, murder mystery games, Who Am I?, the Newlywed Game, Truth or Dare, Spoons, Pictionary, Kiss Marry Kill, the Don’t Say Yes game, Cards Against Humanity, What Do You Meme, Drawing Without Dignity, and That’s What She Said. These games are suitable for various party occasions and there are also suggestions for icebreakers for guests who don’t know each other well. (✔ provides more details)
2308.10848#84
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
85
Baselines in Benchmark Experiments
To showcase our competence in a wide range of recommendation-related tasks, we employ representative approaches for different tasks, including Rating Prediction, Direct Recommendation, Sequential Recommendation, Explanation Generation, and Review Summarization, that have been previously used by Geng et al. (2022). The summary of baseline methods for the five different task families is provided in Table 10.
Rating Prediction. This task involves incorporating user-item rating data as part of the training set, where item ratings are represented numerically. The model is asked questions with prompts, and it outputs the corresponding rating values. The baselines for this task are MF (Koren, Bell, and Volinsky 2009) and MLP (Cheng et al. 2016).
Direct Recommendation. For direct recommendation, we employ the classic algorithms BPR-MF (Rendle et al. 2009), BPR-MLP (Cheng et al. 2016), and SimpleX (Mao et al. 2021) as baselines (the shared BPR objective is recalled below). They showcase the effectiveness of direct recommendation tasks when utilizing non-semantic information as features. This allows us to gain a more comprehensive understanding of the potential of recommendations given by
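For reference (a standard result from Rendle et al. 2009, not specific to this paper), the Bayesian Personalized Ranking objective shared by BPR-MF and BPR-MLP maximizes

\[
\text{BPR-Opt} \;=\; \sum_{(u,i,j)\in D_S} \ln \sigma\!\left(\hat{x}_{ui} - \hat{x}_{uj}\right) \;-\; \lambda_{\Theta}\,\lVert \Theta \rVert^{2},
\]

where \(D_S\) contains triples in which user \(u\) interacted with item \(i\) but not with item \(j\), \(\sigma\) is the logistic sigmoid, and for BPR-MF the score is the matrix-factorization dot product \(\hat{x}_{ui} = p_u^{\top} q_i\).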
2308.10837#85
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
85
Instruction 3 (Additional API: Billboard): I’m currently analyzing what is popular on the website. Can you help me find the recent trending stuff? It could be anything, like trending news, products, books, movies, music, etc. Give a summarization for me.
Single ReAct Agent: The recent trending products on the web include Apple AirPods Pro, Diamond Dazzle Cleaner Pen, Ugg Slippers, Squishmallows, Winix Air Purifier, ThisWorx Car Vacuum Cleaner, All-Clad Cookware, Eufy Robot Vacuum, Renpho Eye Massager, Crocs, Throw blankets, Chuckit! Sport Dog Ball Launcher, Apple AirTags, Revlon Hair Dryer Brush, Lego Sets, KitchenAid Stand Mixer, Winc Wine Subscription, Saatva Mattress, Home Chef Subscription, and MasterClass Subscription. Unfortunately, we were unable to find specific information on recent trending books. (trending product ✔)
AGENTVERSE (3 agents): Here is a summary of the trending topics on the website:
Music:
1. ’Heat Waves’ by Glass Animals
2. ’We Don’t Talk About Bruno’ by Carolina Gaitan, Mauro Castillo, Adassa, Rhenzy Feliz, Diane Guerrero, Stephanie Beatriz & Encanto Cast
2308.10848#85
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
86
tasks when utilizing non-semantic information as features. This allows us to gain a more comprehensive understanding of the potential of recommendations given by LLM-based models.
Sequential Recommendation. The sequential recommendation task utilizes the user's historical interaction sequences as input to predict the next item. We compare our proposed approaches with representative baselines in the field. Among them, some models aim to model the Markov chain of user interactions by way of neural network architectures like convolutional neural networks, recurrent neural networks, and attention-based modules. Caser (Tang and Wang 2018) employs convolutional neural networks to model user interests. HGN (Ma, Kang, and Liu 2019) adopts hierarchical gating networks to capture user behaviors from both long- and short-term perspectives. GRU4Rec (Hidasi et al. 2016) utilizes a recurrent neural network to model the user's click-history sequence. SASRec (Kang and McAuley 2018) and FDSA (Zhang et al. 2019) use self-attention modules to model feature transition patterns for sequential recommendation, and the former combine
2308.10837#86
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
86
Guerrero, Stephanie Beatriz & Encanto Cast
3. ’Super Gremlin’ by Kodak Black
4. ’Slime You Out’ by Drake Featuring SZA
5. ’Paint The Town Red’ by Doja Cat
6. ’Snooze’ by SZA
Movies:
1. ’No One Will Save You’
2. ’A Haunting in Venice’
3. ’Expend4bles’
Books:
1. ’The Midnight Library’
2. ’The Invisible Life of Addie LaRue’
3. ’The Seven Husbands of Evelyn Hugo’
Products:
1. Laser hair removal
2. Head scarfs
3. Detangling hair brushes
4. Hair wigs
5. Straight-leg jeans
6. Shapewear
7. Athleisure
8. Abdominal crunch straps
9. Swimsuits
10. Power tool accessories
11. Dog toys
12. Shot glasses
13. Costume shoes
14. Slipcovers
15. Wall plates and covers
16. Baby shirts
17. Curtain and drape rings
18. Cabinet knobs and handles
19. Watch accessories
20. Tumblers
21. Model train sets
22. Signage
23. Wedding dresses
24. Table runners
2308.10848#86
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
87
2018) and FDSA (Zhang et al. 2019) use self-attention modules to model feature transition patterns for sequential recommendation, and the former combines RNN-based approaches to retain the sequential properties of items. BERT4Rec (Sun et al. 2019) adopts BERT-style masked language modeling to learn the relations among items from the perspective of bidirectional representations in recommendation. It started to use methods from natural language processing, but BERT did not have a strong semantic understanding capacity at that time. S3-Rec (Zhou et al. 2020) leverages self-supervised objectives to enhance the discovery of correlations among different items and their attributes.
Explanation Generation. We evaluate the task of explanation generation by comparing the performance of several baseline models. Attn2Seq (Dong et al. 2017) and NRT (Li et al. 2017) utilize neural networks to encode attributes of the user and item into vectors and then invoke an attention mechanism or GRU (Cho et al. 2014) to generate reviews conditioned on the attribute vector. PETER (Li, Zhang, and Chen 2021) uses a Transformer architecture
2308.10837#87
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
87
Please note that the popularity of these items can vary based on the region and demographic of the consumers. (music ✔, movie ✔, books ✔, products ✔)
Instruction 4: I’m planning a two-week vacation to Japan next month. Help me plan my itinerary. I want to visit Tokyo, Kyoto, and Osaka. Look for the top tourist attractions in each city, and also suggest the best mode of travel between these cities. Additionally, find out the weather forecast for the month I’ll be visiting.
Single ReAct Agent: The top tourist attractions in Tokyo are the Tokyo Skytree, Senso-ji Temple, Meiji Shrine, Tokyo Disneyland, Ueno Park, and Odaiba. In Kyoto, the top attractions are Nijo Castle, Fushimi-Inari Taisha Shrine, Kinkaku-ji: The Golden Pavilion, Kiyomizu-dera Temple, Sanjūsangen-dō Temple, and Kyoto Imperial Palace. In Osaka, tourists can visit Osaka Castle, Shitennō-ji Temple, Osaka Aquarium Kaiyukan, Tempozan Ferris Wheel and Harbor Village, and Universal Studios Japan. These attractions offer a mix of historical, cultural, and modern experiences for visitors. (tourist attractions ✔, best mode of travel between cities ✖, weather forecast ✖)
2308.10848#87
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
88
AGENTVERSE (3 agents): Your two-week vacation to Japan is shaping up to be quite exciting. In Tokyo, you can visit the Tokyo Skytree, Senso-ji Temple, Meiji Shrine, Tokyo Disneyland and DisneySea, and the Odaiba area. In Kyoto, the top attractions are Nijo Castle, Fushimi-Inari Taisha Shrine, Kinkaku-ji (Golden Pavilion), Kiyomizu-dera Temple, and Sanjūsangen-dō Temple. While in Osaka, you should visit Osaka Castle, Shitennō-ji Temple, Osaka Aquarium Kaiyukan, and Universal Studios Japan. As for transportation, the best way to travel between these cities is by train, specifically the Shinkansen bullet train. You can also consider flights with ANA or JAL, or if you’re on a budget, the night bus or LCC flights by Jetstar and Peach. Regarding the weather, it’s difficult to provide an exact forecast for the next month, but we’ve gathered the average weather data for these cities based on previous years. Please remember to check the weather forecast closer to your departure date for a more accurate prediction. Enjoy your trip! (tourist attractions ✔, best mode of travel between cities ✔, weather forecast ✔)
2308.10848#88
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework \framework that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that \framework framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our codes for \framework will soon be released at \url{https://github.com/OpenBMB/AgentVerse}.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
89
Table 10: The summary of baseline methods for five different task families. Rating Prediction: MF (Koren, Bell, and Volinsky 2009), MLP (Cheng et al. 2016). Direct Recommendation: BPR-MF (Rendle et al. 2009), SimpleX (Mao et al. 2021), BPR-MLP (Cheng et al. 2016). Sequential Recommendation: Caser (Tang and Wang 2018), GRU4Rec (Hidasi et al. 2016), FDSA (Zhang et al. 2019), S3-Rec (Zhou et al. 2020), HGN (Ma, Kang, and Liu 2019), BERT4Rec (Sun et al. 2019), SASRec (Kang and McAuley 2018). Explanation Generation: Attn2Seq (Dong et al. 2017), PETER (Li, Zhang, and Chen 2021), NRT (Li et al. 2017), PETER+. Review Summarization: T0 (Sanh et al. 2022), GPT-2 (Radford et al. 2019). [Figure 3: HR@1 versus LoRA rank r, with data points (2, 0.3709), (4, 0.4276), (8, 0.4965), (16, 0.4989), (32, 0.462).] [A minimal BPR-MF sketch follows this record.]
2308.10837#89
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
89
Instruction 5 I am planning a date with my girlfriend this week, please search for a good movie theater and a restaurant near Tsinghua University in Beijing and recommend a good movie to watch. Please search the web. Single ReAct Agent: Your date near Tsinghua University in Beijing is all set. For dining, you can choose from Lan Yang ShuFang, YiLan ShaoKao, Guan Shi JiChi (Zhongguancun), or Dong Lai Shun (Zhongguancun HaiTu). As for the movie, there are several highly-rated ones showing in 2023. Some of these include M3GAN, Plane, Missing, Infinity Pool, John Wick: Chapter 4, Creed III, Scream VI, Air, Suzume, Rise, Are You There God? It’s Me, Margaret, Showing Up, and The Covenant. Enjoy your date! (restaurant ✔, movie ✔, theater ✖)
2308.10848#89
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AGENTVERSE that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AGENTVERSE framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AGENTVERSE will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
90
[Figure 3: The HR@1 with different rank r of LoRA.] ... modified attention mask. The variant PETER+ takes a hint feature word to augment the process of generating explanations. Review Related. For review summarization, we adopt pre-trained T0 (Sanh et al. 2022) and GPT-2 (Radford et al. 2019) as baselines. The latter model's parameters were obtained from Hugging Face, a large platform for sharing models, datasets, and applications. Further Analysis on the Real-World Dataset. In addition to optimizing recommendation performance, it is also important to understand why large language models like ChatGPT and GPT-4 are able to conduct recommendation tasks effectively in the first place. To explore this further, we provide several real-world case studies in Figure 4, where we systematically probe and dissect the reasoning process of these models when making recommendations, using carefully designed prompt-based queries. This analysis sheds light on the strengths and limitations of relying solely on the knowledge and reasoning capabilities embedded in large pre-trained language models for recommendation tasks, and points towards potential areas for improvement. [A LoRA rank-sweep sketch follows this record.]
2308.10837#90
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
90
AGENTVERSE (3 agents): Your date is all set! For dining, we recommend either Jiasuo Restaurant or Zijingyuan, both located conveniently on the Tsinghua University campus. For your movie, we suggest ‘No Bears’, which is currently popular and highly rated. You can watch it at the Maple Garden Drive-In Cinema, one of the top-rated movie theaters near Tsinghua University. Enjoy your date! (restaurant ✔, movie ✔, theater ✔) Instruction 6 I want to kick off a book club with my friends. Can you tell me the top 5 bestselling books this month, gather the content summary for each, and find online platforms where we can buy or borrow them? Single ReAct Agent: I was unable to find the top 5 bestselling books for October 2023. The websites I visited did not provide the required information. Therefore, I am unable to provide the content summary for each of the top 5 bestselling books and find online platforms where they can be bought or borrowed. (✖)
2308.10848#90
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AGENTVERSE that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AGENTVERSE framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AGENTVERSE will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10848
91
AGENTVERSE (2 agents): The top 5 bestsellers this month can be bought or borrowed from several online platforms. These include OverDrive, which allows you to borrow books for free through your local library or school. Other platforms include the Amazon Kindle Store, Barnes & Noble’s Nook Book Store, Fictionwise, eBooks.com, eReader.com, Google Play Book Store, Powell’s Books, as well as eBook lending and borrowing services like eBookFling, Lendle, and BookLending.com. (online platforms ✔, top 5 bestselling books ✖, content summary ✖) Instruction 7 I am planning to buy a new car. Could you help me compare the features and prices of the latest models of Tesla, Ford, and Toyota? Include details about range, charging time, safety features, and after-sales service. Also, provide a brief analysis of the pros and cons of each car.
2308.10848#91
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AGENTVERSE that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AGENTVERSE framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AGENTVERSE will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
92
[Figure 4 case study, Chinese prompt with English translation; the garbled Chinese original is omitted.] Interaction Sequence (English translation): Based on the known entity sequence: beauty products, home daily necessities, beauty products, home daily necessities, home daily necessities, beauty products, beauty products, home daily necessities, home daily necessities, home daily necessities, snacks, large-sized clothing, shoes, large-sized clothing, large-sized clothing, large-sized clothing, large-sized clothing, beauty products, large-sized clothing. Predict the top 5 entities that the user may interact with next in descending order of probability. Ground Truth: Beauty products. GPT-3.5-turbo Result: Beauty products, Home daily necessities, Large-sized clothing, Snacks, Shoes. GPT-3.5-turbo Explanation: (Chinese original; the English translation continues in the next chunk.) [A prompt-construction sketch follows this record.]
2308.10837#92
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]
2308.10848
92
Single ReAct Agent: The 2023 Tesla Model 3 starts at $41,880 and goes up to $54,880. It comes with standard features such as heated front seats, navigation, and Tesla’s Autopilot semi-autonomous driving system. In terms of safety features, the Model 3 includes standard automated emergency braking, lane-departure warning, and adaptive cruise control. Tesla’s after-sales service is characterized by a vertically integrated format that eliminates third-party intermediaries and relies on over-the-air (OTA) update capabilities and a mobile service fleet. The company’s OTA update capabilities will open up new subscription-based after-sales revenue streams, while its closed-loop system allows access to vehicle data for driving-related feedback loops and predictive maintenance. (Tesla ✔, Other brands ✖)
2308.10848#92
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AGENTVERSE that can collaboratively and dynamically adjust its composition as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that the AGENTVERSE framework can effectively deploy multi-agent groups that outperform a single agent. Furthermore, we delve into the emergence of social behaviors among individual agents within a group during collaborative task accomplishment. In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups. Our code for AGENTVERSE will soon be released at https://github.com/OpenBMB/AgentVerse.
http://arxiv.org/pdf/2308.10848
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
cs.CL
Under review. Code at https://github.com/OpenBMB/AgentVerse/
null
cs.CL
20230821
20231023
[ { "id": "2308.01862" }, { "id": "2307.16789" } ]
2308.10837
93
(Continuation of the Figure 4 case study; the garbled Chinese original is omitted.) GPT-3.5-turbo Result: Beauty products, Home daily necessities, Large-sized clothing, Snacks, Shoes. GPT-3.5-turbo Explanation (English translation): Beauty products: In the known entity sequence, beauty products appeared four times, and alternated. This may indicate that the user is interested in beauty products but has already made some purchases. They may consider making more purchases in the future, .... Large-sized clothing: In the known entity sequence, large-sized clothing appeared six times in a row. This may indicate that the user has already purchased a lot of clothing and may consider buying other types of products next. GPT-4 Result: Large-sized clothing, Beauty products, Home daily necessities, Snacks, Shoes. GPT-4 Explanation: (Chinese original garbled in extraction; the English translation is not recoverable from this chunk.)
2308.10837#93
Leveraging Large Language Models for Pre-trained Recommender Systems
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.
http://arxiv.org/pdf/2308.10837
Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li
cs.IR
13 pages, 4 figures
null
cs.IR
20230821
20230821
[ { "id": "1810.04805" }, { "id": "2305.07001" }, { "id": "2305.19860" }, { "id": "2212.01681" }, { "id": "2305.06566" }, { "id": "2305.07961" }, { "id": "2302.00848" }, { "id": "2305.00447" }, { "id": "2305.02182" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "1511.06939" }, { "id": "2103.10360" }, { "id": "2303.14524" }, { "id": "1711.05101" }, { "id": "2304.10149" }, { "id": "2305.06474" }, { "id": "2306.05817" }, { "id": "2305.16646" }, { "id": "2106.09685" }, { "id": "2305.07622" } ]