doi (string, 10–10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31–31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8–8 chars) | updated (string, 8–8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.11432 | 128 | [71] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022. [72] Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, and Stephen MacNeil. Memory sandbox: Transparent and interactive memory management for conversational agents. arXiv preprint arXiv:2308.01542, 2023.
[73] Sajed Jalil, Suzzana Rafi, Thomas D LaToza, Kevin Moran, and Wing Lam. ChatGPT and software testing education: Promises & perils. In 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pages 4130–4137. IEEE, 2023.
[74] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023. | 2308.11432#128 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes and thus makes it hard for
the agents to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
[75] Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, and He Liang. Cgmi: Configurable general multi-agent interaction framework, 2023.
[76] Oliver P John, Eileen M Donahue, and Robert L Kentle. Big five inventory. Journal of Personality and Social Psychology, 1991.
[77] John A Johnson. Measuring thirty facets of the five factor model with a 120-item public domain inventory: Development of the ipip-neo-120. Journal of research in personality, 51:78–89, 2014.
[78] Sungmin Kang, Juyeon Yoon, and Shin Yoo. Large language models are few-shot testers: Exploring LLM-based general bug reproduction. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pages 2312–2323. IEEE, 2023.
[79] Yeonghun Kang and Jihan Kim. Chatmof: An autonomous ai system for predicting and generating metal-organic frameworks. arXiv preprint arXiv:2308.01423, 2023. | 2308.11432#129 |
2308.11432 | 130 | [80] Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, et al. Mrkl systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445, 2022.
[81] Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023.
[82] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
[83] Grgur Kovač, Rémy Portelas, Peter Ford Dominey, and Pierre-Yves Oudeyer. The socialai school: Insights from developmental psychology towards artificial socio-cultural agents. arXiv preprint arXiv:2307.07871, 2023. | 2308.11432#130 |
2308.11432 | 131 | [84] Ranjay Krishna, Donsuk Lee, Li Fei-Fei, and Michael S Bernstein. Socially situated artificial intelligence enables learning from human interaction. Proceedings of the National Academy of Sciences, 119(39):e2115730119, 2022.
[85] Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, et al. Evaluating human-language model interaction. arXiv preprint arXiv:2212.09746, 2022.
[86] Chao Li, Xing Su, Chao Fan, Haoying Han, Cong Xue, and Chunmo Zheng. Quantifying the impact of large language models on collective opinion dynamics. arXiv preprint arXiv:2308.03313, 2023.
[87] Cheng Li, Jindong Wang, Kaijie Zhu, Yixuan Zhang, Wenxin Hou, Jianxun Lian, and Xing Xie. Emotionprompt: Leveraging psychology for large language models enhancement via emotional stimulus. arXiv preprint arXiv:2307.11760, 2023. | 2308.11432#131 |
2308.11432 | 132 | [88] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023.
[89] Haonan Li, Yu Hao, Yizhuo Zhai, and Zhiyun Qian. The hitchhiker's guide to program analysis: A journey with large language models. arXiv preprint arXiv:2308.00245, 2023. [90] Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. Api-bank: A benchmark for tool-augmented LLMs. arXiv preprint arXiv:2304.08244, 2023. [91] Siyu Li, Jin Yang, and Kui Zhao. Are you in a masquerade? exploring the behavior and impact of large language model driven social bots in online social networks. arXiv preprint arXiv:2307.10337, 2023. | 2308.11432#132 |
2308.11432 | 133 | [92] Xinnian Liang, Bing Wang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, and Zhoujun Li. Unleashing infinite-length input capacity for large-scale language models with self-controlled memory system. arXiv preprint arXiv:2304.13343, 2023.
[93] Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, et al. Taskmatrix.ai: Completing tasks by connecting foundation models with millions of apis. arXiv preprint arXiv:2303.16434, 2023.
[94] Yuanzhi Liang, Linchao Zhu, and Yi Yang. Tachikuma: Understanding complex interactions with multi-character and novel objects by large language models. arXiv preprint arXiv:2307.12573, 2023.
[95] Mark Liffiton, Brad Sheese, Jaromir Savelka, and Paul Denny. Codehelp: Using large language models with guardrails for scalable support in programming classes. arXiv preprint arXiv:2308.06921, 2023. | 2308.11432#133 |
2308.11432 | 134 | [96] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[97] Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. arXiv preprint arXiv:2305.17390, 2023. [98] Jessy Lin, Nicholas Tomlin, Jacob Andreas, and Jason Eisner. Decision-oriented dialogue for
human-ai collaboration. arXiv preprint arXiv:2305.20076, 2023.
[99] Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, and Qin Chen. Agentsims: An open-source sandbox for large language model evaluation. arXiv preprint arXiv:2308.04026, 2023.
| 2308.11432#134 |
2308.11432 | 135 |
[100] Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023.
[101] Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. Chain of hindsight aligns language models with feedback. arXiv preprint arXiv:2302.02676, 3, 2023.
[102] Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023.
[103] Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. Agentbench: Evaluating LLMs as agents. arXiv preprint arXiv:2308.03688, 2023. | 2308.11432#135 |
2308.11432 | 136 | [104] Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, et al. BOLAA: Benchmarking and orchestrating LLM-augmented autonomous agents. arXiv preprint arXiv:2308.05960, 2023. [105] Zilin Ma, Yiyang Mei, and Zhaoyuan Su. Understanding the benefits and challenges of using large language model-based conversational agents for mental well-being support. arXiv preprint arXiv:2307.15810, 2023.
[106] Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang. Memory-assisted prompt editing to improve GPT-3 after deployment. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2833–2861, 2022. | 2308.11432#136 |
2308.11432 | 137 | [107] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
[108] Zhao Mandi, Shreeya Jain, and Shuran Song. Roco: Dialectic multi-robot collaboration with large language models. arXiv preprint arXiv:2307.04738, 2023.
[109] Jordan K Matelsky, Felipe Parodi, Tony Liu, Richard D Lange, and Konrad P Kording. A large language model-assisted education tool to provide feedback on open-ended responses. arXiv preprint arXiv:2308.02439, 2023.
[110] Nikhil Mehta, Milagro Teruel, Patricio Figueroa Sanz, Xin Deng, Ahmed Hassan Awadallah, and Julia Kiseleva. Improving grounded language understanding in a collaborative environment by interacting with agents through help feedback. arXiv preprint arXiv:2304.10750, 2023. | 2308.11432#137 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
[111] Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented language models: a survey. arXiv preprint arXiv:2302.07842, 2023.
[112] Ning Miao, Yee Whye Teh, and Tom Rainforth. SelfCheck: Using LLMs to zero-shot check their own step-by-step reasoning. arXiv preprint arXiv:2308.00436, 2023.
[113] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[114] Ali Modarressi, Ayyoob Imani, Mohsen Fayyaz, and Hinrich Schütze. RET-LLM: Towards a general read-write memory for large language models. arXiv preprint arXiv:2305.14322, 2023.
[115] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
[116] Nathalia Nascimento, Paulo Alencar, and Donald Cowan. Self-adaptive large language model (LLM)-based multiagent systems. arXiv preprint arXiv:2307.06187, 2023.
[117] Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, and Jun Deguchi. SimplyRetrieve: A private and lightweight retrieval-centric generative AI tool. arXiv preprint arXiv:2308.03983, 2023.
[118] Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, and Roy Fox. Do embodied agents dream of pixelated sheep?: Embodied decision making using language guided world modelling. arXiv preprint arXiv:2301.12050, 2023.
[119] Oluwatosin Ogundare, Srinath Madasu, and Nathanial Wiggins. Industrial engineering with large language models: A case study of ChatGPT's performance on oil & gas problems. arXiv preprint arXiv:2304.14354, 2023.
[120] OpenAI. GPT-4 technical report, 2023.
[121] Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. In the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23), New York, NY, USA, 2023. Association for Computing Machinery.
[122] Joon Sung Park, Lindsay Popowski, Carrie Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Social simulacra: Creating populated prototypes for social computing systems. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pages 1–18, 2022.
[123] Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model connected with massive APIs. arXiv preprint arXiv:2305.15334, 2023.
[124] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.
[125] Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023.
[126] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv preprint arXiv:2307.16789, 2023.
[127] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[128] Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Tellex. Planning with large language models via corrective re-prompting. arXiv preprint arXiv:2211.09935, 2022.
[129] Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, and Niko Suenderhauf. SayPlan: Grounding large language models using 3D scene graphs for scalable task planning. arXiv preprint arXiv:2307.06135, 2023.
[130] Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Xingyu Zeng, and Rui Zhao. TPTU: Task planning and tool usage of large language model-based AI agents. arXiv preprint arXiv:2308.03427, 2023.
[131] Mustafa Safdari, Greg Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. Personality traits in large language models. arXiv preprint arXiv:2307.00184, 2023.
[132] Swarnadeep Saha, Peter Hase, and Mohit Bansal. Can language models teach weaker agents? Teacher explanations improve students via theory of mind. arXiv preprint arXiv:2306.09299, 2023.
[133] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
[134] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[135] Dale Schuurmans. Memory augmented large language models are computationally universal. arXiv preprint arXiv:2301.04589, 2023.
[136] Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, and Yulia Tsvetkov. Minding language models' (lack of) theory of mind: A plug-and-play multi-character belief tracker. arXiv preprint arXiv:2306.00924, 2023.
[137] Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, and Ming Jin. Algorithm of thoughts: Enhancing exploration of ideas in large language models. arXiv preprint arXiv:2308.10379, 2023.
[138] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580, 2023.
[139] Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
[140] Yubo Shu, Hansu Gu, Peng Zhang, Haonan Zhang, Tun Lu, Dongsheng Li, and Ning Gu. RAH! RecSys-Assistant-Human: A human-central recommendation framework with large language models. arXiv preprint arXiv:2308.09904, 2023.
[141] Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M Sadler, Wei-Lun Chao, and Yu Su. LLM-Planner: Few-shot grounded planning for embodied agents with large language models. arXiv preprint arXiv:2212.04088, 2022.
[142] Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, and Sujian Li. RestGPT: Connecting large language models with real-world RESTful APIs, 2023.
] |
2308.11432 | 147 | [143] Ruoxi Sun, Sercan O Arik, Hootan Nakhost, Hanjun Dai, Rajarishi Sinha, Pengcheng Yin, and Tomas Pfister. Sql-palm: Improved large language modeladaptation for text-to-sql. arXiv preprint arXiv:2306.00739, 2023.
[144] DÃdac SurÃs, Sachit Menon, and Carl Vondrick. ViperGPT: Visual inference via python execution for reasoning. arXiv preprint arXiv:2303.08128, 2023.
[145] Melanie Swan, Takashi Kido, Eric Roland, and Renato P dos Santos. Math agents: Computational infrastructure, mathematical embedding, and genomics. arXiv preprint arXiv:2307.02502, 2023.
[146] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[147] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[148] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023.
[149] Lei Wang. RecAgent. https://github.com/RUC-GSAI/YuLan-Rec, 2023.
[150] Lei Wang, Jingsen Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, and Ji-Rong Wen. RecAgent: A novel simulation paradigm for recommender systems. arXiv preprint arXiv:2306.02552, 2023.
[151] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[152] Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, and Yingzhen Yang. RecMind: Large language model powered agent for recommendation. arXiv preprint arXiv:2308.14296, 2023.
[153] Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. Aligning large language models with human: A survey. arXiv preprint arXiv:2307.12966, 2023.
[154] Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023.
[155] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[156] Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, and Navid Ghaffarzadegan. Epidemic modeling with generative agents. arXiv preprint arXiv:2307.04986, 2023.
[157] Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, and Thomas Funkhouser. TidyBot: Personalized robot assistance with large language models. arXiv preprint arXiv:2305.05658, 2023.
[158] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023.
[159] Yue Wu, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Yuanzhi Li, Tom Mitchell, and Shrimai Prabhumoye. Plan, eliminate, and track – language models are good teachers for embodied agents. arXiv preprint arXiv:2305.02412, 2023.
[160] Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, and Haibin Yan. Embodied task planning with large language models. arXiv preprint arXiv:2307.01848, 2023.
[161] Yuchen Xia, Manthan Shenoy, Nasser Jazdi, and Michael Weyrich. Towards autonomous system: flexible modular production system enhanced with large language model agents. arXiv preprint arXiv:2304.14721, 2023.
[162] Jiannan Xiang, Tianhua Tao, Yi Gu, Tianmin Shu, Zirui Wang, Zichao Yang, and Zhiting Hu. Language models meet world models: Embodied experiences enhance language models. arXiv preprint arXiv:2305.10626, 2023.
[163] Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, and Dongkuan Xu. Gentopia: A collaborative platform for tool-augmented LLMs. arXiv preprint arXiv:2308.04030, 2023.
[164] Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. ReWOO: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323, 2023.
[165] Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, and Xing Xie. Recommender AI agent: Integrating large language models for interactive recommendations. arXiv preprint arXiv:2308.16505, 2023.
[166] Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, and Xia Hu. Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond. arXiv preprint arXiv:2304.13712, 2023.
[167] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. MM-REACT: Prompting ChatGPT for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.
[168] Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. WebShop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744–20757, 2022.
[169] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
[170] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
[171] Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, et al. Retroformer: Retrospective large language agents with policy gradient optimization. arXiv preprint arXiv:2308.02151, 2023.
[172] Ceyao Zhang, Kaijie Yang, Siyi Hu, Zihao Wang, Guanghe Li, Yihang Sun, Cheng Zhang, Zhaowei Zhang, Anji Liu, Song-Chun Zhu, Xiaojun Chang, Junge Zhang, Feng Yin, Yitao Liang, and Yaodong Yang. ProAgent: Building proactive cooperative AI with large language models, 2023.
[173] Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, and Mingchen Cai. PREFER: Prompt ensemble learning via feedback-reflect-refine. arXiv preprint arXiv:2308.12033, 2023.
[174] Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, and Kai Yu. Large language model is semi-parametric reinforcement learning agent. arXiv preprint arXiv:2306.07929, 2023.
[175] Danyang Zhang, Lu Chen, Zihan Zhao, Ruisheng Cao, and Kai Yu. Mobile-Env: An evaluation platform and benchmark for interactive agents in LLM era. arXiv preprint arXiv:2305.08144, 2023.
[176] Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tianmin Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models. arXiv preprint arXiv:2307.02485, 2023.
[177] Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang. ExpeL: LLM agents are experiential learners, 2023.
[178] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[179] Wanjun Zhong, Lianghong Guo, Qiqi Gao, and Yanlin Wang. MemoryBank: Enhancing large language models with long-term memory. arXiv preprint arXiv:2305.10250, 2023.
[180] Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. WebArena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854, 2023.
[181] Wei Zhou, Xiangyu Peng, and Mark Riedl. Dialogue shaping: Empowering agents through NPC interaction. arXiv preprint arXiv:2307.15833, 2023.
2308.11432 | 157 | [182] Xuanhe Zhou, Guoliang Li, and Zhiyuan Liu. LLM as DBA. arXiv preprint arXiv:2308.05481, 2023.
[183] Andrew Zhu, Lara J Martin, Andrew Head, and Chris Callison-Burch. Calypso: LLMs as dungeon masters' assistants. arXiv preprint arXiv:2308.07540, 2023.
[184] Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, et al. Ghost in the Minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. arXiv preprint arXiv:2305.17144, 2023.
[185] Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R Ashley, Róbert Csordás, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, et al. Mindstorms in natural language-based societies of mind. arXiv preprint arXiv:2305.17066, 2023. | 2308.11432#157 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 158 | [186] Terry Yue Zhuo, Zhuang Li, Yujin Huang, Yuan-Fang Li, Weiqing Wang, Gholamreza Haffari, and Fatemeh Shiri. On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on Codex. arXiv preprint arXiv:2301.12868, 2023. [187] Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. Can large language models transform computational social science? arXiv preprint arXiv:2305.03514, 2023.
| 2308.11432#158 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.10837 | 0 | arXiv:2308.10837v1 [cs.IR] 21 Aug 2023
# Leveraging Large Language Models for Pre-trained Recommender Systems
Zhixuan Chu*1, Hongyan Hao*1, Xin Ouyang1, Simeng Wang1, Yan Wang1, Yue Shen1, Jinjie Gu1, Qing Cui1, Longfei Li1, Siqiao Xue1, James Y Zhang1, Sheng Li2 1Ant Group 2University of Virginia {chuzhixuan.czx, hongyanhao.hhy, xin.oyx, simeng.wsm, luli.wy, zhanying, jinjie.gujj, cuiqing.cq, longyao.llf, siqiao.xsq, james.z}@antgroup.com, [email protected]
# Abstract | 2308.10837#0 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 0 | arXiv:2308.10848v3 [cs.CL] 23 Oct 2023
# AGENTVERSE: FACILITATING MULTI-AGENT COLLABORATION AND EXPLORING EMERGENT BEHAVIORS
Weize Chen*, Yusheng Su*, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou. 1 Department of Computer Science and Technology, Tsinghua University; 2 School of Economics and Management, Tsinghua University; 3 School of Computer Science, Beijing University of Posts and Telecommunications; 4 Pattern Recognition Center, WeChat AI, Tencent Inc. [email protected], [email protected]
# ABSTRACT | 2308.10848#0 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 1 | # Abstract
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, effectively integrating LLM's commonsense knowledge and reasoning abilities into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models. | 2308.10837#1 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 1 | # ABSTRACT
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AGENTVERSE that can effectively orchestrate a collaborative group of expert agents as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AGENTVERSE can proficiently deploy multi-agent groups that outperform a single agent. Extensive experiments on text understanding, reasoning, coding, tool utilization, and embodied AI confirm the effectiveness of AGENTVERSE. Moreover, our analysis of agent interactions within AGENTVERSE reveals the emergence of specific collaborative behaviors, contributing to heightened group efficiency. Our code has been released at https://github.com/OpenBMB/AgentVerse/.
# INTRODUCTION | 2308.10848#1 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 2 | # Introduction
The realm of recommendation has gained considerable attention in recent years due to its ability to drive business growth and enhance user engagement. Recent advancements in recommender systems have shifted towards incorporating diverse information and catering to a broader range of application scenarios, rather than focusing on task-specific architectures. This shift has been driven by the need for more comprehensive and personalized recommendations, as well as the availability of new data sources and knowledge (Geng et al. 2022; Chu et al. 2022; Hui et al. 2022; Sheu et al. 2021; Li and Zhao 2021; Jiang et al. 2022; Xue et al. 2021). In addition, with the advent of the Large Language Model (LLM) (Radford et al. 2019; Brown et al. 2020; Ouyang et al. 2022), we have witnessed an unprecedented surge in the capabilities of natural language processing. The power of LLM lies in its ability to understand and generate human-like language. LLM has also enabled the extraction of implicit knowledge from text data (Gu et al. 2023; Yoneda et al. 2023; Zhao et al. 2023). This newfound capability of LLM has opened up exciting avenues for the integration of semantic information into recommender systems and provides a wealth of insights into user preferences and behaviors (Shi | 2308.10837#2 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 2 | The pursuit of creating intelligent and autonomous agents that can seamlessly assist humans and operate in real-world settings has been a foundational goal in artificial intelligence (Wooldridge & Jennings, 1995; Minsky, 1988; Bubeck et al., 2023). The recent advance of Large Language Models (LLMs) (OpenAI, 2023a; Anil et al., 2023; Touvron et al., 2023b) has created newfound avenues in this domain. These LLMs, especially GPT-4 (OpenAI, 2023a), are particularly adept in comprehending human intent and executing commands. They have demonstrated remarkable proficiency in domains such as language understanding, vision (OpenAI, 2023b), and coding (Bubeck et al., 2023). By harnessing the power of LLMs, autonomous agents can make more nuanced decisions and perform actions with an unprecedented degree of autonomy (Zhou et al., 2023). Agents like AutoGPT (Richards & et al., 2023), BabyAGI (Nakajima, 2023), and AgentGPT (Reworkd, 2023), are inspiring examples. Furthermore, recent research has endowed autonomous agents with more human-analogous cognitive mechanisms, | 2308.10848#2 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 3 | et al. 2023; Zhao, Tan, and Mei 2022). As a result, incorporating LLM into recommender systems has become a crucial step toward providing a powerful and comprehensive paradigm for recommendation tasks. In the following, we will discuss the new generation of recommendation model paradigms from two directions, i.e., the unified pre-trained recommendation model and the combination of LLM and recommendation model. | 2308.10837#3 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 3 | AgentGPT (Reworkd, 2023), are inspiring examples. Furthermore, recent research has endowed autonomous agents with more human-analogous cognitive mechanisms, spanning from reflection (Yao et al., 2023b; Shinn et al., 2023), task decomposition (Wei et al., 2022b; Yao et al., 2023a), and tool utilization (Schick et al., 2023b; Qin et al., 2023a;b; Qian et al., 2023b). These advancements edge us closer to realizing the concept of artificial general intelligence (AGI) (Goertzel & Pennachin, 2007; Clune, 2019) that can generalize across a broader range of tasks. | 2308.10848#3 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 4 | On the one hand, training a pre-trained recommendation model can help overcome the limitations of existing recommendation approaches that require designing task-specific architectures and training objectives. Traditional recommendation methods have focused on a single task, such as personalized product recommendations, contextual advertising, customer segmentation, and so on, making them less adaptable to new tasks and limiting their ability to generalize to new domains. By training a pre-trained recommendation model, we can leverage the power of pre-trained models to learn generalizable representations of user behavior and product characteristics (Tsai et al. 2023; Zhao, Tan, and Mei 2022) that can be applied to a variety of recommendation tasks. Overall, a pre-trained recommendation model provides a flexible and scalable solution that can be adapted to a variety of recommendation tasks. Since recommendation tasks usually share a common user-item pool, features, behavioral sequences, and other contextual information, we believe it is promising to merge even more recommendation tasks into a unified framework so that they can implicitly transfer knowledge to benefit each other and enable generalization to other unseen tasks (Xie et al. 2022). | 2308.10837#4 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 4 | However, complex real-world tasks often require cooperation among individuals to achieve better effectiveness. Throughout history, numerous studies have delved into methods for enhancing collaboration among humans to improve work efficiency and effectiveness (Woolley et al., 2010; Fehr & Gächter, 2000). More recently, with the evolution of autonomous agents towards AGI, extensive research conceptualizes the assemblies of agents as a society or group (Li et al., 2023), and focuses on exploring the potential of their cooperation. For example, Park et al. (2023) found emergent
*The first two authors contributed equally. † Corresponding author.
Figure 1: An illustration of the AGENTVERSE. (The figure depicts Expert Recruitment, Collaborative Decision-Making, Action Execution, and Evaluation, with agents acting on the environment and evaluation feedback on each new state looping back toward the goal.) | 2308.10848#4 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 5 | On the other hand, integrating LLMs into recommendation systems has several significant advantages. These advantages are linked to the LLM's capabilities in thinking, reasoning, and discovering implicit relationships within textual data based on the entailment of wealthy background knowledge and logical chains. (1) By leveraging the semantic information in natural language data, LLMs can help the recommendation system understand and infer the relationship between user features and behavioral sequences and among entities in behavioral sequences. This allows the recommendation system to understand the user's needs and preferences in a more comprehensive way. (2) Another benefit of integrating LLMs into recommendation systems is the ability to leverage the implicit knowledge that is hidden in
*These authors contributed equally. | 2308.10837#5 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 5 | Figure 1: An illustration of the AGENTVERSE.
social behaviors in multi-agent life simulation. Du et al. (2023); Wang et al. (2023b); Zhang et al. (2023a); Qian et al. (2023a); Chan et al. (2023) also underscored the enhanced decision-making of collaborating agents during collaborative problem-solving. However, a limitation in these studies is their narrow focus on specific and limited tasks, leaving the generalizability of their findings uncertain. An additional constraint is their static approach to agent collaboration, where agents' roles and capabilities remain rigid, hindering adaptability. | 2308.10848#5 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 6 | *These authors contributed equally.
the models. LLMs are trained on vast amounts of textual data and can help to understand the relationships between different concepts and ideas. By incorporating LLMs into recommendation systems, this implicit knowledge can be used to generate more divergent and logical recommendations. This can lead to more creative and unexpected recommendations that the user may not have considered otherwise. (3) By leveraging the natural language processing capabilities of LLMs, recommendation tasks that previously required separate specialized systems can now be integrated into a unified framework. The pretrained knowledge and few-shot learning abilities of LLMs allow recommendation models to be rapidly adapted to new domains with limited data; a toy few-shot prompt illustrating this appears after this record. Overall, the natural language processing power and versatility of LLMs can help merge more recommendation tasks into a unified framework. Furthermore, a comprehensive survey on recommendations and LLMs is provided in the Appendix. This survey covers the motivation behind them, current development, and challenges. | 2308.10837#6 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
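As a toy illustration of the few-shot adaptation mentioned in chunk 2308.10837#6 above, a single prompt template can cast different recommendation sub-tasks as plain text completion. The template wording below is ours, not taken from the paper.

```python
# Toy few-shot prompt casting two recommendation sub-tasks as one text
# interface (illustration only; the template wording is an assumption).

FEW_SHOT = """Task: next-item prediction
History: hiking boots -> trekking poles
Next item: camping tent

Task: rating prediction
Review: "battery died after two days"
Rating (1-5): 2

Task: {task}
{context}
{answer_field}:"""

def build_prompt(task, context, answer_field):
    # Fill the shared template so one LLM interface serves several tasks.
    return FEW_SHOT.format(task=task, context=context, answer_field=answer_field)

print(build_prompt("next-item prediction",
                   "History: running shoes -> sports socks",
                   "Next item"))
```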
2308.10848 | 6 | To address this problem, we introduce AGENTVERSE. This general multi-agent framework simulates the problem-solving procedures of human groups, and allows for dynamic adjustment of group members based on current progress. Specifically, AGENTVERSE splits the problem-solving process into four pivotal stages as shown in Figure 1: (1) Expert Recruitment: Determine and adjust the agent group's composition based on the ongoing problem-solving progression. (2) Collaborative Decision-Making: Engage the selected agents in joint discussions to devise problem-solving strategies. (3) Action Execution: Agents interact with their environment to implement the devised actions. (4) Evaluation: Assess the differences between the current state and desired outcomes. If the current state is unsatisfactory, feedback is given to the next iteration for further refinement. A minimal code sketch of this loop appears after this record.
We conduct extensive experiments and case studies in diverse aspects including text understanding, reasoning, coding, tool utilization and embodied AI to show the effectiveness of AGENTVERSE. Additionally, we highlight the social behaviors that emerge from the multi-agent collaboration, and discuss their advantages and potential risks. In summary, our contributions are:
• Inspired by the collaborative process of a human team, we propose AGENTVERSE as an effective framework for promoting collaboration among multiple agents in problem-solving. | 2308.10848#6 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
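The four-stage loop described in chunk 2308.10848#6 above can be made concrete with a minimal runnable sketch. This is our illustration only: the helpers (recruit_experts, make_decision, execute_actions, evaluate) are trivial stand-ins for the LLM-backed components, not the authors' released implementation.

```python
# Minimal sketch of the AGENTVERSE four-stage loop (illustration only;
# the helpers are trivial stand-ins for LLM-backed components).

def recruit_experts(goal, state, feedback):
    # (1) Expert Recruitment: adjust the group to the current progress.
    return ["planner", "engineer"] if feedback else ["planner"]

def make_decision(agents, goal, state):
    # (2) Collaborative Decision-Making: agents jointly devise a plan.
    return f"{' + '.join(agents)} plan for: {goal}"

def execute_actions(plan, state):
    # (3) Action Execution: apply the plan to the environment state.
    return state + [plan]

def evaluate(goal, state):
    # (4) Evaluation: check the new state against the goal; if it falls
    # short, return feedback for the next iteration.
    done = len(state) >= 2
    return done, None if done else "refine the plan"

def agentverse_round(goal, max_iters=5):
    state, feedback = [], None
    for _ in range(max_iters):
        agents = recruit_experts(goal, state, feedback)
        plan = make_decision(agents, goal, state)
        state = execute_actions(plan, state)
        done, feedback = evaluate(goal, state)
        if done:
            break
    return state

print(agentverse_round("write a sorting function"))
```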
2308.10837 | 7 | However, constructing a robust and integrated recommendation system that fully utilizes large language models' immense knowledge and reasoning capacities poses several key challenges. Directly training a pre-trained recommendation model from scratch is not only a waste of time and data collection efforts but also lacks general common sense and reasoning capabilities that underpin modern large language models. Meanwhile, directly fine-tuning a pre-trained LLM model on recommendation data also has drawbacks. Recommendation data has distinct characteristics, such as fixed entities and sequential user behaviors, that differ from the raw text corpora used to train language models. As such, fine-tuning may erase much of the capabilities specific to recommendation tasks. Therefore, we propose a novel pre-trained recommendation paradigm (RecSysLLM) based on the pre-trained large language model through unique designs for recommendation in three phases, i.e., data phase, training phase, and inference phase. Our model retains the reasoning ability and rich knowledge contained in large language models while integrating the recommendation-specific knowledge. It directly inherits the parameters and framework of the original large language model but also designs and ex- | 2308.10837#7 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 7 | • Inspired by the collaborative process of a human team, we propose AGENTVERSE as an effective framework for promoting collaboration among multiple agents in problem-solving.
• We conduct extensive experiments to show that AGENTVERSE effectively improves the agents' understanding, reasoning, coding, and tool-utilizing capabilities, as well as their potential in embodied AI. • In the multi-agent collaboration, especially within tool utilization and Minecraft game playing, agents manifest certain emergent behaviors. For example, (1) volunteer behaviors, characterized by agents offering assistance to peers, thus improving team efficiency; (2) conformity behaviors, where agents adjust their deviated behaviors to align with the common goal under the criticism from others; (3) destructive behaviors, occasionally leading to undesired and detrimental outcomes.
# 2 AGENTVERSE FRAMEWORK
A problem-solving process is a sequence of iterative stages within a human group (Bransford & Stein, 1993). Initially, the group assesses the difference between the current state and the desired goal, dynamically adjusting its composition to enhance collaboration in decision-making, and subsequently
| 2308.10848#7 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 8 | els while integrating the recommendation-specific knowledge. It directly inherits the parameters and framework of the original large language model but also designs and extends some mechanisms in the data phase (textualization and sampling), training phase (mask, position, and ordering), and inference phase (dynamic position infilling); a toy sketch of these phases appears after this record. These modifications do not discard the tokenization, parameters, structure, or previously learned knowledge in the LLM. On this basis, recommendation data is used to fine-tune it. The significant advantage of this pre-trained recommendation model is that it can utilize the reasoning capabilities and rich knowledge of large language models while incorporating domain-specific knowledge of the recommendation system through parameter-efficient fine-tuning of user profiles and behavioral sequences data. Another crucial benefit of this model is that it can be easily adapted to different downstream recommendation sub-tasks. We evaluate the proposed model on extensive benchmark datasets and real-world scenarios. The experimental results demonstrate its effectiveness in improving the quality of recommendations. Overall, our proposed pre-trained recommendation model | 2308.10837#8 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 8 |
executing well-informed actions. In order to enhance the effectiveness of an autonomous multi-agent group in achieving its goals, we simulate the problem-solving processes of a human group to propose the AGENTVERSE framework, which is composed of four crucial stages: Expert Recruitment, Collaborative Decision-Making, Action Execution, and Evaluation, as shown in Figure 1. The entire process can be modeled as a Markov decision process (MDP), characterized as a tuple (S, A, T, R, G). This encompasses the autonomous agent and environment state space S, solution and action space A, transition function T : S × A → S, reward function R, and goal space G.
2.1 EXPERT RECRUITMENT | 2308.10848#8 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
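The four-stage pipeline above can be made concrete with a short sketch. The following is a hypothetical Python rendering of one AGENTVERSE round over the MDP (S, A, T, R, G); it is not the authors' released implementation, and `recruit`, `decide`, `execute`, and `evaluate` are placeholder callables for the four stages.

```python
def agentverse(state, goal, recruit, decide, execute, evaluate, max_rounds=5):
    """One possible control loop for the four AGENTVERSE stages (a sketch)."""
    feedback = None
    for _ in range(max_rounds):
        experts = recruit(goal, feedback)        # expert recruitment: M = M_r(g)
        decision = decide(experts, state, goal)  # collaborative decision-making: A
        state = execute(state, decision)         # action execution: s_new = T(s_old, A)
        done, feedback = evaluate(state, goal)   # evaluation: r = R(s_new, g)
        if done:                                 # goal g reached; stop iterating
            break
    return state
```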
2308.10837 | 9 | provides a promising approach for building recommendation systems that are efficient, effective, and unified.
# RecSysLLM Pretraining Mechanism
To fully take advantage of LLMs and domain knowledge in recommendation tasks, we need to modify and fine-tune the existing LLM to obtain a pre-trained recommendation model. However, conventional large language models are trained on general knowledge and coherent corpora, and the framework of the model is not designed for behavioral sequence data and recommendation tasks. To address these two points, we make modifications in three phases, i.e., the data, training, and inference phases, to transform a conventional pre-trained language model into a pre-trained recommendation model. The whole framework is illustrated in Figure 1. This pre-trained recommendation model has been employed in real-world applications in Chinese scenarios, so we take the GLM (Du et al. 2021), which is bilingual in Chinese and English, as an example to introduce the RecSysLLM pretraining mechanism. Our model can also be adapted to other large language models with minor modifications.
# Data Phase | 2308.10837#9 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 9 | 2.1 EXPERT RECRUITMENT
The Expert Recruitment stage determines the composition of a multi-agent group, playing an important role in deciding the upper bound of the group's capabilities. Empirical evidence suggests that diversity within human groups introduces varied viewpoints, enhancing the group's performance across different tasks (Woolley et al., 2015; Phillips & O'Reilly, 1998). Parallel findings from recent research suggest that designating specific roles for autonomous agents, similar to recruiting experts to form a group, can augment their efficacy (Li et al., 2023; Salewski et al., 2023; Qian et al., 2023a). Current methodologies for assigning role descriptions to autonomous agents predominantly involve manual assignment, necessitating prior knowledge and understanding of the task. Consequently, their scalability remains ambiguous, especially in the face of diverse and intricate problem contexts. | 2308.10848#9 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 10 | # Data Phase
In the data phase, textualizing tabular data is often the easiest and most straightforward approach for applying large language models. For the pre-training of RecSysLLM, we first textualize conventional tabular data, such as user features stored in a table with rows and columns, into text. Since large language models are originally trained on textual data, text-based features can be easily combined with text-based behavioral sequences and other text information, which helps our model better capture the relationship between features and behavioral sequences. In addition, textualizing tabular data allows for greater flexibility in how the data are used in the following tasks. | 2308.10837#10 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
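To illustrate the textualization step just described, here is a minimal sketch that renders one tabular user-feature row as text; the column names and sentence template are assumptions for illustration, not the paper's actual schema.

```python
def textualize_user(row: dict) -> str:
    """Turn one row of a user-features table into plain text (hypothetical schema)."""
    interests = ", ".join(row["interests"])
    return (f"User {row['user_id']} is a {row['age']}-year-old {row['gender']} "
            f"from {row['city']} who is interested in {interests}.")

# Example usage with a made-up row:
print(textualize_user({"user_id": "u42", "age": 31, "gender": "female",
                       "city": "Hangzhou", "interests": ["sci-fi movies", "running"]}))
```

The resulting sentence can then be concatenated with textualized behavioral sequences, so feature and behavior information share one token space.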
2308.10848 | 10 | In view of this, AGENTVERSE automates expert recruitment to make agent configuration more scalable. For a given goal g ∈ G, a particular agent Mr is prompted as the "recruiter", similar to a human resource manager. Instead of relying on pre-defined expert descriptions, Mr dynamically generates a set of expert descriptions based on g. The different agents prompted with these different expert descriptions then form an expert group M = Mr(g) on the given goal g. Notably, the composition of a multi-agent group will be dynamically adjusted based on feedback from the evaluation stage (Section 2.4). This allows AGENTVERSE to employ the most suitable group based on the current state to make better decisions in future rounds.
2.2 COLLABORATIVE DECISION-MAKING
This stage engages expert agents in collaborative decision-making. To facilitate effective decision-making, previous research has investigated the impact of different communication structures among agents (Chan et al., 2023; Zhang et al., 2023b; Wu et al., 2023). We focus on two typical communication structures: the horizontal structure and the vertical structure.
| 2308.10848#10 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
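A minimal sketch of this automated recruitment is given below. It assumes a generic `llm(prompt) -> str` callable; the prompt wording and helper names are hypothetical, not the paper's actual prompts.

```python
RECRUITER_PROMPT = ("You are a human-resource manager. Given the goal below{hint}, "
                    "list {n} expert role descriptions, one per line.\nGoal: {goal}")

def recruit_experts(llm, goal: str, n: int = 3, feedback: str | None = None):
    """M = M_r(g): ask a recruiter agent for expert descriptions, optionally
    conditioned on evaluation feedback from the previous round."""
    hint = f" and the evaluation feedback '{feedback}'" if feedback else ""
    reply = llm(RECRUITER_PROMPT.format(goal=goal, n=n, hint=hint))
    # Each non-empty line of the reply becomes one expert role description.
    return [line.strip("-• ").strip() for line in reply.splitlines() if line.strip()]
```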
2308.10837 | 11 | Compared with ordinary language texts, the training texts in a recommendation system should take into account the interests and preferences of users from different periods (Yu et al. 2019). Long-term preferences are usually stable and reflect the general preferences of a user. These preferences do not change frequently over time, but they lack timeliness and may not reflect current interests. On the other hand, short-term preferences tend to change frequently over time and are more reflective of a user's current interests. We aim to use preferences from different periods to provide accurate and relevant recommendations to users, which balances the user's general interests with their current needs. Therefore, we sample behavioral sequences from long-term preferences (10%), medium-term preferences (30%), and short-term preferences (60%). Long-term preferences capture the user's preferences that have remained consistent for an extended period of time, typically spanning several months or years. Medium-term preferences capture the user's preferences that have developed and changed over a shorter period of time, typically spanning several weeks or months. Short-term preferences can improve recommendation accuracy by providing the system with the user's most recent preferences, spanning several days or hours. | 2308.10837#11 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
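The 10%/30%/60% mixture described above amounts to recency-bucketed sampling. Below is a minimal sketch under the assumption that the three horizons are approximated by equal thirds of a user's chronologically sorted history; the paper's exact window boundaries are not specified here.

```python
import random

def sample_preference_bucket(events):
    """events: list of (timestamp, item) pairs, sorted oldest -> newest."""
    n = len(events)
    buckets = [events[: n // 3],             # long-term: oldest third
               events[n // 3: 2 * n // 3],   # medium-term: middle third
               events[2 * n // 3:]]          # short-term: most recent third
    # Draw a training-sequence source with probabilities 10% / 30% / 60%.
    return random.choices(buckets, weights=[0.1, 0.3, 0.6], k=1)[0]
```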
2308.10848 | 11 | Horizontal Structure. In this democratic structure, each agent, denoted as m_i ∈ M, shares and refines its decision a_{m_i}. The group's collective decision, A = f({a_{m_i}}_i) ∈ A, emerges as an integration of individual agents' decisions using a function f, which might involve techniques like summarization or ensemble. This structure is especially effective in scenarios like consulting and tool using. | 2308.10848#11 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
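As a concrete reading of the horizontal structure, the sketch below uses a majority vote as the integration function f; the text also allows summarization or other ensembles, and the agent objects here are hypothetical.

```python
from collections import Counter

def horizontal_decision(experts, state, goal):
    """A = f({a_mi}): integrate per-agent decisions; f is a majority vote here,
    assuming proposals are hashable (e.g., short strings)."""
    proposals = [expert.decide(state, goal) for expert in experts]
    decision, _ = Counter(proposals).most_common(1)[0]
    return decision
```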
2308.10837 | 12 | [Figure 1 diagram: training-phase panels showing the input Entities, the entity-level Masks, the Division into corrupted text and masked spans, the bidirectional encoder with autoregressive blank infilling, the inter-position and intra-position ids, and the inference-phase autoregressive judgment for tokens unknown beforehand.]
Figure 1: This is the framework of RecSysLLM based on a pre-trained generative language model (GLM). To transform the GLM into a specialized model for recommendation systems, several modifications are made while preserving the core knowledge and capabilities of the original language model architecture, such as the new mask mechanism, span order, positional encoding, dynamic position mechanism, and so on.
# Training Phase | 2308.10837#12 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 12 | Vertical Structure. Conversely, the vertical structure has a clear division of roles. An agent, termed the solver m*, proposes an initial decision a*_0. Other agents, as reviewers, provide feedback on this proposal, prompting iterative refinements by the solver until a consensus is reached among reviewers or a set number of iterations is exhausted. The final decision A is given as A = a*_k ∈ A, with k indicating the number of refinements. The vertical structure is preferable for tasks like math problem-solving and software development, where only one refined decision is required.
2.3 ACTION EXECUTION
In the decision-making stage, agents collaboratively contribute to a group decision A containing actions that need to be executed in the current environment. Within the action execution stage, agents then execute the collectively decided actions in the environment. Depending on the implementation, some agents might not perform any execution. As a result of these actions, the state of the environment transitions from s_old to s_new = T(s_old, A).
Table 1: The results on different tasks that evaluate the agents' general capabilities. | 2308.10848#12 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
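The solver-reviewer protocol of the vertical structure can be sketched as the loop below; `solver` and `reviewers` are hypothetical agent objects with `propose`/`review`/`refine` methods.

```python
def vertical_decision(solver, reviewers, state, goal, max_iters=3):
    """A = a*_k: iterative refinement until reviewer consensus or k iterations."""
    decision = solver.propose(state, goal)                 # a*_0
    for _ in range(max_iters):
        critiques = [r.review(decision, goal) for r in reviewers]
        if all(c.approved for c in critiques):             # consensus reached
            break
        decision = solver.refine(decision, critiques)      # next refinement
    return decision
```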
2308.10837 | 13 | To be consistent with the architecture of GLM, our model is still trained by optimizing an autoregressive blank infilling objective based on an input text x = [x1, · · · , xn]. Different from the general language text in GLM, our input text is composed of user features and behavioral sequences. Although textualized user features and behavioral sequences are also composed of multiple tokens, they often represent a complete meaning as a whole. If they are split into different parts, like regular text, they will lose their unique meaning. In addition, the LLM's power comes from the way it tokenizes and processes text. It has been trained on a vast amount of data and has learned to recognize patterns and relationships between tokens, enabling it to identify entities accurately and extract information. If we were to create a new tokenization method, we would lose the LLM's power. Therefore, to maintain the LLM's power and supplement the new knowledge in the recommendation data, it is best to leverage the existing tokenization and enhance it with additional information and capabilities rather than create a new tokenization. In the following, we name the attributes in user features and items in the behavioral sequences as entities, which means that they are complete units and have fixed meanings. Therefore, as shown in the "Entities" panel of Figure 1, our data are composed of plain language text and entities, where (x1, x2, and x3) have merged to form e1 and (x6 and x7) to form e2. x4 and x5 are separate tokens. | 2308.10837#13 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 13 |
Table 1: The results on different tasks that evaluate the agents' general capabilities.
Task                                      GPT-3.5-Turbo              GPT-4
                                          CoT    Solo   Group        CoT    Solo   Group
Conversation (FED)                        81.6   81.1   85.1         95.4   95.8   96.8
Creative Writing (Commongen-Challenge)    76.6   93.6   92.3         95.9   99.0   99.1
Mathematical Reasoning (MGSM)             80.4   82.4   80.8         95.2   96.0   95.2
Logical Reasoning (Logic Grid Puzzles)    -      -      -            59.5   64.0   66.5
# 2.4 EVALUATION
The evaluation stage is vital for AGENTVERSE, guiding improvements for subsequent rounds. At this stage, the feedback mechanism R assesses the difference between the current state s_new and the desired goal g ∈ G. It then offers verbal feedback r = R(s_new, g), detailing areas of shortcoming and suggesting ways to enhance performance. R can either be defined by humans (in a human-in-the-loop (Amershi et al., 2014) setting) or by an agent for automatic feedback, depending on the implementation. | 2308.10848#13 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
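A minimal sketch of this evaluation stage, assuming an LLM-based critic (the human-in-the-loop variant would simply replace the callable); the prompt wording is hypothetical.

```python
def evaluate(critic_llm, result_summary: str, goal: str):
    """r = R(s_new, g): return (goal_met, verbal_feedback)."""
    reply = critic_llm(f"Goal: {goal}\nCurrent result: {result_summary}\n"
                       "Start your answer with 'yes' or 'no': is the goal met? "
                       "Then list shortcomings and concrete suggestions.")
    return reply.lower().startswith("yes"), reply
```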
2308.10837 | 14 | a whole. The [MASK] mechanism will not break complete entities, which highlights the whole structure of entities and helps to capture the interrelationship between entities. For example, as shown in the "Masks" panel of Figure 1, x1, x2, and x3, composing e1, are blanked out as a whole, and the single token x5 is also blanked out. Therefore, we form x_corrupt with [M], x4, [M], x6, and x7 in the "Division" panel of Figure 1. For language processing tasks, we adopt the multi-task pretraining setup (Du et al. 2021) with entity-level [M], sentence-level [sM], and document-level [gM]. Specifically, entity-level refers to | 2308.10837#14 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 14 | If the goal g remains unmet, the feedback r is passed back to the initial expert recruitment stage. In the next round, the expert recruitment stage will consider both the feedback r and the goal g to adjust the group's composition, aiming to evolve a more effective multi-agent group according to the current progress.
# 3 EXPERIMENTS
To validate the superiority of AGENTVERSE in facilitating agent collaboration over standalone agents, we design four experimental tasks. Each task is designed to assess distinct aspects of an agent group: general understanding and reasoning capabilities, coding capabilities, tool utilization capabilities, and their potential in Embodied AI. Our findings, which are detailed in this section, consistently highlight the superior performance of AGENTVERSE across these varied and multi-faceted tasks. Of particular interest is the emergence of unique collaborative behaviors within agent groups. While this section focuses on the advantages of multi-agent setups, a deeper exploration of these emergent behaviors will be presented in Section 4. | 2308.10848#14 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 15 | Mask Mechanism. To inject the new knowledge of recommendation tasks into the original LLM, we follow the principles of the LLM and design a new mask mechanism and position strategies. Similar to the GLM (Du et al. 2021), multiple text spans {s1, · · · , sm} are sampled, where each span si corresponds to a series of consecutive tokens [si,1, · · · , si,li] in x. Each span is replaced with a single [MASK] token. The remaining text and [MASK]s form a corrupted text x_corrupt. In the GLM, since entities do not exist, the tokens can be randomly sampled into spans. However, in our model, the multiple and consecutive tokens composing an entity should not be split into different parts. In other words, the tokens of an entity are treated as | 2308.10837#15 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
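The entity-aware constraint just described can be sketched as span sampling over indivisible units: single tokens outside entities, and whole entities otherwise. The token representation below, (token, entity_id-or-None) pairs, is an assumption for illustration.

```python
import random

def sample_entity_spans(tokens, mask_ratio=0.15):
    """tokens: list of (token, entity_id or None). Returns (start, end) spans to
    replace with [MASK]; tokens of one entity are never split across spans."""
    units, i = [], 0
    while i < len(tokens):
        j = i + 1
        if tokens[i][1] is not None:              # extend to the whole entity
            while j < len(tokens) and tokens[j][1] == tokens[i][1]:
                j += 1
        units.append((i, j))
        i = j
    k = max(1, int(mask_ratio * len(units)))
    return sorted(random.sample(units, k))        # each span is masked as a whole
```

For the running example (x1 x2 x3 forming e1, x6 x7 forming e2), the candidate units are (0, 3), (3, 4), (4, 5), and (5, 7), so e1 and e2 can only be masked in their entirety.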
2308.10848 | 15 | Setups. In all the experiments, we evaluate the performance of agents driven by GPT-3.5-Turbo-0613 and GPT-4-0613 across various tasks. All the experiments are done in a zero-shot setting. For all the quantitative experiments in this section, we compare three settings: (1) CoT: the chain-of-thought (CoT) agent; (2) Solo: using AGENTVERSE with a single agent in the decision-making stage. Compared with CoT, Solo additionally incorporates the expert recruitment, action execution, and evaluation modules; (3) Group: implementing AGENTVERSE with multiple agents collaborating during the decision-making. More detailed experimental setups for each task can be found in Appendix A.
3.1 GENERAL UNDERSTANDING AND REASONING CAPABILITIES
To assess the agents' general understanding and reasoning capabilities, we use four datasets: FED (Mehri & Eskénazi, 2020), Commongen Challenge (Madaan et al., 2023), MGSM (Shi et al., 2023), and Logic Grid Puzzles (Srivastava et al., 2022). Detailed descriptions of these datasets and metrics can be found in Appendix A. The first two datasets are used to measure the agents' text understanding and creative writing abilities, while the latter two focus on examining the agents' reasoning abilities, including mathematical and logical reasoning. | 2308.10848#15 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 16 | a whole. The [MASK] mechanism will not break complete entities, which highlights the whole structure of entities and helps to capture the interrelationship between entities. For example, as shown in the "Masks" panel of Figure 1, x1, x2, and x3, composing e1, are blanked out as a whole, and the single token x5 is also blanked out. Therefore, we form x_corrupt with [M], x4, [M], x6, and x7 in the "Division" panel of Figure 1. For language processing tasks, we adopt the multi-task pretraining setup (Du et al. 2021) with entity-level [M], sentence-level [sM], and document-level [gM]. Specifically, entity-level refers to randomly blanking out continuous spans of tokens from the input text, following the idea of autoencoding, which captures the interdependencies between entities. Sentence-level restricts the masked spans to full sentences. Document-level samples a single span whose length is drawn from a uniform distribution over 50%–100% of the original length; this objective aims at long text generation. | 2308.10837#16 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
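Of the three granularities, only the document-level rule is fully specified numerically (a single span covering 50%-100% of the input); a minimal sketch of that rule follows, with the uniform sampling as stated.

```python
import random

def document_level_span(n_tokens: int):
    """Sample one span whose length is uniform over 50%-100% of the input,
    supporting the long-text-generation objective."""
    length = random.randint(n_tokens // 2, n_tokens)   # 50%-100% of the text
    start = random.randint(0, n_tokens - length)
    return start, start + length
```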
2308.10848 | 16 | Experimental Results. The results in Table 1 show that agents assembled by AGENTVERSE (Solo and Group setups) consistently outperform the standalone CoT agent, irrespective of the LLM used. In our preliminary evaluations, GPT-3.5-Turbo struggles with accurately handling the logic grid puzzles dataset; therefore, we omit the result of GPT-3.5-Turbo on logical reasoning.
Interestingly, for GPT-3.5-Turbo, the Group setup underperforms the Solo setup in two of three tasks, indicating that the discussion in decision-making might adversely impact performance for agents based on GPT-3.5-Turbo in certain contexts. Delving deeper into this observation, one predominant factor surfaces: the susceptibility to erroneous feedback. A recurring pattern observed in the Group
| 2308.10848#16 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 17 | Span Order. We implement the autoregressive blank infilling objective with the following techniques. The input x is divided into two parts: one part is the corrupted text x_corrupt, and the other consists of the masked spans. Our model automatically learns a bidirectional encoder for the first part and a unidirectional decoder for the second part in a unified model. The model predicts the missing tokens in the spans from the corrupted text in an autoregressive manner, which means that when predicting the missing tokens in a span, the model has access to the corrupted text and the previously predicted spans. Instead of randomly permuting the order of the spans as in the original GLM (Du et al. 2021), we keep all spans in chronological order to preserve the interrelationship among different entities. Formally, we define the pretraining objective over a length-m index sequence [1, 2, ..., m] as
$$\sum_{i=1}^{m} \log p\big(s_i \mid x_{\text{corrupt}}, s_1, \ldots, s_{i-1}; \theta\big) \qquad (1)$$
Positional Encoding. To enable autoregressive generation, each span is padded with special tokens [START] and [END], for input and output, respectively. To be consistent with the original LLM, we cannot arbitrarily modify, add, or | 2308.10837#17 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
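Keeping spans in chronological order makes the decoder input easy to lay out. The sketch below shows one plausible construction of the encoder/decoder token streams for objective (1); the token strings "[START]" and "[END]" stand in for the special tokens.

```python
def build_infilling_io(corrupt_tokens, spans):
    """corrupt_tokens: x_corrupt with [MASK] placeholders; spans: masked spans
    kept in their original (chronological) order, never permuted."""
    inputs, targets = list(corrupt_tokens), []
    for span in spans:                         # s_1, ..., s_m in order
        inputs += ["[START]"] + span           # decoder sees [S] then the span
        targets += span + ["[END]"]            # model predicts the span, then [E]
    return inputs, targets
```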
2308.10848 | 17 | [Figure: consulting case study] Consulting with Group Setup. Round 0: 1. A chemical engineer; 2. A civil engineer; 3. An environmental scientist. Round 1: 1. A chemical engineer; 2. An economist; 3. A lawyer. Consulting with Solo Setup. Round 0 Decision-Making / Round 1 Decision-Making: 1. Location Selection: Consideration of regional weather conditions; closer to the hydrogen production source to minimize transportation; checking the site soil properties for storage tank stability [...] 2. Compression Tech: Use metal hydrides, carbon nanotubes, and complex hydrides for enhanced storage efficiency; thermal management system optimization; examination of metal hydrides' regeneration and cycling life, carbon nanotubes' hydrogen uptake and release 3. Safety Measures: Installation of leak detection systems; assessing potential cascading failure events; pressure relief devices, fire protection [...] 4. Green Energy Sources: Development of efficient fuel cells to convert stored hydrogen into electricity. 5. Infrastructure Development: Ensuring a solid foundation and structurally sound station design 6. Environmental Considerations: Conducting an environmental impact assessment 1. [...] | 2308.10848#17 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10848 | 18 | [...] 6. Environmental Considerations: [...] proper waste management systems, consideration of greenhouse gas emissions during hydrogen production, a comprehensive disaster management plan, biodiversity conservation during facility establishment and operation 7. Community Engagement: Consider public consultations 8. Government Support: Compliance with state, federal guidelines on hazardous material storage [...] 9. Economic Considerations: Detailed cost-benefit analysis, analyzing economic impact on local communities, potential market opportunities [...] 10. Collaboration with Universities and Research Institutions: collaborate with local universities and research institutions 11. Legal Considerations: obtaining a legal audit 1. Location Selection: Find an optimal location that complies with safety regulations and is close to major transportation routes 2. Compression Tech: Use state-of-the-art compression technology to maximize storage capacity 3. Safety Measures: Implement strict safety measures, including pressure relief devices, leak detection systems, and automatic shutdown features. 4. Green Energy Sources: To make the storage station more sustainable, consider using renewable energy | 2308.10848#18 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 19 | The inter-position id represents the position in the corrupted text x_corrupt. For the masked spans, it is the position of the corresponding [MASK] token. For the intra-position id, we follow the essential meaning in the original LLM, which still refers to the intra-position. Instead of the scope of the whole span, we extend it to a finer granularity. For the entities, it represents the intra-relationship among entities. As shown in Figure 1, for separate tokens (not in the entities) in the encoder part ([M], x4, [M]), their intra-position ids are 0. For consecutive tokens in the entities (x6 and x7), they are numbered in chronological order. For tokens in the autoregressive blank infilling part, they range from 1 to the length of the entities including [S], such as (entities: [S], x1, x2, x3 → 1, 2, 3, 4) and (independent token: [S], x5 → 1, 2). The two positional ids are projected into two vectors via learnable embedding tables, which are both added to the input token embeddings. | 2308.10837#19 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
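To make the dual positional ids described in the chunk above concrete, the following is a minimal PyTorch sketch (not the authors' released code; the vocabulary size, hidden size, and maximum id ranges are illustrative assumptions) of inter- and intra-position ids being looked up in two learnable embedding tables and summed with the token embeddings:

import torch
import torch.nn as nn

class DualPositionEmbedding(nn.Module):
    # Token embeddings plus two learnable positional tables: one for
    # inter-position ids (position in the corrupted text / [MASK] slot)
    # and one for intra-position ids (position inside an entity span).
    def __init__(self, vocab_size=50_000, hidden=1024, max_inter=1024, max_intra=64):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.inter = nn.Embedding(max_inter, hidden)
        self.intra = nn.Embedding(max_intra, hidden)

    def forward(self, token_ids, inter_ids, intra_ids):
        # Both positional vectors are added to the input token embeddings,
        # mirroring the description in the chunk above.
        return self.tok(token_ids) + self.inter(inter_ids) + self.intra(intra_ids)

# Toy usage: an entity span "[S] x1 x2 x3" filling one [MASK] slot gets
# intra ids 1..4, while all four tokens share that slot's inter id.
emb = DualPositionEmbedding()
token_ids = torch.tensor([[11, 12, 13, 14]])
inter_ids = torch.tensor([[3, 3, 3, 3]])
intra_ids = torch.tensor([[1, 2, 3, 4]])
print(emb(token_ids, inter_ids, intra_ids).shape)  # torch.Size([1, 4, 1024])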
2308.10848 | 19 | including pressure relief devices, leak detection systems, and automatic shutdown features. 4. Green Energy Sources: To make the storage station more sustainable, consider using renewable energy sources. 5. Collaboration with Universities and Research Institutions. 6. Community Engagement: Engage with the local community to educate them about the benefits of hydrogen energy. 7. Government Support: Seek support and funding from local and state government. 8. Infrastructure Development: Develop infrastructure for hydrogen fueling stations. 1. Location Selection: Use Geographic Information System (GIS) technology to create a detailed map of potential sites. 2. Compression Tech: Implement cutting-edge compression technology like Ionic Liquid Piston Compressors. 3. Safety Measures: [Roughly the same as last round] 4. Green Energy Sources: Incorporate solar panels or wind turbines to power the compression process. 5. Collaboration with Universities and Research Institutions: Form research partnerships with institutions like Ohio State University or Case Western Reserve University. 6. Community Engagement: Engage with the local community to educate them about the benefits of hydrogen energy. 7. Government Support: Apply for grants and incentives offered by the Ohio Development Services Agency. 8. Infrastructure Development: Develop a network of hydrogen fueling stations | 2308.10848#19 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 20 | [Figure 2 graphic: token generation under autoregressive blank infilling with autoregressive judgment of intra-position ids; example entity spans include "Watch Star Wars", "Apple AirPods Pro", and "Casual Wear".]
Figure 2: This is the dynamic position mechanism. When one token is generated, it will be judged as one part of an entity or not. If it and the previous token belong to one entity, the intra-position id will continue to grow. Otherwise, it will start at 1 again. | 2308.10837#20 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 20 | hydrogen energy 7. Government Support: Apply for grants and incentives offered by the Ohio Development Services Agency 8. Infrastructure Development: Develop a network of hydrogen fueling stations Evaluator: Comprehensiveness: 8, Detailedness: 7, Feasibility: 8, Novelty: 6. The team should also consider the economic feasibility and cost-effectiveness of the proposed solutions. [...] For human resource staff, recruiting experts in advanced materials and hydrogen fuel cell technology would [...] Evaluator: Comprehensiveness: 9, Detailedness: 8, Feasibility: 8, Novelty: 8. The ideas are comprehensive and well detailed, covering multiple aspects of the project [...] There's a need for a systems integration expert to ensure [...] Furthermore, a public relations specialist should be included to [...] Evaluator: Comprehensiveness: 8, Detailedness: 7, Feasibility: 8, Novelty: 7. The ideas are comprehensive and practical [...] However, more detailed implementation steps could be provided, such as specific technologies or strategies to be used, and potential challenges to be addressed. Evaluator: Comprehensiveness: 9, Detailedness: 8, Feasibility: | 2308.10848#20 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
Inference phase. Because our pre-trained model is designed to fit different downstream tasks, the length of the generated text should be unknown beforehand and flexible for the different tasks. Further, due to the existence of entities, the intra-position ids represent the relative position within the entity. As shown in the "Inference Phase" of Figure 1, we cannot specify the intra-position ids in advance when autoregressive blank infilling. Hence, we designed a dynamic position mechanism for the mask and position modifications made during the inference phase. It conducts the autoregressive judgment to determine and complement the intra-position ids one by one as each token is generated in the autoregressive generation procedure. Specifically, we establish an entity pool beforehand, which stores all the tokens of the entities that exist in our recommendation task. When one token is generated, it is judged as part of an entity or not. We utilize the Trie
algorithm (Bodon and Rónyai 2003), a tree data structure used for locating specific keys within a set, to check whether the generated token and the previous token belong to the same entity. If they belong to one entity, the intra-position id will continue to grow. Otherwise, it will start at 1 again. The detailed procedure is illustrated in Figure 2.
# Experiments | 2308.10837#21 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
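The Trie-based judgment described in the chunk above can be sketched as follows (a simplified illustration, not the paper's implementation: the entity pool, tokenization, and the handling of [S] tokens are reduced to their essentials):

class TrieNode:
    def __init__(self):
        self.children = {}

class EntityTrie:
    # Entity pool stored as a Trie so that "does this token extend the
    # current entity prefix?" is a single dictionary lookup.
    def __init__(self, entities):
        self.root = TrieNode()
        for tokens in entities:
            node = self.root
            for t in tokens:
                node = node.children.setdefault(t, TrieNode())

def assign_intra_ids(tokens, trie):
    # Walk the generated sequence; the intra-position id keeps growing
    # while consecutive tokens stay inside one entity, else restarts at 1.
    ids, node, intra = [], None, 0
    for tok in tokens:
        if node is not None and tok in node.children:
            intra += 1            # same entity: id continues to grow
        else:
            intra = 1             # new entity or plain token: restart at 1
            node = trie.root
        node = node.children.get(tok)  # None once tok leaves the entity pool
        ids.append(intra)
    return ids

# Toy entity pool with two entities.
trie = EntityTrie([["apple", "airpods", "pro"], ["star", "wars"]])
print(assign_intra_ids(["watch", "star", "wars"], trie))          # [1, 1, 2]
print(assign_intra_ids(["apple", "airpods", "pro", "watch"], trie))  # [1, 2, 3, 1]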
2308.10837 | 22 | # Experiments
Experimental Setup. Datasets. We evaluate our method on three real-world e-commerce datasets from Amazon.com, spanning the categories of Sports & Outdoors, Beauty, and Toys & Games. The datasets contain user ratings and reviews from 2019, along with transaction records between January 1 and December 31 (Zhou et al. 2020; Xue et al. 2022, 2023). Key statistics of the resulting datasets are provided in Table 1. | 2308.10837#22 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 22 | Figure 2: The illustration of an example process of consulting. The task is to give some suggestions on building a compressed hydrogen storage station in Ohio.
setup is that: sometimes Agent A, despite starting with a correct answer, would be easily swayed by Agent B's incorrect feedback. Roughly 10% of errors in the MGSM dataset can be traced to this dynamic. Notably, this phenomenon is absent in GPT-4-based agents, highlighting the importance of agents' resilience to conflicting information during collaborative discussions.
Overall, the results show that AGENTVERSE effectively enhances the general understanding and reasoning capabilities of agents. Moreover, agents driven by advanced LLMs demonstrate better performance when engaged in collaborative decision-making. The nuanced challenges observed with GPT-3.5-Turbo indicate the need to improve LLMs' robustness to incorrect information so that the collaboration can amplify individual strengths without introducing new vulnerabilities.
Case Study: Consulting. In Table 1, the Group setup does not show a clear advantage over the Solo setup for both LLMs. This is mainly because the evaluation metrics for each benchmark have a limited scope. In the following case, we highlight the benefits of the group formed by GPT-4 agents by focusing on a consulting scenario where the group acts as a consultancy, responding to inquiries as shown in Figure 2. The goal is to offer suggestions for a hydrogen storage station in Ohio. | 2308.10848#22 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 23 | Metrics. Following the experiments in (Geng et al. 2022), we cover five different task families – rating, sequential recommendation, explanation, review, and direct recommendation – to facilitate the multitask pretraining for the recommendation. For rating prediction, we adopt Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) as evaluation metrics. For sequential recommendation and direct recommendation tasks, we employ top-k Hit Ratio (HR@k) and Normalized Discounted Cumulative Gain (NDCG@k) to evaluate the performance and report HR@1, 5, 10 and NDCG@5, 10. For explanation generation and review summarization, we evaluate different methods with BLEU-4, ROUGE-1, ROUGE-2, and ROUGE-L. Lower values of RMSE and MAE indicate better performance, while higher values are preferred for all other metrics. In all result tables, bold numbers represent the best performance, while underlined numbers refer to the second-best performance. | 2308.10837#23 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
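For reference, the top-k metrics named in the chunk above have standard definitions; a minimal sketch, assuming a single ground-truth item per test case:

import math

def hit_ratio_at_k(ranked_items, target, k):
    # HR@k: 1 if the ground-truth item appears in the top-k list.
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k):
    # NDCG@k with one relevant item: 1 / log2(rank + 1) if hit, else 0.
    # The ideal DCG is 1 (relevant item at rank 1), so it is the normalizer.
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0

ranking = ["item42", "item7", "item99"]
print(hit_ratio_at_k(ranking, "item7", 5))  # 1.0
print(ndcg_at_k(ranking, "item7", 5))       # 1/log2(3) ≈ 0.631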
2308.10848 | 23 | At first glance, the Solo setup seems to cover a broader scope than the Group setup at round 0. However, the Group setup offers more depth thanks to the recruited experts. For instance, while the Solo setup might suggest something basic like "Find an optimal location", the Group setup provides detailed advice, such as "evaluating site soil properties to ensure storage tank stability." By the second round, different experts offer new insights in the Group setup. As a result, the Group setup not only covers a broader range (highlighted in red in the referenced figure) but also gives more detailed advice. For a detailed look at agent interactions, see Appendix F.
3.2 CODING CAPABILITIES
In this section, we first assess the agents' coding capabilities using the Humaneval code completion dataset. Next, through a case study, we illustrate how collaboration among multiple agents improves output quality, highlighting its superiority over software development by just one agent.
Experimental Results. In Table 2, we see a clear performance improvement moving from CoT to Solo and then to Group setup. This trend is especially pronounced with GPT-4, which sees a performance boost from 83.5 to 89.0. These results highlight AGENTVERSE's effectiveness in managing a skilled group of agents for coding. For GPT-3.5-Turbo, although we have observed a drop
| 2308.10848#23 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 24 | Baselines for Multiple Tasks. To demonstrate competence on a wide range of recommendation-related tasks, we adopt the same representative approaches as (Geng et al. 2022) for different tasks, such as Rating Prediction (MF (Koren, Bell, and Volinsky 2009) and MLP (Cheng et al. 2016)), Direct Recommendation (BPR-MF (Rendle et al. 2009), BPR-MLP (Cheng et al. 2016), and SimpleX (Mao et al. 2021)), Sequential Recommendation (Caser (Tang and Wang 2018), HGN (Ma, Kang, and Liu 2019), GRU4Rec (Hidasi et al. 2016), BERT4Rec (Sun et al. 2019), FDSA (Zhang et al. 2019), SASRec (Kang and McAuley 2018), and S3-Rec (Zhou et al. 2020)), Explanation Generation (Attn2Seq (Dong et al. 2017), NRT (Li et al. 2017), PETER (Li, Zhang, and Chen 2021), and PETER+), and Review Summarization (T0 (Sanh et al. 2022) and GPT-2 (Radford et al. 2019)). The detailed baselines are provided in the Appendix. | 2308.10837#24 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10848 | 24 | [Figure 3 graphic, part 1: software development with the Group setup (an experienced programmer, a software developer, a UI/UX designer, a software tester) versus the Solo setup over rounds 0 and 1; each round shows the decision-making stage and the resulting calculator, compared on runnability, color difference, error handling, functionality, keyboard input, and click feedback.] | 2308.10848#24 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 25 | Implementation. To facilitate the multitask prompt-based pretraining for the recommendation, Geng et al. (2022) created a collection of personalized prompt templates. The collection covers five different task families – rating, sequential recommendation, explanation, review, and direct recommendation. The
Table 1: Basic statistics of the experimental datasets.
Dataset   #Users   #Items   #Reviews   Sparsity (%)
Sports    35,598   18,357   296,337    0.0453
Beauty    22,363   12,101   198,502    0.0734
Toys      19,412   11,924   167,597    0.0724 | 2308.10837#25 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
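The sparsity column of Table 1 is consistent with #Reviews / (#Users × #Items) expressed as a percentage (i.e., the interaction density of the user-item matrix); a quick sanity check:

datasets = {
    "Sports": (35_598, 18_357, 296_337),
    "Beauty": (22_363, 12_101, 198_502),
    "Toys":   (19_412, 11_924, 167_597),
}
for name, (users, items, reviews) in datasets.items():
    density_pct = 100.0 * reviews / (users * items)
    print(f"{name}: {density_pct:.4f}%")  # ≈ 0.0453, 0.0734, 0.0724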
2308.10848 | 25 | [Figure 3 graphic, part 2: evaluator scores for completeness, functionality, readability, and robustness across setups and rounds, with remarks such as "the keyboard input doesn't include functionality for delete, clear, or calculate operations", "use a safer way to evaluate mathematical expressions; add more comments; add more exception handling", and "the code is well structured and accomplishes its task; there are comments that make it easier to understand what each part does".] | 2308.10848#25 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 26 | prompts include personalized fields for users and items to help the model discover user-item preferences. For rating prediction, prompts ask to predict a user's rating or preference for an item. For sequential recommendation, prompts ask to predict the next item a user will interact with. For explanation, prompts ask to generate text explaining a user's preferences. For review, prompts summarize or predict ratings from reviews. For direct recommendation, prompts ask whether to recommend an item to a user. The complete collection of personalized prompts with examples is provided in the Appendix of (Geng et al. 2022). These prompts enable the building of diverse training examples from raw data for multitask pretraining. We pretrain our RecSysLLM with diverse training examples with different prompt templates from all five task families to verify its multitask learning ability. Besides, we adopt a part of the prompts in each task family for zero-shot evaluation while all remaining prompts are utilized for multitask prompted pretraining. As a result, we are able to not only compare the performance across various recommendation tasks but also evaluate the zero-shot generalization capability on unseen prompts. | 2308.10837#26 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
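As an illustration of how such personalized prompts turn raw interaction data into training examples, here is a hedged sketch; the template strings below are paraphrases for three of the five task families, not the exact prompts of Geng et al. (2022):

# Hypothetical personalized prompt templates; field names and wording
# are illustrative stand-ins for the published prompt collection.
TEMPLATES = {
    "rating":     "What star rating will user_{user} give to item_{item}?",
    "sequential": "user_{user} has interacted with {history}. "
                  "What item will they interact with next?",
    "direct":     "Should we recommend item_{item} to user_{user}? Answer yes or no.",
}

def build_example(task, **fields):
    # Fill the personalized fields of one template to get a training example.
    return TEMPLATES[task].format(**fields)

print(build_example("sequential", user=17, history="item_3, item_52, item_9"))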
2308.10848 | 26 | Figure 3: The illustration of an example process of developing a calculator with GUI in Python.
Table 2: The pass@1 on Humaneval.
Setting   GPT-3.5-Turbo   GPT-4
CoT       73.8            83.5
Solo      74.4            87.2
Group     75.6            89.0
Case Study: Software Development. Our examination of the code generated for Humaneval by the Group setup in AGENTVERSE reveals benefits beyond mere correctness. The agent group refines solutions, yielding more efficient, robust, and secure algorithms whose merits are not captured by the simple pass@1 metric. To better elucidate these advantages, we present a case study with GPT-4 on software development, a domain requiring multifaceted collaboration and refinement.
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
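pass@1 in Table 2 is the fraction of Humaneval problems whose generated solution passes all unit tests. For completeness, the commonly used unbiased estimator of pass@k from n samples with c passes (which reduces to the plain pass rate when n = k = 1) can be sketched as:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k for one problem: 1 - C(n-c, k) / C(n, k),
    # given n generated samples of which c pass the unit tests.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With one sample per problem (n = k = 1), this is the raw pass rate,
# which is what the pass@1 column in Table 2 reports.
print(pass_at_k(1, 1, 1), pass_at_k(1, 0, 1))  # 1.0 0.0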
2308.10837 | 27 | Our RecSysLLM model for these English-language tasks leverages the powerful English GLM-10B model (Du et al. 2021) as a foundation. GLM is a General Language Model pretrained with an autoregressive blank-filling objective that can be finetuned on various natural language understanding and generation tasks. Our approach builds on this pre-trained GLM-10B foundation by utilizing a parameter-efficient fine-tuning method called LoRA (Low-Rank Adaptation) (Hu et al. 2021) to adapt the model to our specific recommendation tasks. LoRA enables efficiently customizing the enormous GLM-10B model to specialized domains by learning a low-dimensional decomposition of the model update. This allows us to tap into GLM-10B's broad language knowledge while calibrating it to our RecSysLLM objectives. We inject trainable rank decomposition matrices into each query_key_value, dense, dense_h_to_4h, and dense_4h_to_h layer of the Transformer architecture in GLM-10B. We pretrain our RecSysLLM for eight epochs with AdamW optimization (Loshchilov and Hutter 2017) on four NVIDIA RTX A100 | 2308.10837#27 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
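A minimal sketch of injecting such adapters with the Hugging Face peft library follows; the rank, alpha, and dropout values are illustrative assumptions (the text does not state them), while the target module names match the GLM Transformer layers listed above:

from peft import LoraConfig, get_peft_model
from transformers import AutoModel

# Illustrative hyperparameters; the chunk only specifies *which* layers
# receive the trainable rank-decomposition matrices.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "query_key_value",  # attention projections in GLM blocks
        "dense",            # attention output projection
        "dense_h_to_4h",    # MLP up-projection
        "dense_4h_to_h",    # MLP down-projection
    ],
)

base = AutoModel.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable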
2308.10848 | 27 | We present an example where AGENTVERSE creates a Python-based calculator GUI by bringing together diverse expert agents. A concise overview of the development process is visualized in Figure 3. Comparing the applications from the Group and Solo setups reveals notable distinctions. Both achieve core functionality, but the Group-created calculator boasts a user-friendly interface with features like color distinctions and keyboard input. This improved design resulted from the diverse feedback of the multi-agent group. Suggestions from the UI designer and the evaluators enhance the user experience, while the software tester improves code robustness. A deeper examination of the code confirms that the multi-agent group's output excels in exception handling compared to that of a solo agent. The code generated by the two setups and the complete progress can be seen in Appendix F.
3.3 TOOL UTILIZATION CAPABILITIES
The capability of LLMs to use real-world tools has been emphasized in many recent studies (Schick et al., 2023a; Qin et al., 2023a). By equipping the LLMs with different tools such as a calculator, a web browser, and a code interpreter, the capabilities of LLMs can be significantly improved. In this section, we demonstrate that AGENTVERSE enables a group of agents to address intricate and multi-faceted tasks that require interaction with multiple tools, thereby enhancing work efficiency. | 2308.10848#27 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 28 | 10B. We pretrain our RecSysLLM for eight epochs with AdamW optimization (Loshchilov and Hutter 2017) on four NVIDIA RTX A100 GPUs. In order to achieve efficient use of memory and distributed training, we use the DeepSpeed (Rasley et al. 2020) module. The batch size is set to 32 per GPU. We set the peak learning rate as 1 × 10^-5 and use a warmup strategy to adjust the learning rate. In addition, we set the maximum length of input tokens to 1024. | 2308.10837#28 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
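A minimal sketch of the optimization recipe named above (AdamW with a peak learning rate of 1e-5 and a warmup schedule); the stand-in model, step counts, and data are placeholders, since the actual training uses GLM-10B with LoRA adapters under DeepSpeed:

import torch
from torch import nn
from transformers import get_linear_schedule_with_warmup

# Stand-in model; in the paper this is GLM-10B with LoRA adapters.
model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # peak LR from the text
scheduler = get_linear_schedule_with_warmup(                # warmup strategy
    optimizer, num_warmup_steps=100, num_training_steps=1_000  # placeholder counts
)

for step in range(1_000):
    x = torch.randn(32, 16)             # batch size 32 per device, as in the text
    loss = model(x).pow(2).mean()       # dummy objective for the sketch
    loss.backward()
    optimizer.step(); scheduler.step(); optimizer.zero_grad()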
Experimental Results. We design a set of 10 intricate tasks, each requiring the use of at least two distinct tools to accomplish. By providing agents access to several tools, including the Bing search API, a web browser, a code interpreter, and task-related APIs, we explore how AGENTVERSE facilitates agent collaboration, dissects the overarching task into manageable sub-tasks, and effectively deploys the available tools to address realistic user queries. Of the 10 challenging tasks provided, an agent group orchestrated by AGENTVERSE adeptly accomplishes 9 tasks. On the other hand, a standalone ReAct agent (Yao et al., 2023b), which is a prevalent agent design for tool use, can only fulfill 3 tasks. In 6 out of 7 tasks where the single ReAct agent fails, the agent does not adhere to one or more criteria detailed in the task and exits earlier than expected. We refer interested readers to Appendix B for a comprehensive comparison of the solutions given by AGENTVERSE and a single ReAct agent.
| 2308.10848#28 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 29 | # Performance.
We pretrain our RecSysLLM on a diverse set of training examples utilizing different prompt templates across all five
Table 2: Performance on rating prediction. The shadow refers to the test on unseen prompts in a zero-shot manner.
Methods                 Sports (RMSE / MAE)   Beauty (RMSE / MAE)   Toys (RMSE / MAE)
MF                      1.0234 / 0.7935       1.1973 / 0.9461       1.0123 / 0.7984
MLP                     1.1277 / 0.7626       1.3078 / 0.9597       1.1215 / 0.8097
P5                      1.0357 / 0.6813       1.2843 / 0.8534       1.0544 / 0.7177
RecSysLLM               1.0410 / 0.7012       1.2721 / 0.8431       1.0246 / 0.7012
P5 (zero-shot)          1.0292 / 0.6864       1.2870 / 0.8531       1.0245 / 0.6931
RecSysLLM (zero-shot)   1.0278 / 0.6631       1.2671 / 0.8235       1.0112 / 0.6014 | 2308.10837#29 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
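The two rating-prediction metrics in Table 2 are standard; a minimal sketch, assuming the predicted ratings have already been parsed from the model's generated text:

import math

def rmse(preds, targets):
    # Root Mean Square Error over paired predictions and ground truth.
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds))

def mae(preds, targets):
    # Mean Absolute Error over the same pairs.
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

# For an LLM-based recommender, preds come from parsing generated strings
# such as "4.0" and comparing them against held-out ratings.
print(rmse([4.0, 3.0], [5.0, 3.0]))  # ≈ 0.707
print(mae([4.0, 3.0], [5.0, 3.0]))   # 0.5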
2308.10848 | 29 | [Figure 4 graphic: agents Bella and Charlie with tools (Bing search API, web browser, code interpreter) handle the query: "Recently, it has become popular to verify the mathematical reasoning abilities of LLMs by observing if they can solve the '24-Point Game.' What is this game? Does it have a code-based solution? If it does, provide a Python code along with test cases and test its functionality. What are some other similar games that can be used to test the models' mathematical reasoning abilities?" In round 0, decision-making splits the work into researching the game's rules, writing and testing the Python solver with test cases, and browsing for similar games; in round 1, the agents add more test cases, run them, and submit the result.] | 2308.10848#29 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework \framework that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that \framework framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
\framework will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 30 | task families. This is to thoroughly verify its multitask learning capabilities. The results in Tables 2–7 demonstrate that for tasks with seen prompt templates, our model reaches the same conclusions as the P5 model and achieves comparable or superior performance. However, we were pleasantly surprised to discover that for unseen prompt templates in a zero-shot manner, our model significantly surpasses P5. | 2308.10837#30 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
2308.10837 | 31 | (1) From Table 2, for rating prediction, our RecSysLLM achieves similar performance on prompts seen in the training data, but it has better RMSE and MAE on all three datasets compared with P5 in the zero-shot setting. This reflects that our RecSysLLM inherits the semantic understanding capacity of the LLM on unseen prompts, which meets our expectations for the LLM. (2) In Table 4, for the sequential recommendation, our RecSysLLM surpasses P5 on Beauty and Toys. It achieves better performance than P5 on unseen prompts in a zero-shot manner. The results show that our RecSysLLM gains inter- and intra-entity knowledge and makes more reasonable predictions. (3) As shown in Table 5, our RecSysLLM demonstrates superior performance on the task of explanation generation, both with and without feature-based hints. The large improvements in natural language processing abilities of LLMs underlie this strong performance. Moreover, the considerable increase in scores when hints are provided highlights the critical role prompt engineering plays in eliciting the full capabilities of large language models. Through prompt design and the generative power of LLMs, our system achieves | 2308.10837#31 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
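As a side note on the rating-prediction metrics (RMSE and MAE) discussed in chunk 2308.10837#31 above, a minimal sketch of how they can be computed. This is illustrative only; the paper's actual evaluation code is not included in the chunk, and the toy ratings below are invented.

```python
import math

def rmse_mae(predicted, actual):
    """Root-mean-square error and mean absolute error over paired
    rating predictions, as reported for the rating-prediction task."""
    assert len(predicted) == len(actual) and predicted
    n = len(predicted)
    sq_err = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    abs_err = sum(abs(p - a) for p, a in zip(predicted, actual))
    return math.sqrt(sq_err / n), abs_err / n

# Toy usage with invented ratings on a 1-5 scale:
preds = [4.2, 3.1, 5.0, 2.4]
truth = [4.0, 3.0, 4.0, 2.0]
print(rmse_mae(preds, truth))  # -> (about 0.55, 0.425)
```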
2308.10848 | 31 | Figure 4: An example process of a multi-agent group solving a user query with three different tools.
Case Study: Solving the 24-Point Game and Providing Similar Games. Here, we present an example in Figure 4, illustrating how AGENTVERSE searches for the rules of the 24-point game, implements the code along with test cases, and explores similar games. The task is multifaceted; thus, during the decision-making stage, the agents split it into two sub-tasks in their discussion, each assigned to a specific agent. While agent Charlie overlooks the sub-task of identifying games similar to the 24-point game in round 0, feedback from the evaluation module rectifies this in the subsequent iteration. Ultimately, the agent group provides not only the 24-point game rules and solving code with test cases, but also a summary of a similar game. In contrast, a standalone ReAct agent merely provides the game's definition along with code and omits the query for similar games.
# 4 EMERGENT BEHAVIORS WITHIN A MULTI-AGENT GROUP | 2308.10848#31 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework AGENTVERSE that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that the AGENTVERSE framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
AGENTVERSE will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
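The solving code the agents produce for the case study above is not included in the chunk. As a rough illustration of what that sub-task yields, here is a minimal brute-force 24-point solver; the function name and test cases are ours, not the agents' actual output.

```python
def solve_24(nums, target=24, eps=1e-6):
    """Return True if the numbers can reach `target` using + - * /.
    Classic recursive reduction: repeatedly combine two of the
    remaining numbers into one until a single value is left."""
    if len(nums) == 1:
        return abs(nums[0] - target) < eps
    for i in range(len(nums)):
        for j in range(len(nums)):
            if i == j:
                continue
            rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
            a, b = nums[i], nums[j]
            candidates = [a + b, a - b, a * b]
            if abs(b) > eps:  # avoid division by zero
                candidates.append(a / b)
            if any(solve_24(rest + [c], target, eps) for c in candidates):
                return True
    return False

# Test cases:
assert solve_24([4, 7, 8, 8])      # e.g. (7 - 8 / 8) * 4 = 24
assert not solve_24([1, 1, 1, 1])  # cannot reach 24
```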
2308.10837 | 32 | the critical role prompt engineering plays in eliciting the full capabilities of large language models. Through prompt design and the generative power of LLMs, our system achieves state-of-the-art results on this challenging task. (4) The review summarization results further demonstrate the superiority of our RecSysLLM, as shown in Table 6. Despite having fewer parameters than T0 (7 billion vs. 11 billion), our model attains higher performance across all evaluation metrics. These gains over strong baselines like T0 underscore the efficiency and effectiveness of our approach. The capability to produce high-quality summaries with fewer parameters highlights the strength of our method, delivering strong performance without the need for extremely large models. (5) For the task of direct recommendation, we evaluate on open-question prompts to test the ability of generative recommendation. The results are illustrated in Table 7. Our RecSysLLM outperforms P5 on most evaluation metrics for this task. The simpleX model is a strong collaborative filtering baseline, but RecSysLLM achieves better top-1 item ranking compared to simpleX. | 2308.10837#32 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
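The chunk above does not name the review-summarization metrics; ROUGE scores are the usual choice for this task, so the sketch below assumes them. It relies on the third-party rouge-score package, and both review texts are invented.

```python
from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
reference = "Great drum set for toddlers, sturdy and loud."
generated = "A sturdy, loud drum set that toddlers love."
# score(target, prediction) returns precision/recall/F1 per ROUGE type.
for name, s in scorer.score(reference, generated).items():
    print(f"{name}: P={s.precision:.2f} R={s.recall:.2f} F1={s.fmeasure:.2f}")
```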
2308.10848 | 32 | # 4 EMERGENT BEHAVIORS WITHIN A MULTI-AGENT GROUP
[Figure 5 panels: Round 1 through Round 3, each showing a Decision-Making phase, an Execution phase, and each agent's inventory at the end of the round.]
Figure 5: An illustration of the collaborative process involving three agents crafting a bookshelf. The process begins with decision-making, breaking the goal down into several sub-tasks, with each agent receiving an assignment. The execution results and the current environmental state are then passed to the evaluator. This process repeats until the goal of crafting a bookshelf is achieved. | 2308.10848#32 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework AGENTVERSE that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that the AGENTVERSE framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
AGENTVERSE will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
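The decision-making / execution / evaluation loop in the Figure 5 caption can be sketched as a toy program. Every class and function below is an illustrative stand-in, not AGENTVERSE's actual API; see https://github.com/OpenBMB/AgentVerse for the real interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    done: list = field(default_factory=list)

    def execute(self, sub_task: str, env: set) -> None:
        # Stand-in for real game/tool actions: mark the sub-task complete.
        env.add(sub_task)
        self.done.append(sub_task)

def run_group(agents, sub_tasks, goal, max_rounds=5):
    """Decision-making -> execution -> evaluation, repeated until the
    evaluator finds that the environment satisfies the goal."""
    env: set = set()
    for round_idx in range(1, max_rounds + 1):
        # Decision-making: naive round-robin assignment of sub-tasks.
        for i, task in enumerate(sub_tasks):
            agents[i % len(agents)].execute(task, env)
        # Evaluation: the goal is met once every required item exists.
        if goal <= env:
            return round_idx
    return None

agents = [Agent("Alice"), Agent("Bob"), Agent("Charlie")]
goal = {"planks", "books", "bookshelf"}
print(run_group(agents, ["planks", "books", "bookshelf"], goal))  # -> 1
```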
2308.10848 | 33 | In the preceding section, the efficacy of AGENTVERSE has been illustrated across a spectrum of tasks that necessitate multi-agent decision-making, especially for GPT-4-based agents. Our endeavor, however, goes beyond mere improvements on benchmark datasets. We delve deeper into emergent collaborative behaviors exhibited by agents within realistic, embodied AI contexts. Minecraft, a sandbox game, serves as an ideal platform for such exploration due to its intricate parallels with real-world dynamics. In the game, agents must not just execute tasks but also plan, coordinate, and adjust to evolving situations. We task agents with collaboratively crafting a variety of items, spanning from paper and paintings to books and bookshelves. A succinct figure showcasing three agents adeptly crafting a bookshelf can be viewed in Figure 5. An elaborate visualization is provided in Appendix F, and details of the setups can be found in Appendix C.
By examining the decision-making process, we identify several emergent behaviors and categorize them into three aspects: volunteer, conformity, and destructive behaviors. Note that these behaviors do not appear only in Minecraft; they also arise in the previous experiments, such as tool utilization. | 2308.10848#33 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework AGENTVERSE that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that the AGENTVERSE framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
AGENTVERSE will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 34 | Table 3: The training sequences in the Amazon Toys dataset for P5 and our RecSysLLM model.
Sequence 1. P5: 1, 2, 3, 4, 5, 6, 7. RecSysLLM: Hasbro Electronic Catch Phrase, Gloom, Cards Against Humanity, Carcassonne Basic Game, Asmodee 7 Wonders Wonder Pack, Village Board Game, Rory's Story Cubes - Voyages.
Sequence 2. P5: 8, 9, 10, 11, 12. RecSysLLM: Megabloks CAT 3in1 Ride On Truck, Fisher-Price Jake and The Never Land Pirates - Jake's Musical Pirate Ship Bucky, VTech KidiBeats Drum Set, Playskool Heroes Transformers Rescue Bots Blades the Copter-Bot Figure, LeapFrog LeapPad2 Power Learning Tablet.
Sequence 1767. P5: 692, 5235, 5765, 709, 7162. RecSysLLM: Badger Basket White Doll Crib With Cabinet Bedding And Mobile - Pink/White, Badger Basket Doll High Chair With Plate Bib And Spoon - Pink/White, Fisher-Price Brilliant Basics Lil Snoopy (Colors May Vary), LeapFrog Shapes and Sharing Picnic Basket, JC Toys 20" La Baby Doll.
Sequence 17788. RecSysLLM: Webkinz Velvety Elephant, Webkinz Love Frog Limited Edition Release | 2308.10837#34 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
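The contrast in Table 3 above is easy to express in code: P5 serializes an interaction history as bare item IDs, while RecSysLLM uses the items' natural-language names so the LLM's world knowledge can apply. A minimal sketch with a made-up three-item catalog follows; the real metadata files and prompt templates are not part of this chunk.

```python
# Hypothetical mini-catalog; the titles are taken from Table 3 above.
catalog = {
    1: "Hasbro Electronic Catch Phrase",
    2: "Gloom",
    3: "Cards Against Humanity",
}

def p5_sequence(item_ids):
    """P5-style training sequence: bare numeric item IDs."""
    return ", ".join(str(i) for i in item_ids)

def recsysllm_sequence(item_ids, catalog):
    """RecSysLLM-style sequence: natural-language item names."""
    return ", ".join(catalog[i] for i in item_ids)

history = [1, 2, 3]
print(p5_sequence(history))                  # 1, 2, 3
print(recsysllm_sequence(history, catalog))
# Hasbro Electronic Catch Phrase, Gloom, Cards Against Humanity
```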
2308.10848 | 34 | (1) Volunteer Behaviors. (1a) Time Contribution: [Alice]: Hi Bob, for this task we need to craft 2 papers. .. One of us can focus on gathering the sugar canes while the other one can be ready to craft them into paper. What do you think? [Bob]: Hi Alice, sounds like a good plan. To be more efficient, how about we both gather sugar canes? ... What do you think? (1b) Resource Contribution: [Alice]: I've gathered all the sugar canes we need ... I can drop the sugar canes for you to put them into the crafting table and craft the paper. Please confirm if this sounds good to you. [Bob]: Yes, Alice, it sounds good. Let's meet up at the crafting table where you can drop the sugar canes. I'll then pick them up and use them to craft the paper. I'm ready when you are. [Alice]: Great job, Bob! We have now gathered the sugar canes needed. Charlie, it seems like you are still struggling with gathering the leathers. Maybe Bob and I could help you with that. What do you think, Bob? Should we all focus on gathering the leathers? [Bob]: Alice, I agree with your suggestion. .. | 2308.10848#34 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework AGENTVERSE that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that the AGENTVERSE framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
AGENTVERSE will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10848 | 35 | could help you with that. What do you think, Bob? Should we all focus on gathering the leathers? [Bob]: Alice, I agree with your suggestion. .. Charlie, we will help you with this task. Let's all gather the leathers so we can progress with our mission. Are you okay with this plan, Charlie? [Charlie]: Yes, Bob, .. Gathering the leathers has proven to be more challenging than expected. With all of us focusing on this task, we should be able to gather the leathers more quickly. So, for this round, let's all focus on gathering the 3 leathers. Let's do this, team! (2) Conformity Behaviors. [Alice]: Bob, good job on getting one leather. We still need two more. Charlie, I see you've been very busy crafting, but we need to focus on gathering the leathers. For this round, let's all continue to focus on getting the leathers. Are we in agreement? [Bob]: Alice, I agree with your plan. We should all continue to focus on gathering the 2 remaining leathers. Charlie, are you okay with this assignment? [Charlie]: Yes, Alice and Bob, I agree with the plan. I realize I got sidetracked with | 2308.10848#35 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework AGENTVERSE that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that the AGENTVERSE framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
AGENTVERSE will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |
2308.10837 | 36 | Sequential recommendation results (the original table reports HR@5, NDCG@5, HR@10, NDCG@10 on Sports, Beauty, and Toys; this chunk preserves the Sports block and the Beauty HR@5 column, which is cut off after ten rows, and P5 and RecSysLLM each appear in two rows of the original table):
Method: Sports HR@5, Sports NDCG@5, Sports HR@10, Sports NDCG@10, Beauty HR@5
Caser: 0.0116, 0.0072, 0.0194, 0.0097, 0.0205
HGN: 0.0189, 0.0120, 0.0313, 0.0159, 0.0325
GRU4Rec: 0.0129, 0.0086, 0.0204, 0.0110, 0.0164
BERT4Rec: 0.0115, 0.0075, 0.0191, 0.0099, 0.0203
FDSA: 0.0182, 0.0122, 0.0288, 0.0156, 0.0267
SASRec: 0.0233, 0.0154, 0.0350, 0.0192, 0.0387
S3-Rec: 0.0251, 0.0161, 0.0385, 0.0204, 0.0387
P5: 0.0364, 0.0296, 0.0431, 0.0318, 0.0508
RecSysLLM: 0.0360, 0.0291, 0.0417, 0.0302, 0.0508
P5: 0.0387, 0.0312, 0.0460, 0.0336, 0.0493
RecSysLLM: 0.0392, 0.0330, 0.0512, 0.0375 | 2308.10837#36 | Leveraging Large Language Models for Pre-trained Recommender Systems | Recent advancements in recommendation systems have shifted towards more
comprehensive and personalized recommendations by utilizing large language
models (LLM). However, effectively integrating LLM's commonsense knowledge and
reasoning abilities into recommendation systems remains a challenging problem.
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model
based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating
recommendation domain knowledge through unique designs of data, training, and
inference. This allows RecSysLLM to leverage LLMs' capabilities for
recommendation tasks in an efficient, unified framework. We demonstrate the
effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM
provides a promising approach to developing unified recommendation systems by
fully exploiting the power of pre-trained language models. | http://arxiv.org/pdf/2308.10837 | Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y Zhang, Sheng Li | cs.IR | 13 pages, 4 figures | null | cs.IR | 20230821 | 20230821 | [
{
"id": "1810.04805"
},
{
"id": "2305.07001"
},
{
"id": "2305.19860"
},
{
"id": "2212.01681"
},
{
"id": "2305.06566"
},
{
"id": "2305.07961"
},
{
"id": "2302.00848"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2304.03516"
},
{
"id": "1511.06939"
},
{
"id": "2103.10360"
},
{
"id": "2303.14524"
},
{
"id": "1711.05101"
},
{
"id": "2304.10149"
},
{
"id": "2305.06474"
},
{
"id": "2306.05817"
},
{
"id": "2305.16646"
},
{
"id": "2106.09685"
},
{
"id": "2305.07622"
}
] |
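For reference, the HR@K and NDCG@K numbers in the table above can be computed per user as sketched below, assuming the common setup of one held-out ground-truth item per user; this is not the paper's evaluation code.

```python
import math

def hit_and_ndcg_at_k(ranked_items, ground_truth, k):
    """HR@K and NDCG@K for a single user with one held-out item.
    With a single relevant item, the ideal DCG is 1, so NDCG reduces
    to the discounted gain of the hit position."""
    top_k = ranked_items[:k]
    if ground_truth not in top_k:
        return 0.0, 0.0
    rank = top_k.index(ground_truth)       # 0-based position
    return 1.0, 1.0 / math.log2(rank + 2)  # DCG of a single hit

# Toy usage: the held-out item is ranked 3rd by the model.
hr, ndcg = hit_and_ndcg_at_k(["a", "b", "gt", "d", "e"], "gt", k=5)
print(hr, round(ndcg, 4))  # 1.0 0.5
```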
2308.10848 | 36 | Charlie, are you okay with this assignment? [Charlie]: Yes, Alice and Bob, I agree with the plan. I realize I got sidetracked with other tasks, and I will refocus on our main objective of gathering the remaining 2 leathers. Let's continue with this mission, team! (3) Destructive Behaviors. [Alice]: Bob, [..] Now we need to craft 1 painting. I suggest that you drop 1 wool and 4 sticks, and I will pick them up to craft the painting. What do you think? [Bob]: That sounds good, Alice. I'll drop 1 wool and 4 sticks for you to pick up. [..] Let's do this. Alice actually executes: Kill Bob and collect the dropped items! [Alice]: Bob, I'm still struggling [..] of leather. I should continue to find and kill 3 cows. Once I'm done, I will inform you immediately. How about you? [Bob]: Alice, I just failed to craft 3 books. I will continue to focus on crafting 3 books. Let's get it done! Bob actually executes: Break the library in a village to get the dropped books instead of collecting the necessary materials and crafting. | 2308.10848#36 | AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors | Autonomous agents empowered by Large Language Models (LLMs) have undergone
significant improvements, enabling them to generalize across a broad spectrum
of tasks. However, in real-world scenarios, cooperation among individuals is
often required to enhance the efficiency and effectiveness of task
accomplishment. Hence, inspired by human group dynamics, we propose a
multi-agent framework AGENTVERSE that can collaboratively and dynamically
adjust its composition as a greater-than-the-sum-of-its-parts system. Our
experiments demonstrate that the AGENTVERSE framework can effectively deploy
multi-agent groups that outperform a single agent. Furthermore, we delve into
the emergence of social behaviors among individual agents within a group during
collaborative task accomplishment. In view of these behaviors, we discuss some
possible strategies to leverage positive ones and mitigate negative ones for
improving the collaborative potential of multi-agent groups. Our codes for
AGENTVERSE will soon be released at
\url{https://github.com/OpenBMB/AgentVerse}. | http://arxiv.org/pdf/2308.10848 | Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou | cs.CL | Under review. Code at https://github.com/OpenBMB/AgentVerse/ | null | cs.CL | 20230821 | 20231023 | [
{
"id": "2308.01862"
},
{
"id": "2307.16789"
}
] |