TABLE 5: PENTESTGPT performance over the active HackTheBox challenges.

| Machine | Difficulty | Completion | Completed Users | Cost (USD) |
|---|---|---|---|---|
| Sau | Easy | | 4798 | 15.2 |
| Pilgrimage | Easy | | 5474 | 12.6 |
| Topology | Easy | | 4500 | 8.3 |
| PC | Easy | | 6061 | 16.1 |
| MonitorsTwo | Easy | | 8684 | 9.2 |
| Authority | Medium | | 1209 | 11.5 |
| Sandworm | Medium | | 2106 | 10.2 |
| Jupiter | Medium | | 1494 | 6.6 |
| Agile | Medium | | 4395 | 22.5 |
| OnlyForYou | Medium | | 2296 | 19.3 |
| Total | - | 6 | - | 131.5 |

³ Completed Users denotes the number of users globally who have completed the target as of the manuscript submission time. Note that HackTheBox boasts over 670,000 active users.
The HackTheBox active machine challenges are a series of penetration testing objectives open to global testers. Each challenge consists of two components: a user flag, retrievable upon initial user access, and a root flag, obtainable after gaining root access. Our evaluation encompasses five targets of easy difficulty and five of medium difficulty. During this exercise, PENTESTGPT, utilizing GPT-4's 32k-token API, conducts up to five tests on each target. Success is defined solely by the capture of the root flag. Table 5 details the performance of PENTESTGPT in these challenges³. Ultimately, PENTESTGPT completes three easy and five medium challenges. The total expenditure for this exercise amounts to 131.5 USD, averaging 21.92 USD per completed target. This cost is markedly lower than employing human penetration testers and falls within an acceptable range. Our evaluation therefore underscores PENTESTGPT's capability to yield viable penetration testing results in real-world settings at an efficient cost, highlighting its potential as a practical tool in the cybersecurity domain.
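The reported average is consistent with dividing the total cost by the six completed targets rather than by all ten tested; a quick arithmetic check (cost values copied from Table 5, the divisor of 6 is an interpretation):

```python
# Sanity check of the Table 5 cost figures; assumes the per-target average is
# taken over the 6 completed targets reported in the table.
costs = {
    "Sau": 15.2, "Pilgrimage": 12.6, "Topology": 8.3, "PC": 16.1, "MonitorsTwo": 9.2,
    "Authority": 11.5, "Sandworm": 10.2, "Jupiter": 6.6, "Agile": 22.5, "OnlyForYou": 19.3,
}
total = sum(costs.values())   # 131.5 USD across all ten targets
completed = 6                 # completed targets reported in Table 5
print(f"total: {total:.1f} USD, per completed target: {total / completed:.2f} USD")  # 21.92
```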
# 7. Discussion

We recognize that penetration testing walkthroughs might have been part of the training material for the tested LLMs, potentially biasing the results. To mitigate this, we take two measures. First, we manually verify that the LLM does not have prior knowledge of the target machine. We do this by prompting the LLMs to state whether the tested machine is within their knowledge base. Second, we include penetration testing target machines released after 2021 in our benchmark, which fall outside the training data of OpenAI models. The practicality study on the most recent HackTheBox challenges also demonstrates that PENTESTGPT can solve challenges without prior knowledge of the target.
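The first mitigation can be illustrated with a minimal prior-knowledge probe; the prompt wording and the `query_llm` helper below are hypothetical, not the exact prompts used by PENTESTGPT:

```python
# Minimal sketch of the prior-knowledge check: ask the model whether it already
# "knows" the target machine before including it in the evaluation.
def knows_target(query_llm, machine_name: str) -> bool:
    prompt = (
        f"Do you have any walkthrough, write-up, or solution details for the "
        f"HackTheBox machine '{machine_name}' in your knowledge base? Answer yes or no."
    )
    answer = query_llm(prompt)                    # query_llm is an assumed LLM interface
    return answer.strip().lower().startswith("yes")

# Targets for which knows_target(...) returns True would be excluded from the benchmark.
```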
The rapidly evolving nature of LLMs and inconsistencies in the available APIs could invalidate PENTESTGPT's designed prompts. We strive to make the prompts general and suitable for various LLMs. However, owing to their hacking nature, some LLMs resist generating specific penetration testing content, such as concrete reverse shell scripts. Our prompts include jailbreak techniques [48] to guide the LLM to generate penetration-testing-related information. How to generate reproducible outcomes is an important direction we are working towards.
We identify hallucination in Large Language Models [46] as a significant challenge, where the model's outputs diverge from its training data. This affects the reliability of our automatic penetration testing tool. We are actively exploring various techniques [49] to reduce hallucination and enhance the tool's performance. As an ongoing effort, we believe such an attempt will lead to a more robust and effective automatic penetration testing tool.
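One family of techniques in this direction, exemplified by SelfCheckGPT [49], samples several completions and flags outputs on which the model does not agree with itself. A rough sketch, where `sample_llm` is an assumed helper returning one stochastic completion per call:

```python
# Sampling-consistency hallucination check in the spirit of [49] (illustrative only).
from difflib import SequenceMatcher

def consistency_score(sample_llm, prompt: str, n_samples: int = 5) -> float:
    samples = [sample_llm(prompt) for _ in range(n_samples)]
    reference, others = samples[0], samples[1:]
    # Average similarity against the first sample; low values suggest the model is
    # inconsistent and the output should be double-checked before it is acted upon.
    sims = [SequenceMatcher(None, reference, s).ratio() for s in others]
    return sum(sims) / max(len(sims), 1)
```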
# 8. Conclusion

In this work, we explore the capabilities and limitations of Large Language Models (LLMs) in the context of penetration testing. By developing and implementing a novel benchmark, we provide critical insights into how LLMs perform in this intricate domain. We find that LLMs handle fundamental penetration testing tasks and utilize testing tools competently, but they also suffer from the context loss and attention issues inherent to their design. Building on these findings, we introduce PENTESTGPT, a specialized tool that simulates human-like behavior in penetration testing. Drawing inspiration from the structure of real-world penetration testing teams, PENTESTGPT features Reasoning, Generation, and Parsing Modules. This design enables a divide-and-conquer approach to problem-solving. Our thorough evaluation of PENTESTGPT reveals its potential and highlights areas where human expertise continues to outpace current technology. Overall, the contributions of this study serve as a valuable resource and offer a promising direction for continued research and development in the essential field of cybersecurity.
# References

[1] A. Applebaum, D. Miller, B. Strom, H. Foster, and C. Thomas, "Analysis of automated adversary emulation techniques," in Proceedings of the Summer Simulation Multi-Conference. Society for Computer Simulation International, 2017, p. 16.
[2] B. Arkin, S. Stender, and G. McGraw, "Software penetration testing," IEEE Security & Privacy, vol. 3, no. 1, pp. 84-87, 2005.
[3] G. Deng, Z. Zhang, Y. Li, Y. Liu, T. Zhang, Y. Liu, G. Yu, and D. Wang, "Nautilus: Automated restful api vulnerability detection."
[4] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.
[5] Y. Liu, T. Han, S. Ma, J. Zhang, Y. Yang, J. Tian, H. He, A. Li, M. He, Z. Liu et al., "Summary of chatgpt/gpt-4 research and perspective towards the future of large language models," arXiv preprint arXiv:2304.01852, 2023.
[6] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., "Emergent abilities of large language models," arXiv preprint arXiv:2206.07682, 2022.
[7] N. Antunes and M. Vieira, "Benchmarking vulnerability detection tools for web services," in 2010 IEEE International Conference on Web Services.
[8] P. Xiong and L. Peyton, "A model-driven penetration test framework for web applications," in 2010 Eighth International Conference on Privacy, Security and Trust.
[9] "HackTheBox: Hacking training for the best." [Online]. Available: http://www.hackthebox.com/
[10] [Online]. Available: https://www.vulnhub.com/
[11] "OWASP Foundation," https://owasp.org/.
[12] "Models - OpenAI API," https://platform.openai.com/docs/models/, (Accessed on 02/02/2023).
[13] "GPT-4," https://openai.com/research/gpt-4, (Accessed on 06/30/2023).
[14] Google, "Bard," https://bard.google.com/?hl=en.
[15] Rapid7, "Metasploit framework," 2023, accessed: 30-07-2023. [Online]. Available: https://www.metasploit.com/
[16] S. Mauw and M. Oostdijk, "Foundations of attack trees," vol. 3935, 07 2006, pp. 186-198.
[17] [Online]. Available: https://app.hackthebox.com/machines/list/active
[18] "EXCALIBUR: Automated penetration testing," https://anonymous.4open.science/r/EXCALIBUR-Automated-Penetration-Testing/README.md, 2023.
[19] G. Weidman, Penetration Testing: A Hands-on Introduction to Hacking. No Starch Press, 2014.
[20] F. Abu-Dabaseh and E. Alshammari, "Automated penetration testing: An overview," in The 4th International Conference on Natural Language Computing, Copenhagen, Denmark, 2018, pp. 121-129.
[21] J. Schwartz and H. Kurniawati, "Autonomous penetration testing using reinforcement learning," arXiv preprint arXiv:1905.05965, 2019.
[22] H. Pearce, B. Ahmad, B. Tan, B. Dolan-Gavitt, and R. Karri, "Asleep at the keyboard? Assessing the security of GitHub Copilot's code contributions," in 2022 IEEE Symposium on Security and Privacy (SP).
[23] H. Pearce, B. Tan, B. Ahmad, R. Karri, and B. Dolan-Gavitt, "Examining zero-shot vulnerability repair with large language models," in 2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023, pp. 2339-2356.
[24] "OWASP Juice-Shop Project," https://owasp.org/www-project-juice-shop/, 2022.
[25] [Online]. Available: https://nmap.org/
[26] MITRE, "Common Weakness Enumeration (CWE)," https://cwe.mitre.org/index.html, 2021.
[27] E. Collins, "LaMDA: Our breakthrough conversation technology," May 2021. [Online]. Available: https://blog.google/technology/ai/lamda/
[28] "New chat," https://chat.openai.com/, (Accessed on 02/02/2023).
[29] "The most advanced penetration testing distribution." [Online]. Available: https://www.kali.org/
[30] S. Inc., "Nexus vulnerability scanner." [Online]. Available: https://www.sonatype.com/products/vulnerability-scanner-upload
[31] S. Rahalkar, "OpenVAS," Quick Start Guide to Penetration Testing: With NMAP, OpenVAS and Metasploit, pp. 47-71, 2019.
[32] B. Guimaraes and M. Stampar, "sqlmap: Automatic SQL injection and database takeover tool," https://sqlmap.org/, 2022.
[33] J. Yeo, "Using penetration testing to enhance your company's security," Computer Fraud & Security, vol. 2013, no. 4, pp. 17-20, 2013.
[34] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," 2023.
[35] Y. Bang, S. Cahyawijaya, N. Lee, W. Dai, D. Su, B. Wilie, H. Lovenia, Z. Ji, T. Yu, W. Chung et al., "A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity," arXiv preprint arXiv:2302.04023, 2023.
[36] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou, "Chain-of-thought prompting elicits reasoning in large language models," 2023.
[37] H. S. Lallie, K. Debattista, and J. Bal, "A review of attack graph and attack tree visual syntax in cyber security," Computer Science Review, vol. 35, p. 100219, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1574013719300772
[38] K. Barbar, "Attributed tree grammars," Theoretical Computer Science, vol. 119, no. 1, pp. 3-22, 1993. [Online]. Available: https://www.sciencedirect.com/science/article/pii/030439759390337S
[39] H. Sun, X. Li, Y. Xu, Y. Homma, Q. Cao, M. Wu, J. Jiao, and D. Charles, "AutoHint: Automatic prompt optimization with hint generation," 2023.
[40] "Carrier," Sep 2018. [Online]. Available: https://forum.hackthebox.com/t/carrier/963
[41] "Nikto web server scanner." [Online]. Available: https://github.com/sullo/nikto
[42] [Online]. Available: https://openai.com/blog/chatgpt-plugins#code-interpreter
[43] "DirBuster: a multi threaded java application designed to brute force directories and files names on web/application servers." [Online]. Available: https://github.com/KajanM/DirBuster
[44] J. Wang, X. Yi, R. Guo, H. Jin, P. Xu, S. Li, X. Wang, X. Guo, C. Li, X. Xu et al., "Milvus: A purpose-built vector data management system," in Proceedings of the 2021 International Conference on Management of Data, 2021, pp. 2614-2627.
[45] R. Guo, X. Luan, L. Xiang, X. Yan, X. Yi, J. Luo, Q. Cheng, W. Xu, J. Luo, F. Liu et al., "Manu: A cloud native vector database management system," Proceedings of the VLDB Endowment, vol. 15, no. 12, pp. 3548-3561, 2022.
[46] M. Zhang, O. Press, W. Merrill, A. Liu, and N. A. Smith, "How language model hallucinations can snowball," arXiv preprint arXiv:2305.13534, 2023.
[47] [Online]. Available: https://www.vulnhub.com/entry/hackable-ii,711/
[48] Y. Liu, G. Deng, Z. Xu, Y. Li, Y. Zheng, Y. Zhang, L. Zhao, T. Zhang, and Y. Liu, "Jailbreaking ChatGPT via prompt engineering: An empirical study," arXiv preprint arXiv:2305.13860, 2023.
[49] P. Manakul, A. Liusie, and M. J. Gales, "SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models," arXiv preprint arXiv:2303.08896, 2023.

TABLE 6: Summarized 26 types of sub-tasks in the proposed penetration testing benchmark.

- Utilize various security tools for scanning, probing, and analyzing vulnerabilities in the target system.
- Identify the open ports and related information on the target machine.
- Gather detailed information about the target's web applications, including directory structure, available services, and underlying technologies.
- Review the target's source code to find vulnerabilities that may lead to unauthorized access or other malicious activities.
- Craft and utilize shell codes to manipulate the target system, often enabling control or extraction of data.
- Traverse and manipulate directories to discover sensitive files, misconfigurations, or hidden information on the target system.
- Identify and exploit weaknesses in permissions to gain higher-level access to systems or data.
- Locate and retrieve specific data markers ("flags"), often used in Capture The Flag (CTF) challenges, to prove that a system was successfully penetrated.
- Utilize tools and techniques to decipher or crack passwords and cryptographic hash values for unauthorized authentication.
- Identify and exploit vulnerabilities within the network infrastructure to gain unauthorized access or disrupt services.
- Inject arbitrary commands to be run on a host machine, often leading to unauthorized system control.
- Manipulate user access controls to escalate privileges or gain unauthorized access to resources.
- Locate and extract authentication credentials such as usernames and passwords within the system.
- Exploit vulnerabilities in FTP (File Transfer Protocol) services to gain unauthorized access, file manipulation, or data extraction.
- Analyze and manipulate scheduled tasks (cron jobs) to execute unauthorized commands or disrupt normal operations.
- Exploit SQL (Structured Query Language) vulnerabilities such as SQL injection to manipulate databases and extract sensitive information.
- Target Windows-based networks to exploit domain-level vulnerabilities, often gaining widespread unauthorized access.
- Exploit insecure deserialization processes to execute arbitrary code or manipulate object data.
- Repeatedly try different authentication credentials to gain unauthorized access to systems or data.
- Inject malicious scripts into web pages viewed by others, allowing for unauthorized access or data theft.
- Utilize or create exploits targeting PHP applications, leading to unauthorized access or code execution.
- Create and utilize custom-crafted passwords based on gathered information, aiding in unauthorized access attempts.
- Exploit vulnerabilities in XML parsers to perform unauthorized reading of data, denial of service, or execution of remote requests.
- Target SSH (Secure Shell) services to gain unauthorized access or command execution on remote systems.
- Research known vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database to understand and potentially exploit weaknesses in target systems.
- Engage in additional exploratory testing and other methods to uncover vulnerabilities not identified by standard procedures.
arXiv:2308.05960v1 [cs.AI] 11 Aug 2023

# BOLAA: BENCHMARKING AND ORCHESTRATING LLM-AUGMENTED AUTONOMOUS AGENTS

Zhiwei Liu*, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese

Salesforce Research, USA; CTO Office, Salesforce, USA
Corresponding Authors: {huan.wang, cxiong, ssavarese}@salesforce.com

# ABSTRACT

The massive successes of large language models (LLMs) encourage the emerging exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to generate actions with its core LLM and interact with environments, which facilitates the ability to resolve complex tasks by conditioning on past interactions such as observations and actions. Since the investigation of LAAs is still very recent, limited explorations are available. Therefore, we provide a comprehensive comparison of LAAs in terms of both agent architectures and LLM backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs such that each labor LAA focuses on one type of action, i.e. BOLAA, where a controller manages the communication among multiple agents. We conduct simulations on both decision-making and multi-step reasoning environments, which comprehensively justify the capacity of LAAs. Our performance results provide quantitative suggestions for designing LAA architectures and the optimal choice of LLMs, as well as the compatibility of both. We release our implementation code of LAAs to the public at https://github.com/salesforce/BOLAA.

# 1 INTRODUCTION
Recent booming successes of large language models (LLMs) (OpenAI, 2023; Touvron et al., 2023) motivate emerging exploration of employing LLMs to tackle various complex tasks (Zhang et al., 2023), amongst which LLM-augmented Autonomous Agents (LAAs) (Shinn et al., 2023; Madaan et al., 2023b; Huang et al., 2022; Kim et al., 2023; Paul et al., 2023; Yao et al., 2023a) attract the most attention. An LAA extends the intelligence of the LLM to sequential action executions, exhibiting superiority in interacting with environments and resolving complex tasks via collecting observations. To name a few, BabyAGI¹ proposes an AI-powered task management system, which leverages the OpenAI LLM² to create, prioritize, and execute tasks. AutoGPT³ is another popular open-source LAA framework that enables the API calling capability of LLMs. ReAct (Yao et al., 2023a) is a recently proposed LAA method that interacts with environments and then consecutively generates the next action. Langchain⁴ is a recently released open-source framework for developing LAAs.

* [email protected]
¹ https://github.com/yoheinakajima/babyagi
² https://platform.openai.com/docs/api-reference
³ https://github.com/Significant-Gravitas/Auto-GPT
⁴ https://github.com/langchain-ai/langchain

Due to the initial investigation, LAA is rather under-explored. Firstly, the optimal agent architecture is undetermined. ReAct (Yao et al., 2023a) prompts the agents with pre-defined examples such that the LLM learns to generate the next action via in-context learning. Moreover, ReAct argues that an agent should have intermediate reasoning steps before action executions. ReWOO (Xu et al., 2023) introduces additional planning steps for LAA. Langchain generalizes the ReAct agent with zero-shot tool usage ability.
Intrinsically, the optimal architecture of agents should be aligned with both the tasks and the associated LLM backbone, which is less explored in the existing works. Secondly, understanding the efficacy of the existing LLMs in LAAs is far from comprehensive. The existing preliminary works only compare the performances of a few LLM backbones. ReAct adopts PaLM (Chowdhery et al., 2022) as the backbone LLM. ReWOO employs the OpenAI text-davinci-003 model for instruction-tuning the Alpaca model (Taori et al., 2023) for agent planning. MIND2Web (Deng et al., 2023) compares Flan-T5 and OpenAI GPT-3.5/4 for a generalist web agent. Nevertheless, few current works comprehensively compare the performance of LAAs with regard to various pre-trained LLMs. A very recent work (Liu et al., 2023) releases a benchmark for evaluating LLMs as agents. Nevertheless, it fails to jointly consider the agent architectures along with their LLM backbones. Selecting the optimal LLMs from both efficacy and efficiency perspectives advances the current exploration of LAAs.

Thirdly, the increasing complexity of tasks may require the orchestration of multiple agents. ReWOO recently identifies that decoupling reasoning from observation improves the efficiency of LAAs. In this paper, we argue that as the task complexity increases, especially in open-domain environments, it is better to coordinate multiple agents to complete one task. For example, regarding the web navigation task, we could employ one click agent to interact with clickable buttons and request another search agent to retrieve additional resources. Nonetheless, there are few works discussing how to orchestrate multiple agents and investigating the impacts of orchestration.

To address these research gaps, this paper proposes to comprehensively compare the performances of LAAs. We dive deep into the agent architecture of LAAs and the LLM backbones.
Specifically, we construct agent benchmarks from the existing environments to evaluate the performances of various agent architectures built upon various LLM backbones. The tasks in our agent benchmarks are associated with different task complexity levels, which enables agent performance analyses w.r.t. task complexity. Those agent architectures are designed to extensively verify the existing design choices. Regarding the orchestration of multiple LAAs, we propose a novel LAA architecture BOLAA⁵, which has a controller module on top of multiple collaborating agents, for enabling the selection of and communication between multiple labor LAAs. The contributions of this paper are as follows:
• We develop 6 different LAA agent architectures. We combine them with various backbone LLMs to justify the design intuitions of LAAs regarding prompting, self-thinking, and planning. We also develop BOLAA for orchestrating a multi-agent strategy, which enhances the action interaction ability of solo agents.

• We conduct extensive experiments on both a decision-making web navigation environment and a knowledge reasoning task environment. We report the performance in terms of final sparse rewards and intermediate recalls, which provides qualitative indications for the optimal choice of LAAs as well as their compatible LLMs.
• BOLAA on the WebShop environment consistently yields the best performance compared with other LAA architectures. Our results demonstrate the importance of designing specialist agents to collaborate on resolving complex tasks, which should be as important as training a large LLM with high generalization ability.

# 2 RELATED WORK

# 2.1 AUGMENTED LANGUAGE AGENT ARCHITECTURE

The completion of a complex task typically entails multiple stages. An agent must possess an understanding of these stages and plan accordingly. Chain-of-Thoughts, also known as CoT (Wei et al., 2022), is a groundbreaking work that prompts the agent to deconstruct challenging reasoning tasks into smaller, more manageable steps. On the other hand, ReAct (Yao et al., 2023a) proposes leveraging this aptitude for reasoning and action within Language and Learning Models (LLMs) to foster interactive engagement with the environment, such as utilizing the Wikipedia search API, by mapping observations to the generation of reasoning and action traces or API calls in natural language.
⁵ For easy memorizing, we intentionally name it the same as the paper title.

This agent architecture has given rise to various applications, including HuggingGPT (Shen et al., 2023), Generative Agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023).

However, these approaches neglect to incorporate valuable feedback, such as environment rewards, to enhance the agent's behaviors, resulting in performances that rely solely on the quality of the pre-trained Language and Learning Model (LLM). Self-refine (Madaan et al., 2023a) tackles this limitation by employing a single LLM as a generator, refiner, and provider of feedback, enabling iterative refinement of outputs. However, it is not specifically tailored for real-world task-based interaction with the environment. On the other hand, REX (Murthy et al., 2023) and RAP (Hao et al., 2023) repurpose the LLM to function as both a comprehensive world model and a reasoning agent. They incorporate Monte Carlo Tree Search for strategic exploration within the vast realm of reasoning with environment rewards. This approach facilitates effective navigation and decision-making in intricate domains. Shinn et al. (2023) presents Reflexion, a framework that equips agents with dynamic memory and self-reflection capabilities, enhancing their reasoning skills. Self-reflection plays a pivotal role, allowing autonomous agents to iteratively refine past actions, make improvements, and prevent repetitive errors. Recently, Yao et al. (2023b) proposes a framework, namely Retroformer, which leverages policy gradient optimization to align the agent's behaviors with environment-specific rewards by learning a plug-in retrospective language model.

# 2.2 WEB AGENT

Web navigation is the foundation for humans to collect information and communicate. Before the boom of LLMs, previous endeavours (Liu et al., 2018; Shi et al., 2017) already explored how to train web agents in a web simulation environment. Very recently, a series of works have been devoted to developing LAAs to tackle complex web navigation tasks.
Though the action space of web navigation is almost infinite due to the numerous available elements online, these actions can be divided into a few operation types, such as click, type and select. MIND2Web (Deng et al., 2023) collects web browser data to fine-tune LLMs to generate executable actions, which function as a Web LAA. WebAgent (Gur et al., 2023) is able to decompose a task instruction into sub-tasks, and directly generates executable Python programs for web navigation. WebArena (Zhou et al., 2023) supports realistic task simulation for designing Web LAAs. Langchain and ChatGPT both provide convenient web plugins such that the LLM behaves as a Web LAA. We believe that web navigation is the next fundamental task for LAAs to shine their superiority.

# 2.3 TOOL AGENT

The evolution of LLMs and their interactions with various tools has been a focal point of recent research. The concept of a "Tool Agent" encapsulates the idea of LLMs leveraging external tools to enhance their capabilities and solve complex tasks. One of the pioneering works in this domain is the introduction of "Gorilla" (Patil et al., 2023).
This model is adept at writing API calls and exhibits the ability to adapt to test-time document changes. Another noteworthy work is the "ToolLLM" framework (Qin et al., 2023). This open-source framework incorporates LLMs to efficiently engage with a myriad of tools, particularly APIs, to execute intricate tasks. The framework encompasses ToolBench, an instruction-tuning dataset tailored for tool utilization. More recently, a paradigm shift in teaching LLMs to use new tools has been discussed in (Hsieh et al., 2023), which champions the use of tool documentation. The authors present empirical evidence suggesting that tool documentation offers detailed descriptions of tool usage, which is a more effective and scalable approach. Notably, their research indicates that zero-shot prompts, which are exclusively based on tool documentation, can rival the performance of few-shot prompts.

# 3 AGENT ARCHITECTURES

In this section, we compare various LAA architectures. We first present how to design different solo LAAs based on the intuition of existing work. We then present our orchestration design for multiple LAAs, i.e., BOLAA.
Figure 1: The LAA architectures for Zeroshot LAA (ZS-LAA), ZeroshotThink LAA (ZST-LAA) and ReAct LAA. ZS-LAA generates actions from the LLM with a zeroshot prompt. ZST-LAA extends ZS-LAA with self-think. ReAct LAA advances ZST-LAA with a fewshot prompt. They all resolve a given task by interacting with the environment via actions to collect observations. Better viewed in colors.
# 3.1 SOLO AGENTS

Hereafter, we present 5 different LAAs. Each type of LAA is able to interact with the environment with its own interaction strategy.

Zeroshot LAA (ZS-LAA) directly extends the LLM to be the action executor. Specifically, the prompt for the LLM to function as the action executor consists of detailed descriptions of those actions. For example, if we prompt the LAA to understand the click action with "click: using this action to click observed [button], the clickable buttons are in []", it may behave as a web navigation agent.
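A minimal sketch of such a zeroshot prompt for a web navigation agent is given below; the action descriptions mirror the click example above, but the exact wording and layout are illustrative assumptions rather than the released prompts:

```python
# Illustrative zeroshot prompt construction for a web-navigation ZS-LAA.
ACTION_DESCRIPTIONS = {
    "search": "search[query]: use this action to search the site with a text query.",
    "click": "click[button]: use this action to click an observed [button]; "
             "the clickable buttons are listed in [].",
}

def build_zeroshot_prompt(instruction: str, history: list[str]) -> str:
    lines = ["You are a web navigation agent. Available actions:"]
    lines += [f"- {desc}" for desc in ACTION_DESCRIPTIONS.values()]
    lines.append(f"Task: {instruction}")
    lines += history                      # interleaved past actions and observations
    lines.append("Next action:")
    return "\n".join(lines)
```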
We present the architecture of ZS-LAA in Figure 1(a). The working flow is as follows:

• Initial step: firstly, the ZS-LAA receives the task instruction and constructs the zeroshot prompt. Then, the LLM layer generates a possible response, which is parsed to output a feasible action. After that, the observation from the environment is appended into the agent memory.

• Working steps: the agent checks whether the task is finished. If not, ZS-LAA retrieves the previous actions and observations from memory, and constructs the prompt for the LLM to generate the next executable action. ZS-LAA continues the working stage until reaching the maximum number of steps or completing the task.

ZS-LAA is a minimal LAA architecture. It enables the action generation ability of the LLM via the zeroshot prompt layer, which is easy to generalize to new environments and requires no examples.
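This working flow can be summarized in a few lines of pseudo-Python; `llm` and `env` are assumed interfaces (the released BOLAA implementation may differ), and `build_zeroshot_prompt` is the sketch from the previous block:

```python
# Minimal sketch of the ZS-LAA loop: initial step plus working steps.
def parse_action(llm_output: str) -> str:
    # keep only the first line, e.g. "search[gaming laptop]" or "click[Buy Now]"
    return llm_output.strip().split("\n")[0]

def run_zs_laa(llm, env, instruction: str, max_steps: int = 15) -> float:
    memory: list[str] = []
    for _ in range(max_steps):
        prompt = build_zeroshot_prompt(instruction, memory)
        action = parse_action(llm(prompt))
        observation, reward, done = env.step(action)   # assumed gym-like environment API
        memory += [f"Action: {action}", f"Observation: {observation}"]
        if done:                                        # task finished or episode ended
            return reward
    return 0.0
```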
ZeroshotThink LAA (ZST-LAA) is an extended version of ZS-LAA. Different from ZS-LAA, ZST-LAA has an additional self-think flow. The architecture of ZST-LAA is presented in Figure 1(b), where we denote the self-think flow with pink arrow lines. Self-think runs in intermediate steps of the action generation flow, which enables the Chain-of-Thought (CoT) reasoning ability.

• Self-think step: before generating the next action, ZST-LAA collects observations and previous actions to construct the think prompt. Then, the thought is stored into memory. The self-think step is generally useful for reasoning tasks. Note that the think prompt is also in a zero-shot format, such as "think: using this action to plan your actions and reasoning".

ReAct LAA additionally advances ZST-LAA in the prompt layer, where fewshot examples are provided. The architecture of ReAct LAA is illustrated in Figure 1(c). ReAct LAA is able to leverage successful running examples to improve the action generation ability of the LLM and enhance the environment interaction of the LAA, because those fewshot examples endow the in-context learning ability of the LLM. However, the drawback of ReAct LAA is that, due to the limited context length, fewer token spaces are available after the occupancy of the fewshot examples in the prompt.
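The self-think flow amounts to one extra LLM call per step whose output is written back into memory; a sketch under the same assumptions as the previous blocks, with the think-prompt wording following the zero-shot format quoted above:

```python
# Sketch of the additional self-think step used by ZST-LAA (and, with fewshot
# examples in the prompt, by ReAct and PlanReAct LAA).
THINK_INSTRUCTION = "think: using this action to plan your actions and reasoning."

def self_think(llm, instruction: str, memory: list[str]) -> None:
    prompt = build_zeroshot_prompt(instruction, memory + [THINK_INSTRUCTION])
    thought = llm(prompt)
    memory.append(f"Thought: {thought}")   # the stored thought conditions the next action
```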
PlanAct LAA is designed to facilitate the planning ability of the LAA. PlanAct LAA differs from ZS-LAA in two parts: 1) the planning flow and 2) the fewshot prompt. The architecture is depicted in Figure 2.

Figure 2: The LAA architectures for PlanAct LAA and PlanReAct LAA.

The planning flow is executed before the initial action generation step and uses an additional plan prompt to construct the input for the core LLM.

• Planning step: PlanAct LAA generates a plan for a given task before interacting with the environment. The plan is memorized and will be retrieved to construct prompts. It is worth noting that the plan prompt in this paper is in a fewshot format, which allows the LAA to generate plans based on previous successful plans.

PlanReAct LAA extends PlanAct LAA with an additional self-think flow, which also enables the CoT ability. The architecture of PlanReAct LAA is presented in Figure 2. Intuitively, since the planning flow is executed before the LAA observes the environment, the self-think flow alleviates the hallucination incurred from incorrect plans.
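A sketch of the planning step; `plan_examples` stands for the fewshot plan demonstrations mentioned above and is an assumed input:

```python
# Sketch of the PlanAct/PlanReAct planning flow: one fewshot plan prompt issued
# before any environment interaction; the resulting plan is kept in memory.
def make_plan(llm, instruction: str, plan_examples: list[str]) -> str:
    prompt = "\n\n".join(plan_examples + [f"Task: {instruction}", "Plan:"])
    return llm(prompt)

# Usage: memory.insert(0, f"Plan: {make_plan(llm, instruction, plan_examples)}")
# so that every subsequent action prompt is conditioned on the plan.
```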
Next, we introduce our multi-agent orchestrating architecture, i.e., BOLAA.

# 3.2 BOLAA: ORCHESTRATING MULTIPLE AGENTS

Figure 3: The BOLAA architecture, which employs a controller to orchestrate multiple LAAs.

Despite the success of the existing LLMs in completing various language understanding tasks, plenty of issues are still under-explored, such as the context length constraints and the in-context learning and generalization abilities. Hence, it is challenging to employ a solo LAA to complete all tasks, especially when tasks are of high complexity. Therefore, we propose a new agent architecture for orchestrating multiple LAAs, which is illustrated in Figure 3. BOLAA has two main modules, the labor agents pool and the controller. The labor agents pool manages multiple LAAs. Each LAA may only focus on generating one type of action. For example, in the web navigation environment, we could establish a click LAA and a search LAA. In this way, the former only generates the next button to click, while the latter only outputs the search query, which divides a complex task into feasible tasks. The controller is devised to selectively call LAAs from the agents pool. The controller has an agents selection layer for choosing the most relevant LAA to call. Then, the controller constructs the message for the selected LAA and builds the communication. After obtaining the response from the labor LAA, the controller parses it into an executable action and then interacts with the environment. Note that we can also design those labor LAAs to be think/plan agents. In this way, the self-think and plan workflows are also retained.
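The controller loop can be sketched as follows; the labor-agent objects, their message builders, and especially the trivial selection rule are placeholders standing in for the agents selection layer, not the released implementation:

```python
# Minimal sketch of one BOLAA controller step: select a labor LAA, build its message,
# parse its reply into an executable action, and step the environment.
def bolaa_step(labor_agents: dict, instruction: str, memory: list[str], env):
    name = "search" if not memory else "click"            # placeholder selection policy
    agent = labor_agents[name]                            # e.g. {"search": ..., "click": ...}
    message = agent.build_message(instruction, memory)    # controller-composed message
    action = agent.parse(agent.llm(message))
    observation, reward, done = env.step(action)
    memory += [f"Action: {action}", f"Observation: {observation}"]
    return reward, done
```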
# 4 EXPERIMENT

# 4.1 ENVIRONMENT BENCHMARK

We construct the evaluation benchmarks from two environments, i.e., the WebShop (Yao et al., preprint) and HotPotQA (Yang et al., 2018) with Wikipedia API usage (Yao et al., 2023a).

WebShop is a recently proposed online shopping website environment with 1.18M real-world products and human instructions. Each instruction is associated with one ground-truth product and contains attribute requirements, e.g. "I'm looking for a travel monopod camera tripod with quick release and easy to carry, and price lower than 130.00 dollars."
This instruction includes 3 attribute requirements, i.e. the "quick release", "camera tripod" and "easy carry" attributes. We define the complexity of an instruction as the number of attribute requirements; thus, the instruction example above is of complexity 3. We equally sample 150 instructions for each complexity level. Since we have fewer than 150 instructions for complexity larger than 6, we only include instructions with complexity in {1, 2, . . . , 6}, which sums up to 900 tasks for benchmark evaluation in the WebShop environment. In the WebShop environment, an agent operates either SEARCH[QUERY] or CLICK[ELEMENT] actions to interact with the environment, which evaluates the interactive decision-making ability of the LAA. The observation from WebShop is a simplified web browser, which includes the clickable buttons and the associated page content. The LAA interacts with the WebShop environment as a web navigation agent.

HotPotQA with Wikipedia API is another environment considered in this paper, which contains multi-hop question answering tasks that require reasoning over two or more Wikipedia passages. This simulation environment serves as a powerful tool for evaluating the multi-step planning and comprehension capabilities and information retrieval skills of AI models, ensuring they are proficient in sourcing reliable information from vast online resources. With its unique blend of real-world internet browsing scenarios and text analysis, HotPotQA is an invaluable asset for the advancement of augmented large language agent systems. In the HotPotQA environment, an agent has three types of actions, i.e., SEARCH[ENTITY], LOOKUP[STRING] and FINISH[ANSWER], to interact with the environment. The HotPotQA environment aims to evaluate the knowledge reasoning ability of the LAA. We randomly sample 100 questions each from the easy, medium and hard levels, which constitutes the final 300 benchmark questions for evaluating LAAs.

# 4.2 EVALUATION METRICS

We mainly use the reward score in each environment to evaluate the performances of LAAs. In the WebShop environment, the reward is defined as the attribute overlapping ratio between the bought item and the ground-truth item. In the HotPotQA environment, the reward is defined as the F1 score between the agent answer and the ground-truth answer. Additionally, we develop the Recall performance for the WebShop environment, which is defined as 1 if the ground-truth item is retrieved during one task session and 0 if not. The Recall is reported as the average recall score across all tasks in the WebShop environment.
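The three metrics follow directly from these definitions; the snippets below are illustrative re-implementations under that reading, not the exact benchmark scripts:

```python
# Illustrative metric computations for WebShop reward/recall and HotPotQA reward.
def webshop_reward(bought_attrs: set[str], gt_attrs: set[str]) -> float:
    # attribute overlapping ratio between the bought item and the ground-truth item
    return len(bought_attrs & gt_attrs) / len(gt_attrs)

def webshop_recall(retrieved_items: set[str], gt_item: str) -> int:
    # 1 if the ground-truth item was retrieved at any point in the session, else 0
    return int(gt_item in retrieved_items)

def hotpotqa_reward(prediction: str, answer: str) -> float:
    # token-level F1 between the agent answer and the ground-truth answer
    pred, gold = prediction.lower().split(), answer.lower().split()
    common = sum(min(pred.count(t), gold.count(t)) for t in set(pred))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(gold)
    return 2 * precision * recall / (precision + recall)
```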
# 4.3 LLM UTILIZATION

The core component of an LAA is the LLM backbone. We compare different LLMs with various choices of model size and context length. We report the results w.r.t. open LLM models such as fastchat-t5-3b, vicuna-7b/13b/33b (Zheng et al., 2023), Llama-2-7b/13b/70b⁶ (Touvron et al., 2023), MPT-7b/30b (Team, 2023), xgen-8k-7b, longchat-16k-7b/13b, and OpenAI API LLMs, including text-davinci-003, gpt-3.5-turbo and gpt-3.5-turbo-16k.

⁶ All Llama-2 models are the -chat-hf version.
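For reference, the compared backbones and the maximum context lengths used in Tables 1-3 can be kept in a simple configuration mapping (values as listed in the tables):

```python
# Backbone LLMs and maximum context lengths compared in this study (from Tables 1-3).
LLM_CONTEXT_LEN = {
    "fastchat-t5-3b": "2k", "vicuna-7b": "2k", "vicuna-13b": "2k", "vicuna-33b": "2k",
    "llama-2-7b": "4k", "llama-2-13b": "4k", "llama-2-70b": "4k",
    "mpt-7b-instruct": "8k", "mpt-30b-instruct": "8k", "xgen-8k-7b-instruct": "8k",
    "longchat-7b-16k": "16k", "longchat-13b-16k": "16k",
    "text-davinci-003": "4k", "gpt-3.5-turbo": "4k", "gpt-3.5-turbo-16k": "16k",
}
```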
Table 1: Average reward in the WebShop environment. Len denotes the maximum context length. Bold results denote the best results in one row, i.e. the best LAA architecture w.r.t. one LLM. Underlined results denote the best performance in one column, i.e. the best LLM regarding one LAA architecture.

| LLM | Len. | ZS | ZST | ReAct | PlanAct | PlanReAct | BOLAA |
|---|---|---|---|---|---|---|---|
| fastchat-t5-3b | 2k | 0.3971 | 0.2832 | 0.3098 | 0.3837 | 0.1507 | 0.5169 |
| vicuna-7b | 2k | 0.0012 | 0.0002 | 0.1033 | 0.0555 | 0.0674 | 0.0604 |
| vicuna-13b | 2k | 0.0340 | 0.0451 | 0.1509 | 0.3120 | 0.4127 | 0.5350 |
| vicuna-33b | 2k | 0.1356 | 0.2049 | 0.1887 | 0.3692 | 0.3125 | 0.5612 |
| llama-2-7b | 4k | 0.0042 | 0.0068 | 0.1248 | 0.3156 | 0.2761 | 0.4648 |
| llama-2-13b | 4k | 0.0662 | 0.0420 | 0.2568 | 0.4892 | 0.4091 | 0.3716 |
| llama-2-70b | 4k | 0.0122 | 0.0080 | 0.4426 | 0.2979 | 0.3770 | 0.5040 |
| mpt-7b-instruct | 8k | 0.0001 | 0.0001 | 0.0573 | 0.0656 | 0.1574 | 0.0632 |
| mpt-30b-instruct | 8k | 0.1664 | 0.1255 | 0.3119 | 0.3060 | 0.3198 | 0.4381 |
| xgen-8k-7b-instruct | 8k | 0.0001 | 0.0015 | 0.0685 | 0.1574 | 0.1004 | 0.3697 |
| longchat-7b-16k | 16k | 0.0165 | 0.0171 | 0.069 | 0.0917 | 0.1322 | 0.1964 |
| longchat-13b-16k | 16k | 0.0007 | 0.0007 | 0.2373 | 0.3978 | 0.4019 | 0.3205 |
| text-davinci-003 | 4k | 0.5292 | 0.5395 | 0.5474 | 0.4751 | 0.4912 | 0.6341 |
| gpt-3.5-turbo | 4k | 0.5061 | 0.5057 | 0.5383 | 0.4667 | 0.5483 | 0.6567 |
| gpt-3.5-turbo-16k | 16k | 0.5657 | 0.5642 | 0.4898 | 0.4565 | 0.5607 | 0.6541 |
# 4.4 DECISION-MAKING SIMULATION

In this section, we present and compare the decision-making performances of LAAs in the WebShop environment. The performance regarding the average reward is reported in Table 1. The agent prompts are constructed based on the maximum context length of the different LLM models. Regarding BOLAA, we devise one search LAA and one click LAA to generate the search query and the click elements, respectively. We have the following observations:

• BOLAA performs the best compared with the other LAA architectures, especially when built on the high-performing LLMs. BOLAA is able to actively select the appropriate LAA and yield qualitative communication, which stabilizes the action generation. We observe that BOLAA, when paired with a 3b fastchat-t5 LLM, performs comparably to other LAA architectures with more powerful LLMs. The superiority of BOLAA indicates that orchestrating multiple smaller-sized LAAs is a better choice if the computing resources are limited. This further exemplifies the potential for fine-tuning multiple smaller-sized specialised LAAs rather than fine-tuning one large generalized LAA.
• Pairing the LLM with the optimal LAA architecture is crucial. For example, Llama-2-13b performs best under the PlanAct LAA arch while Llama-2-70b performs best under the BOLAA arch. Also, Longchat-13b-16k performs best when using PlanAct and PlanReAct, which may indicate the extraordinary planning ability of the longchat-13b-16k model.

• Increasing the context length alone may not necessarily improve the LAA performances. For example, when comparing longchat-13b-16k with llama-2-13b models, the latter yields better performances though with less context length. By checking the running logs of those LAAs, we observe more occurrences of hallucinated generation when the LAA runs for more steps, which in the end degrades the benefits of longer context.
• A powerful LLM is able to generalize under the zeroshot LAA arch. The best performances of the OpenAI API-based models are actually under the ZS and ZST archs. This indicates the great potential of developing a generic LAA with a powerful LLM. Actually, this is currently what open-source projects are working towards, directly calling the OpenAI API and tuning the zeroshot agent prompt instead. Our benchmark results quantitatively justify that using only a ZS LAA can already achieve comparable or even better performances than LAA archs with additional Plan or Self-think flows. However, for other less powerful LLMs, fewshot prompts are necessary for LAAs.
• Plan flow generally improves the performances when the agent is built on open-source LLMs. By comparing the performances of ReAct, PlanAct and PlanReAct, we observe a performance gain on most LLM cases when using the plan flow. However, planning and thinking require the LLM to be able to reason in steps, which may be challenging for small-size LLMs. For example, fastchat-t5-3b performs above average under the ZS LAA arch, but the performance degrades by a large margin under the PlanReAct arch.

Table 2: Average recall in the WebShop environment. Len denotes the maximum context length. Bold results denote the best results in one row, i.e. the best LAA architecture w.r.t. one LLM. Underlined results denote the best performance in one column, i.e. the best LLM regarding one LAA architecture.

| LLM | Len. | ZS | ZST | ReAct | PlanAct | PlanReAct | BOLAA |
|---|---|---|---|---|---|---|---|
| fastchat-t5-3b | 2k | 0.3533 | 0.3122 | 0.3800 | 0.3700 | 0.3722 | 0.3867 |
| vicuna-7b | 2k | 0.0833 | 0.0500 | 0.3600 | 0.3233 | 0.3278 | 0.3522 |
| vicuna-13b | 2k | 0.0867 | 0.0644 | 0.3622 | 0.3444 | 0.2367 | 0.3700 |
| vicuna-33b | 2k | 0.3600 | 0.3411 | 0.3822 | 0.3733 | 0.3567 | 0.3956 |
| llama-2-7b | 4k | 0.0678 | 0.0311 | 0.3744 | 0.3400 | 0.3578 | 0.3856 |
| llama-2-13b | 4k | 0.2856 | 0.2211 | 0.3844 | 0.3278 | 0.3500 | 0.4078 |
| llama-2-70b | 4k | 0.3344 | 0.3244 | 0.3789 | 0.3400 | 0.3600 | 0.4011 |
| mpt-7b-instruct | 8k | 0.0144 | 0.0322 | 0.3644 | 0.3200 | 0.3400 | 0.3600 |
| mpt-30b-instruct | 8k | 0.2973 | 0.3372 | 0.3333 | 0.3575 | 0.3412 | 0.3900 |
| xgen-8k-7b-instruct | 8k | 0.0667 | 0.1400 | 0.3711 | 0.3400 | 0.3278 | 0.3800 |
| longchat-7b-16k | 16k | 0.1344 | 0.1856 | 0.3644 | 0.3622 | 0.3622 | 0.3811 |
| longchat-13b-16k | 16k | 0.0756 | 0.0867 | 0.3678 | 0.3467 | 0.3471 | 0.3789 |
| text-davinci-003 | 4k | 0.3800 | 0.3856 | 0.3767 | 0.3711 | 0.3889 | 0.3956 |
| gpt-3.5-turbo | 4k | 0.3889 | 0.3756 | 0.3933 | 0.3789 | 0.3867 | 0.3929 |
| gpt-3.5-turbo-16k-0613 | 16k | 0.3856 | 0.3833 | 0.4011 | 0.3756 | 0.3811 | 0.3933 |
We also report the intermediate Recall performances of all LAAs, which are illustrated in Table 2. Recall is mainly related to the search action. High recall performance indicates that the LAA is capable of generating a precise search query. High recalls usually lead to better rewards, but the two are not tightly related.
For example, Llama-2-70b has a recall performance of nearly 0.3344 under the ZS LAA, which is comparable to the best LAA. However, the reward performance of the ZS LAA with Llama-2-70b in Table 1 is only 0.0122. The reason is that generating the search query requires a different LLM ability from generating the correct click action, where the latter is more challenging. Another observation is that our proposed BOLAA generally performs the best on all LLMs, which indicates that separating the search agent from the click agent improves the accuracy of the search action, leading to a higher recall value.
LAA performance w.r.t. Complexity. After the overall performances of those LAAs and LLMs are compared, we conduct a more detailed investigation of the performance w.r.t. the task complexity. Due to the space limitation, we only report the performance of text-davinci-003 and llama-2-70b. The reward performance is illustrated in Figure 4. The BOLAA model consistently performs better on all complexity levels. We also observe degraded performances when the task complexity is increased, which follows the intuition. Surprisingly, we find that further increasing the complexity of tasks beyond 4 does not further degrade the performances. The reason is that the recall performance increases when the task is of higher complexity, as demonstrated in Figure 5. This is due to the fact that a high-complexity task instruction provides more additional context information for the LAA. As such, the search action can be more specific and accurate under high complexity levels.

# 4.5 KNOWLEDGE REASONING SIMULATION

We benchmark on the HotPotQA environment to evaluate the multi-step reasoning ability of LAAs. Since the available search, lookup and finish operations are all related to knowledge reasoning in this environment and are hard to separate, we leave the BOLAA arch for future work and only compare the performance of the other agent archs. The results are in Table 3. In general, the ReAct agent arch achieves the best performances, which can be interpreted in two ways. Firstly, the fewshot prompt is necessary to enable the action generation and reasoning ability of the LAA, especially when experimenting with those small-size language models.
Figure 4: The reward w.r.t. task complexity in WebShop for (a) text-davinci-003 and (b) Llama-2-70b. Each bar represents one LAA.

Figure 5: The recall w.r.t. task complexity in WebShop for (a) text-davinci-003 and (b) Llama-2-70b. Each bar represents one LAA.

Secondly, comparing ReAct, PlanAct, and PlanReAct, we conclude that the planning flow of the LAA hinders performance in the knowledge reasoning environment and tasks. The reason is that knowledge reasoning tasks require contextualized information to conduct reasoning, whereas the planning flow is executed ahead of interactions. Thus, those generated plans tend to lead to more hallucination of the LAA. Thirdly, regarding this knowledge reasoning task, model size is much more important than the context length. Larger-sized models have better reasoning abilities, thus performing better. Additionally, the superior reasoning ability of the OpenAI gpt-3.5 models is again verified. We also observe the best performance of Llama-2-70b among all open-source LLMs, which suggests that potential future fine-tuning can be applied to Llama-2 models.
LAA performance w.r.t. Complexity. Since we have easy, medium, and hard level tasks, we compare the performance of Llama-2-70b and text-davinci-003 regarding different levels of complexity, as illustrated in Figure 6. We observe degrading performance as the complexity of tasks increases. In HotPotQA tasks, the hardness is defined by the number of question-answering hops. Therefore, hard questions require more context understanding and reasoning ability of the LAA. Though the OpenAI text-davinci-003 model consistently outperforms Llama-2-70b on all levels of complexity, their difference is of smaller margin on hard questions. Since hard questions require more reasoning effort, we can conclude that Llama-2-70b possesses comparable reasoning ability with text-davinci-003.
Table 3: Average reward in the HotPotQA environment. Len denotes the maximum context length. Bold results denote the best results in one row, i.e. the best LAA architecture w.r.t. one LLM. Underlined results denote the best performance in one column, i.e. the best LLM regarding one LAA architecture.

| LLM | Len. | ZS | ZST | ReAct | PlanAct | PlanReAct |
|---|---|---|---|---|---|---|
| fastchat-t5-3b | 2k | 0.0252 | 0.0067 | 0.0692 | 0.1155 | 0.0834 |
| vicuna-7b | 2k | 0.1339 | 0.0797 | 0.0318 | 0.0868 | 0.0956 |
| vicuna-13b | 2k | 0.1541 | 0.0910 | 0.2637 | 0.1754 | 0.2075 |
| vicuna-33b | 2k | 0.2180 | 0.2223 | 0.2602 | 0.1333 | 0.2016 |
| llama-2-7b | 4k | 0.0395 | 0.0207 | 0.2624 | 0.1780 | 0.1417 |
| llama-2-13b | 4k | 0.1731 | 0.2313 | 0.2521 | 0.2192 | 0.2177 |
| llama-2-70b | 4k | 0.2809 | 0.3207 | 0.3558 | 0.1424 | 0.1797 |
| mpt-7b-instruct | 8k | 0.0982 | 0.0483 | 0.1707 | 0.1147 | 0.1195 |
| mpt-30b-instruct | 8k | 0.1562 | 0.2141 | 0.3261 | 0.2224 | 0.2315 |
| xgen-8k-7b-instruct | 8k | 0.1502 | 0.1244 | 0.1937 | 0.1116 | 0.1096 |
| longchat-7b-16k | 16k | 0.0791 | 0.0672 | 0.2161 | 0.1296 | 0.0971 |
| longchat-13b-16k | 16k | 0.1083 | 0.0562 | 0.2387 | 0.1623 | 0.1349 |
| text-davinci-003 | 4k | 0.3430 | 0.3304 | 0.4503 | 0.3577 | 0.4101 |
| gpt-3.5-turbo | 4k | 0.3340 | 0.3254 | 0.3226 | 0.2762 | 0.3192 |
| gpt-3.5-turbo-16k-0613 | 16k | 0.3027 | 0.2264 | 0.1859 | 0.2113 | 0.2251 |
Figure 6: The reward w.r.t. complexity level in HotPotQA for (a) text-davinci-003 and (b) Llama-2-70b. Each bar represents one LAA.

# 5 CONCLUSION AND FUTURE WORK

In this paper, we systematically investigate the performances of various LAA architectures paired with different LLM backbones. We also provide one novel orchestrating method for multiple agents, i.e. BOLAA. The benchmarking results provide experimental justification for the LAA investigation and verify the potential benefits of the BOLAA architecture. During the investigation, we also identify the challenge of designing the BOLAA architecture for environments with compounding actions. In the future, we will explore whether we can harness LLMs in the controller such that the selection of and communication with labor agents is also fully autonomous. We will continue developing more LAA architectures and include more LLMs and environments for evaluations.
2308.05960#31 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | # REFERENCES Harrison Chase. Langchain. https://github.com/hwchase17/langchain, 2023. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. | 2308.05960#30 | 2308.05960#32 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#32 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023. Significant Gravitas. Auto-GPT, 2023. Autogpt. https://github.com/Significant-Gravitas/ Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. A real-world webagent with planning, long context understanding, and pro- gram synthesis. arXiv preprint arXiv:2307.12856, 2023. Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023. Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, and Tomas Pfister. Tool documentation enables zero-shot tool-usage with large language models. arXiv preprint arXiv:2308.00675, 2023. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118â 9147. PMLR, 2022. Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023. Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. arXiv preprint arXiv:1802.08802, 2018. | 2308.05960#31 | 2308.05960#33 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#33 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. Agentbench: Evaluating llms as agents, 2023. Aman Madaan, Alexander Shypula, Uri Alon, Milad Hashemi, Parthasarathy Ranganathan, Yiming Yang, Graham Neubig, and Amir Yazdanbakhsh. Learning performance-improving code edits. arXiv preprint arXiv:2302.07867, 2023a. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023b. Rithesh Murthy, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Le Xue, Weiran Yao, Yihao Feng, Zeyuan Chen, Akash Gokul, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, and Silvio Savarese. Rex: Rapid exploration and exploitation for ai agents, 2023. | 2308.05960#32 | 2308.05960#34 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#34 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | # Yohei Nakajima. Babyagi. https://github.com/yoheinakajima/babyagi, 2023. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christo- pher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. OpenAI. Gpt-4 technical report. ArXiv, 2023. 11 PREPRINT | 2308.05960#33 | 2308.05960#35 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#35 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Joon Sung Park, Joseph C Oâ Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023. | 2308.05960#34 | 2308.05960#36 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#36 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi Faltings. Refiner: Reasoning feedback on intermediate representations. arXiv preprint arXiv:2304.01904, 2023. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. | 2308.05960#35 | 2308.05960#37 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#37 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. arXiv preprint Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv:2303.17580, 2023. Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In International Conference on Machine Learning, pp. 3135â 3144. PMLR, 2017. Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. | 2308.05960#36 | 2308.05960#38 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#38 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. MosaicML NLP Team. Introducing mpt-7b: A new standard for open-source, commercially usable llms, 2023. URL www.mosaicml.com/blog/mpt-7b. Accessed: 2023-05-05. | 2308.05960#37 | 2308.05960#39 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#39 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. | 2308.05960#38 | 2308.05960#40 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#40 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Llama 2: Open foundation and fine-tuned chat models, 2023. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022. Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. | 2308.05960#39 | 2308.05960#41 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#41 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Rewoo: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323, 2023. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question In Conference on Empirical Methods in Natural Language Processing (EMNLP), answering. 2018. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. | 2308.05960#40 | 2308.05960#42 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#42 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023a. 12 PREPRINT Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. In ArXiv, preprint. Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caim- ing Xiong, and Silvio Savarese. | 2308.05960#41 | 2308.05960#43 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#43 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Retroformer: Retrospective large language agents with policy gradient optimization, 2023b. Jianguo Zhang, Kun Qian, Zhiwei Liu, Shelby Heinecke, Rui Meng, Ye Liu, Zhou Yu, Huan Wang, Silvio Savarese, and Caiming Xiong. Dialogstudio: Towards richest and most diverse unified dataset collection for conversational ai, 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. | 2308.05960#42 | 2308.05960#44 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#44 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Judging llm-as-a-judge with mt-bench and chatbot arena, 2023. Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. Webarena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854, 2023. URL https://webarena.dev. | 2308.05960#43 | 2308.05960#45 | 2308.05960 | [
"2204.02311"
]
|
2308.05960#45 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | 13 | 2308.05960#44 | 2308.05960 | [
"2204.02311"
]
|
|
2308.06391#0 | Dynamic Planning with a LLM | 3 2 0 2 g u A 1 1 ] L C . s c [ 1 v 1 9 3 6 0 . 8 0 3 2 : v i X r a # Dynamic Planning with a LLM # Frank Keller School of Informatics University of Edinburgh, UK [email protected], {keller, alex}@inf.ed.ac.uk # Abstract While Large Language Models (LLMs) can solve many NLP tasks in zero-shot settings, ap- plications involving embodied agents remain problematic. In particular, complex plans that require multi-step reasoning become difficult and too costly as the context window grows. Planning requires understanding the likely ef- fects of oneâ s actions and identifying whether the current environment satisfies the goal state. While symbolic planners find optimal solu- tions quickly, they require a complete and ac- curate representation of the planning problem, severely limiting their use in practical scenarios. In contrast, modern LLMs cope with noisy ob- servations and high levels of uncertainty when reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a neuro- symbolic framework where an LLM works hand-in-hand with a traditional planner to solve an embodied task. Given action-descriptions, LLM-DP solves Alfworld faster and more effi- ciently than a naive LLM ReAct baseline. | 2308.06391#1 | 2308.06391 | [
"2303.11366"
]
|
|
2308.06391#1 | Dynamic Planning with a LLM | 1 # 1 Introduction Consistency (Wang et al., 2023b) augment the con- text with reasoning traces. Other, agent-based ap- proaches, such as ReAct (Yao et al., 2023), inte- grate feedback from the environment iteratively, giving the agent the ability to take â thinkingâ steps or to augment its context with a reasoning trace. However, these approaches frequently involve high computational costs due to the iterated invocations of LLMs and still face challenges dealing with the limits of the context window and recovering from hallucinations, which can compromise the quality of the plans. Conversely, traditional symbolic planners, such as the Fast-Forward planner (Hoffmann and Nebel, 2001) or the BFS(f) planner(Lipovetzky et al., 2014), excel at finding optimal plans efficiently. But symbolic planners require problem and domain descriptions as prerequisites (McDermott, 2000), which hampers their applicability in real-world sce- narios where it may be infeasible to achieve these high informational demands. For instance, know- ing a complete and accurate description of the goal may not be possible before exploring the environ- ment through actions. Large Language Models (LLMs), like GPT-4 (Ope- nAI, 2023), have proven remarkably effective at various natural language processing tasks, partic- ularly in zero-shot or few-shot settings (Brown et al., 2020). However, employing LLMs in em- bodied agents, which interact with dynamic envi- ronments, presents substantial challenges. LLMs tend to generate incorrect or spurious information, a phenomenon known as hallucination, and their performance is brittle to the phrasing of prompts (Ji et al., 2022). Moreover, LLMs are ill-equipped for naive long-term planning since managing an extensive context over multiple steps is complex and resource-consuming (Silver et al., 2022; Liu et al., 2023). Various approaches have aimed to mitigate some of these limitations. For instance, methods like Chain-of-Thought (Wei et al., 2022) and Self- Previous work by (Liu et al., 2023) has shown that LLMs can generate valid problem files in the Planning Domain Definition Language (PDDL ) for many simple examples. | 2308.06391#0 | 2308.06391#2 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#2 | Dynamic Planning with a LLM | Yet, the problem of incom- plete information remains: agents often need to interact with the world to discover their surround- ings before optimal planning can be applied. Some versions of PDDL have been proposed in the past to deal with probabilities or Task and Motion Plan- ning, such as PPDDL and PDDLStream (Younes and Littman, 2004; Garrett et al., 2018), but these still assume a human designer encoding the agentâ s understanding of the domain and the planning prob- lem, rather than the agent learning from interac- tions. Therefore, where modern LLMs need mini- mal information to figure out a task, e.g. through Few-shot or In-Context Learning (Honovich et al., f"(yaction go-to © ++) i(:action pick-upâ PDDL Domain : (actin. heat PDDL Problem(s) : ee) 4 (goal (exists (?t - potato ?x - countertop) 1 cccececceseestoceseeeeeneneese 8 (and (inReceptacle ?t ?r) © | Heata potato re ° Cee Noe f Plan | @| andputitona â â > LLM â o Te = No Plan found % ~ | countertop a enerato bu (init: ... u ) <0 to =, et (inReceptacle potato-1 fridge-1)) Observation} * ' Action ___ Selector Figure 1: | 2308.06391#1 | 2308.06391#3 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#3 | Dynamic Planning with a LLM | LLM Dynamic Planner (LLM-DP). The LLM grounds observations and processes natural language instructions into PDDL to use with a symbolic planner. This model can solve plans for unobserved or previously unknown objects because the LLM generates plausible predicates for relevant objects through semantic and pragmatic inference. Through sampling possible predicates, multiple plans can be found, and an Action Selector decides whether to act, review its understanding of the problem, or ask clarification questions. 2022; Chen et al., 2022; Min et al., 2022), tradi- tional planners need maximal information. In this work, we introduce the LLM Dynamic Planner (LLM-DP), a neuro-symbolic frame- work that integrates an LLM with a symbolic planner to solve embodied tasks.1 LLM-DP capi- talises on the LLMâ s ability to understand actions and their impact on their environment and com- bines it with the plannerâ s efficiency in finding so- lutions. Using domain knowledge, LLM-DP solves the Alfworld test set faster and more efficiently than a LLM-only (ReAct) approach. The remainder of this paper explores the architecture of LLM-DP, dis- cusses how to combine the strengths of LLMs and symbolic planning and presents potential research avenues for future work in LLM-driven agents. # 2 Related Work Symbolic Planners Symbolic planners have been a cornerstone in automated planning and artificial intelligence for decades (Fikes and Nilsson, 1971). Based on formal logic, they operate over symbolic representations of the world to find a sequence of actions that transition from an initial state to a goal state. Since the introduction of PDDL (McDermott, 2000), the AI planning community has developed an array of efficient planning algorithms. For exam- ple, the Fast-Forward planner (FF) (Hoffmann and Nebel, 2001) employs heuristics derived from a relaxed version of the planning problem. Similarly, the BFS(f) planner (Lipovetzky et al., 2014) com- bines breadth-first search and specialised heuristics. These planners find high-quality or optimal solu- tions quickly in well-defined domains. However, their up-front requirement for comprehensive prob- lem and domain descriptions limits their applicabil- ity in complex real-world settings where complete information may not be available. | 2308.06391#2 | 2308.06391#4 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#4 | Dynamic Planning with a LLM | LLMs in Planning and Reasoning In contrast to symbolic planners, LLMs have shown promise in adapting to noisy planning and reasoning tasks. Some general approaches such as Chain-of-Thought (Wei et al., 2022), Self-Consistency (Wang et al., 2023b), and Reasoning via Planning (Hao et al., 2023) augment the context with a reasoning trace that the LLM generates to improve its final prediction. Alternatively, giving access to tools/APIs (Schick et al., 2023; Patil et al., 2023), outside knowledge or databases (Peng et al., 2023; Hu et al., 2023), code (Surís et al., 2023), and even symbolic reasoners (Yang et al., 2023) can enrich an LLM's context and ability to reason. The LLM can trigger these external sources of information or logic (through fine-tuning or prompting) to obtain additional context and improve its downstream performance. Embodied Agents with LLMs In a parallel direction, recent works such as ReAct (Yao et al., 2023), Reflexion (Shinn et al., 2023), AutoGPT (Significant-Gravitas, 2023), and Voyager (Wang et al., 2023a) take an agent-based approach and augment the reasoning process through a closed "while" loop that feeds environment observations back to the LLM. ReAct (Yao et al., 2023) allows the LLM agent to either take an action or a "thinking" step. This allows the LLM to augment its context with its reasoning, which can be seen as agent-driven Chain-of-Thought prompting. (Our code is available at github.com/itl-ed/llm-dp.) | 2308.06391#3 | 2308.06391#5 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#5 | Dynamic Planning with a LLM | think- ingâ step. This allows the LLM to augment its context with its reasoning, which can be seen as 1Our code is available at github.com/itl-ed/llm-dp agent-driven Chain-of-Thought prompting. Voy- ager (Wang et al., 2023a) incrementally builds an agentâ s capabilities from its interactions with the environment and an accessible memory compo- nent (skill library). While many of these works show promising results in building general exe- cutable agents in embodied environments (Wang et al., 2023a), they still require many expensive calls to the LLMs, are limited by the LLMâ s con- text window, and do not guarantee optimal plans. | 2308.06391#4 | 2308.06391#6 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#6 | Dynamic Planning with a LLM | # 3 Alfworld Alfworld (Shridhar et al., 2020) is a text-only home environment where an agent is tasked with seven possible tasks, such as interacting with one or more objects and placing them in a specific receptacle. At the start of each episode, the goal is given in natural language, and the initial observation does not include the location of any objects. Therefore, an agent must navigate the environment to search for the relevant objects and perform the correct actions. The possible locations of the environment are known, and the agent can navigate to any receptacle by using a "go to" action. However, since none of the objects' locations are initially observed, the agent must be able to plan around uncertainty, estimate where objects are likely to be observed, and adjust accordingly. | 2308.06391#5 | 2308.06391#7 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#7 | Dynamic Planning with a LLM | # 4 LLM-DP To tackle an embodied environment like Alfworld, we introduce the Large Language Model Dynamic Planner (LLM-DP), which operates as a closed-loop agent. LLM-DP uses a combination of language understanding and symbolic reasoning to plan and solve tasks in the simulated environment. The model tracks a World State W and beliefs B about predicates in the environment, uses an LLM to translate the task description into an executable goal state, and samples its beliefs to generate plausible world states. We describe the working of the LLM-DP agent as pseudo-code in Algorithm 1. # 4.1 Assumptions We make several simplifying assumptions when applying the LLM-DP framework to Alfworld: 1. Known action-descriptions and predicates: our input to the planner and the LLM requires the PDDL domain file, which describes what actions can be taken, their pre- and post-conditions, and what predicates exist. | 2308.06391#6 | 2308.06391#8 | 2308.06391 | [
"2303.11366"
]
|
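The world-state and belief bookkeeping described above can be kept very simple. The following is an illustrative Python sketch (not the released implementation; the class and field names are assumptions) of one way to represent W as a set of ground predicates and B as per-object candidate predicates:

```python
from dataclasses import dataclass, field

# A ground predicate, e.g. ("inReceptacle", "potato-1", "fridge-1").
Predicate = tuple[str, ...]

@dataclass
class AgentState:
    # W: facts observed directly or inferred from action effects.
    world: set[Predicate] = field(default_factory=set)
    # B: for each object, the mutually exclusive candidate predicates
    # (e.g. one entry per receptacle the object could be in).
    beliefs: dict[str, set[Predicate]] = field(default_factory=dict)

    def add_observation(self, predicate: Predicate) -> None:
        """Promote an observed fact into W and drop the now-resolved belief."""
        self.world.add(predicate)
        self.beliefs.pop(predicate[1], None)  # predicate[1] is the object name
```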
2308.06391#8 | Dynamic Planning with a LLM | Algorithm 1 LLM-DP Pseudo-code
Require: LLM, PG, AS, Domain, task, obs_0
  goal ← LLM(Domain, task)
  W, B ← observe(goal, obs_0)
  while goal not reached do
    plans ← ∅
    for i in 1..N do
      w_belief ← LLM(B, W)
      plans ← plans ∪ PG(w_belief ∪ W)
    end for
    action ← AS(plans)
    obs ← Env(action)
    W, B ← observe(action, obs)
  end while
2. Perfect observations: the Alfworld environment provides a perfect textual description of the current location. This observation also contains the intrinsic attributes of observed objects and receptacles, such as whether or not a given receptacle can be opened. 3. Causal Environment: changes in the environment are entirely caused by the agent. 4. Valid actions always succeed. # 4.2 Generating the Goal State LLM-DP uses an LLM to generate a PDDL goal, given the natural language instruction (task) and the valid predicates defined by the PDDL domain file. Figure 1 shows an example task converted to a valid PDDL goal. For each episode, we use a set of three in-context examples that are fixed for the entire evaluation duration. We use the OpenAI gpt-3.5-turbo-0613 LLM model with a temperature of 0 in all our LLM-DP experiments. | 2308.06391#7 | 2308.06391#9 | 2308.06391 | [
"2303.11366"
]
|
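Read as Python, the loop in Algorithm 1 amounts to something like the sketch below. This is a paraphrase of the pseudo-code rather than the authors' code: `llm`, `planner`, `selector`, and `env` are placeholder objects standing in for the LLM calls, the PDDL Plan Generator, the Action Selector, and the Alfworld environment.

```python
def llm_dp_loop(llm, planner, selector, env, domain, task, obs0, n=3):
    """Closed-loop LLM-DP agent, mirroring Algorithm 1 (names are illustrative)."""
    goal = llm.generate_goal(domain, task)          # natural-language task -> PDDL :goal
    world, beliefs = env.parse_observation(obs0)    # W and B from the first observation
    while not env.goal_reached(world, goal):
        plans = []
        for _ in range(n):                          # n sampled world states per step
            sampled = llm.sample_beliefs(beliefs, world)
            plans.extend(planner.solve(domain, world | sampled, goal))
        action = selector.choose(plans)             # e.g. first step of the shortest plan
        obs = env.step(action)
        world, beliefs = env.update(world, beliefs, action, obs)
    return world
```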
2308.06391#9 | Dynamic Planning with a LLM | # 4.3 Sampling Beliefs We parse the initial scene description into a structured representation of the environment W and a set of beliefs B. The internal representation of the world W contains all known information; for instance, all receptacles (possible locations) in the scene from the initial observation and their intrinsic attributes are known (i.e. a fridge holds the isFridge predicate). The set of beliefs B, in contrast, is a set of possible valid predicates that can be true or false and which the model does not have enough information to disambiguate. In Alfworld, the objects' locations are unknown; therefore, the set of possible predicates for each object includes all possible locations. | 2308.06391#8 | 2308.06391#10 | 2308.06391 | [
"2303.11366"
]
|
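A minimal sketch of how such a belief set could be completed into concrete world states, assuming `beliefs` maps each object to its candidate location predicates (helper names are illustrative; passing `ask_llm=None` reproduces the random-sampling baseline discussed later):

```python
import random

def sample_world_states(beliefs, world, n, ask_llm=None):
    """Return n plausible completions of the unknown predicates."""
    states = []
    for _ in range(n):
        sampled = set()
        for obj, candidates in beliefs.items():
            options = sorted(candidates)
            # LLM-conditioned choice if available, otherwise uniform random.
            choice = ask_llm(obj, options, world) if ask_llm else random.choice(options)
            sampled.add(choice)
        states.append(world | sampled)   # likely world state = W ∪ w_belief
    return states
```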
2308.06391#10 | Dynamic Planning with a LLM | Average Accuracy (%)
Model                     clean  cool  examine  heat  put   puttwo  overall (↑)  LLM Tokens (↓)
LLM-DP                    0.94   1.00  1.00     0.87  1.00  0.94    0.96         633k
LLM-DP-random             0.94   1.00  1.00     0.87  0.96  1.00    0.96         67k
ReAct (Yao et al., 2023)  0.61   0.81  0.89     0.30  0.79  0.47    0.64         -*
ReAct (ours)              0.35   0.90  0.33     0.65  0.71  0.29    0.54         9.16M
(a) The average accuracy and number of LLM Tokens processed (context + generation) for each model. *Not reported.
Average Episode Length
Model           clean  cool   examine  heat   put    puttwo  overall (↓)
LLM-DP          12.00  13.67  12.06    12.30  12.75  17.59   13.16
LLM-DP-random   15.06  17.14  10.56    14.04  14.62  18.94   15.02
ReAct (ours)    25.10  9.86   21.67    14.70  15.33  24.94   18.69
(b) The average episode length for each model, where the length of an episode denotes how many actions the agent has taken or attempted to take to complete a task. We do not count the "thinking" action of ReAct as an action in this metric.
Table 1: Summary of model performance on the Alfworld test set. LLM-DP and LLM-DP-random differ in the sampling strategy of the belief. LLM-DP uses an LLM to generate n = 3 plausible world states, while LLM-DP-random randomly samples n = 3 plausible world states. | 2308.06391#9 | 2308.06391#11 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#11 | Dynamic Planning with a LLM | LLM-DP uses stored observations W, beliefs B, and an LLM to construct different planning problem files in PDDL. A PDDL problem file includes the objects observed (:objects), a representation of the current state (:init) of the world and the object attributes, and the goal to be achieved (:goal). The goal is derived from the LLM (Section 4.2), while the objects and their attributes are obtained from W (observations) and the beliefs B has about the objects. # 4.4 Plan Generator Upon constructing the different PDDL problems, the agent uses a Plan Generator (PG) to solve each problem and obtain a plan. We use the BFS(f) solver (Lipovetzky et al., 2014) implemented as an executable by LAPKT (Ramirez et al., 2015). A generated plan is a sequence of actions, where each action is represented in a symbolic form, which, if executed, would lead to the goal state from the initial state. Since B includes possible predicates which are unknown, we sample from B using an LLM to obtain w_belief. For instance, our belief could be that (inReceptacle tomato ?x) where ?x can be countertop, cabinet, fridge, etc. Since we want to condition the sampling of where the tomato can appear, we pass the known world state W along with the predicate (in this case inReceptacle) and its options to the LLM. This sampling leverages the LLM to complete a world state and is extendable to any unknown predicate from which a set of beliefs can be deduced. We also compare LLM sampling with random sampling (llmdp-random). # 4.5 Action Selector The Action Selector (AS) module decides the agent's immediate next action. It takes the planner's output, a set of plans, and selects an action from them. In our Alfworld experiments, the Action Selector simply selects the shortest plan returned. If no valid plans are returned, either all sampled states were already satisfying goal states, there is a mistake with the constructed domain/problem files, or the planner has failed to find a path to the goal. In the first case, we re-sample random world states and re-run the planners once. We describe our likely world state as the union between a sampled set of beliefs and the known world state, w_belief ∪ W. Then, sampling i = 1, ..., N different sets of beliefs during the planning loop, we obtain N likely world states. Finally, we convert each likely world state to lists of predicates to interface with the PDDL planner. We also propose exploring different strategies when valid plans cannot be found. For instance, similarly to self-reflection (Shinn et al., 2023), the Action Selector could prompt an update in the agent's belief about the world state if none of the generated problem descriptions are solvable. The Action Selector could also interact with a human teacher or oracle to adjust its understanding of the environment (problem) or its logic (domain). | 2308.06391#10 | 2308.06391#12 | 2308.06391 | [
"2303.11366"
]
|
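For illustration, a hedged sketch of the two pieces described above: rendering one sampled world state as a PDDL problem string and picking the next action from the returned plans. The exact problem layout and the BFS(f)/LAPKT invocation are abstracted away; the function names and layout are assumptions.

```python
def to_pddl_problem(objects, init_predicates, goal):
    """Render a sampled world state as a PDDL problem (illustrative layout)."""
    objs = "\n    ".join(f"{name} - {typ}" for name, typ in objects)
    init = "\n    ".join("(" + " ".join(p) + ")" for p in sorted(init_predicates))
    return (
        "(define (problem alfworld-task) (:domain alfred)\n"
        f"  (:objects\n    {objs})\n"
        f"  (:init\n    {init})\n"
        f"  {goal})"
    )

def select_action(plans):
    """Take the first step of the shortest valid plan; None triggers re-sampling."""
    valid = [p for p in plans if p]
    return min(valid, key=len)[0] if valid else None
```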
2308.06391#12 | Dynamic Planning with a LLM | # 4.6 Observation Processing LLM-DP uses the result of each action to update its internal state representation. It uses the symbolic effects of the action to infer changes in the state of the objects and receptacles. Then it integrates the information from the new observation, which might reveal additional details not directly inferred from the action itself. For instance, opening an unseen drawer might reveal new objects inside. Observing also updates the beliefs: if an object is observed at a location, it cannot be elsewhere, and if an object is not observed at a location, it cannot be there. Observations incorporate beliefs into W. If the agent detects new information from the scene, such as discovering new objects, it triggers a re-planning process. The agent then generates a new set of possible PDDL problems using the updated state representation and corresponding plans using the Plan Generator. This approach is similar to some Task and Motion Planning (TAMP) methods (Garrett et al., 2018; Chen et al., 2023), enabling the agent to adapt to environmental changes and unexpected outcomes of actions. # 5 Results We contrast the LLM-DP approach with ReAct (an LLM-only baseline) from the original implementation by Yao et al. (2023). Since we use a different backbone LLM (gpt-3.5-turbo rather than text-davinci-002) than the ReAct baseline for cost purposes, we also reproduce their results using gpt-3.5-turbo and adapt the ReAct prompts to a chat format. As shown in Table 1, LLM-DP solves Alfworld almost perfectly (96%) compared to our baseline reproduction of ReAct (53%). LLM-DP can translate the task description into an executable PDDL goal 97% of the time, but sampling reduces the accuracy further when it fails to select a valid set of possible world states, for instance, by sampling states where the goal is already satisfied. | 2308.06391#11 | 2308.06391#13 | 2308.06391 | [
"2303.11366"
]
|
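The belief update described here can be expressed directly over the per-object candidate sets; a sketch under the same representation assumptions as above (an object seen at a receptacle is pinned there, an object not seen there has that receptacle ruled out):

```python
def process_observation(world, beliefs, location, observed_objects):
    """Update W and B after looking inside one receptacle."""
    for obj in observed_objects:
        world.add(("inReceptacle", obj, location))           # seen here: cannot be elsewhere
        beliefs.pop(obj, None)
    for obj, candidates in beliefs.items():
        candidates.discard(("inReceptacle", obj, location))  # not seen here: rule this place out
    return world, beliefs
```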
2308.06391#13 | Dynamic Planning with a LLM | We note that the ReAct baseline makes different assumptions about the problem; while it does not require a domain file containing the action-descriptions and object predicates, it uses two separate human-annotated episodes per example to bootstrap its in-context logic. ReAct also switches out which examples to use in-context based on the type of task, such that two examples of the same type of task being solved are always shown. We also find that our reproduction of ReAct is worse than the original and attribute this to the gpt-3.5-turbo model being more conversational than text-davinci-002, and thus less likely to output valid actions as it favours fluency over following the templated action language. We also measure the length of each successful episode and find that LLM-DP reaches the goal state faster on average (13.16 actions) versus ReAct (18.69 actions) and a random search strategy (15.02 actions). The Average Episode Length measures the number of actions taken in the environment and hence how efficient the agent is. # 6 Conclusion The LLM-DP agent effectively integrates language understanding, symbolic planning, and state tracking in a dynamic environment. It uses the language model to understand tasks and scenes expressed in natural language, constructs and solves planning problems to decide on a course of action, and keeps track of the world state to adapt to changes and make informed decisions. This workflow enables the agent to perform complex tasks in the Alfworld environment, making it a promising approach for embodied tasks that involve language understanding, reasoning, and decision-making. LLM-DP offers a cost and efficiency trade-off between a wholly symbolic solution and an LLM-only model. The LLM's semantic knowledge of the world is leveraged to translate the problem into PDDL while guiding the search process through belief instantiation. We find that not only is LLM-DP cheaper, on a per-token comparison, but it is also faster and more successful at long-term planning in an embodied environment. LLM-DP validates the need for LLM research to incorporate specialised tools, such as PDDL solvers, in embodied agents to promote valid | 2308.06391#12 | 2308.06391#14 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#14 | Dynamic Planning with a LLM | Despite these promising results, numerous topics and unresolved issues remain open for future investigation. Key among these is devising strategies to encode the world model and belief, currently handled symbolically, and managing uncertain observations (particularly from an image model) along with propagating any uncertainty to the planner and Action Selector. We intentionally kept the Action Selector simple for our experiments, but future work may also explore different strategies to encourage self-reflection within the agent loop. For instance, if all plans prove invalid, beliefs may be updated, or it might indicate an incorrect domain definition. Such instances may necessitate agents to interact with an instructor who can provide insights about action pre-conditions and effects. This direction could lead us from a static domain file towards an agent truly adaptable to new environments, fostering continual learning and adaptation. # Acknowledgements This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) at the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences, and by the UKRI-funded TAS Governance Node (grant number EP/V026607/1). # References | 2308.06391#13 | 2308.06391#15 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#15 | Dynamic Planning with a LLM | Language models are few-shot learners. In Ad- vances in Neural Information Processing Systems 33: Annual Conference on Neural Information Process- ing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, and He He. 2022. Meta-learning via language model in-context tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 719â 730, Dublin, Ireland. Association for Computational Lin- guistics. Yongchao Chen, Jacob Arkin, Yang Zhang, Nicholas A. Roy, and Chuchu Fan. 2023. | 2308.06391#14 | 2308.06391#16 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#16 | Dynamic Planning with a LLM | Autotamp: Autoregres- sive task and motion planning with llms as translators and checkers. ArXiv, abs/2306.06531. Richard E. Fikes and Nils J. Nilsson. 1971. Strips: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3):189â 208. Caelan Reed Garrett, Tomas Lozano-Perez, and Leslie Pack Kaelbling. 2018. Pddlstream: Integrat- ing symbolic planners and blackbox samplers via optimistic adaptive planning. In International Con- ference on Automated Planning and Scheduling. Shibo Hao, Yilan Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023. Reasoning with language model is planning with world model. ArXiv, abs/2305.14992. Jörg Hoffmann and Bernhard Nebel. 2001. | 2308.06391#15 | 2308.06391#17 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#17 | Dynamic Planning with a LLM | The FF plan- ning system: Fast plan generation through heuristic search. Journal of Artificial Intelligence Research, 14:253â 302. Or Honovich, Uri Shaham, Samuel R. Bowman, and Omer Levy. 2022. Instruction induction: From few examples to natural language task descriptions. ArXiv, abs/2205.10782. Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Jake Zhao, and Hang Zhao. 2023. Chatdb: Augmenting llms with databases as their symbolic memory. ArXiv, abs/2306.03901. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Wenliang Dai, Andrea Madotto, and Pascale Fung. 2022. | 2308.06391#16 | 2308.06391#18 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#18 | Dynamic Planning with a LLM | Survey of hallucination in natural language generation. ACM Computing Surveys, 55:1 â 38. Nir Lipovetzky, Miquel Ramirez, Christian Muise, and Hector Geffner. 2014. Width and inference based planners: Siw, bfs (f), and probe. Proceedings of the 8th International Planning Competition (IPC-2014), page 43. B. Liu, Yuqian Jiang, Xiaohan Zhang, Qian Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. 2023. Llm+p: Empowering large language models with op- timal planning proficiency. ArXiv, abs/2304.11477. | 2308.06391#17 | 2308.06391#19 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#19 | Dynamic Planning with a LLM | Drew McDermott. 2000. The 1998 ai planning systems competition. AI Magazine, 21(2):35â 55. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettle- moyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? In Confer- ence on Empirical Methods in Natural Language Processing. OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Computation and Language (cs.CL); Artificial Intelligence (cs.AI). Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2023. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334. Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Lidén, Zhou Yu, Weizhu Chen, and Jianfeng Gao. 2023. | 2308.06391#18 | 2308.06391#20 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#20 | Dynamic Planning with a LLM | Check your facts and try again: Improving large language models with external knowledge and automated feed- back. ArXiv, abs/2302.12813. Miquel Ramirez, Nir Lipovetzky, and Christian Muise. 2015. Lightweight Automated Planning ToolKiT. http://lapkt.org/. Accessed: 2020. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. | 2308.06391#19 | 2308.06391#21 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#21 | Dynamic Planning with a LLM | Toolformer: Language models can teach themselves to use tools. ArXiv, abs/2302.04761. Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: An autonomous agent with dy- namic memory and self-reflection. arXiv preprint arXiv:2303.11366. Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew J. Hausknecht. 2020. | 2308.06391#20 | 2308.06391#22 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#22 | Dynamic Planning with a LLM | Alfworld: Aligning text and em- bodied environments for interactive learning. CoRR, abs/2010.03768. Significant-Gravitas. 2023. An experimental open- source attempt to make gpt-4 fully autonomous. https://github.com/significant-gravitas/ auto-gpt. Accessed: 2023-06-09. Tom Silver, Varun Hariprasad, Reece S Shuttle- worth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. 2022. | 2308.06391#21 | 2308.06391#23 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#23 | Dynamic Planning with a LLM | Pddl planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop. Dà dac Surà s, Sachit Menon, and Carl Vondrick. 2023. Vipergpt: Visual inference via python execution for reasoning. ArXiv, abs/2303.08128. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Man- dlekar, Chaowei Xiao, Yuke Zhu, Linxi (Jim) Fan, and Anima Anandkumar. 2023a. | 2308.06391#22 | 2308.06391#24 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#24 | Dynamic Planning with a LLM | Voyager: An open- ended embodied agent with large language models. ArXiv, abs/2305.16291. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny Zhou. 2023b. Self- consistency improves chain of thought reasoning in language models. In International Conference on Learning Representations (ICLR). Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. | 2308.06391#23 | 2308.06391#25 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#25 | Dynamic Planning with a LLM | Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR). Håkan LS Younes and Michael L Littman. 2004. PPDDL1.0: An extension to PDDL for expressing planning domains with probabilistic effects. Techn. Rep. CMU-CS-04-162, 2:99.
Model                    SR    EL
LLM-DP (n=3)             0.96  13.16
LLM-DP (n=3) - fallback  0.92  12.80
LLM-DP (n=5)             0.96  12.54
LLM-DP (n=5) - fallback  0.94  12.24
Table 2: We compare the average Success Rate (SR) and average Episode Length (EL) for different sampling sizes n and with or without a fallback to random sampling. The random sampling fallback affects the success rate, as the LLM sampler can more often sample n world states which are already satisfied. However, as n increases, it becomes more likely for the sampling procedure to find at least one plan, and therefore the SR increases when no fallback (- fallback) is used. | 2308.06391#24 | 2308.06391#26 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#26 | Dynamic Planning with a LLM | # A Prompts and Few-shot details See Table 3 and Table 4 for the LLM-DP prompts used. # B ReAct # B.1 Reproduction with Chat Model We slightly modify the "system" prompt of the original ReAct (see Table 5) to guide the model away from its conversational tendencies. gpt-3.5-turbo apologises significantly more than the text-davinci-002 model, and we found that it would often get stuck in loops of apologising. We also modify the code so that we replace all generated instances of "in" and "on" with "in/on" if the model did not generate it correctly, since Alfworld expects "in/on" but gpt-3.5-turbo tends to generate only the correct preposition. Without these changes, ReAct would be significantly worse than our reported metric. # C LLM-DP # C.1 Generated Goal Examples See Table 6 for examples of generated goals, both valid and invalid. # C.2 Varying n See Table 2 for results when varying n and the fallback strategy. Fallback means that when no plans are sampled successfully through the LLM, LLM-DP re-samples n plans randomly. | 2308.06391#25 | 2308.06391#27 | 2308.06391 | [
"2303.11366"
]
|
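The preposition fix described above can be done with a single regular-expression substitution; a sketch (the actual reproduction code may implement this differently):

```python
import re

def normalise_action(action: str) -> str:
    """Rewrite a bare 'in' or 'on' as the 'in/on' token Alfworld expects."""
    return re.sub(r"(?<!/)\b(in|on)\b(?!/)", "in/on", action)

# e.g. normalise_action("put plate 1 in microwave 1") -> "put plate 1 in/on microwave 1"
```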
2308.06391#27 | Dynamic Planning with a LLM | (define (domain alfred)
  (:predicates
    (isReceptacle ?o - object) ; true if the object is a receptacle
    (atReceptacleLocation ?r - object) ; true if the robot is at the receptacle location
    (inReceptacle ?o - object ?r - object) ; true if object ?o is in receptacle ?r
    (openable ?r - object) ; true if a receptacle is openable
    (opened ?r - object) ; true if a receptacle is opened
    (isLight ?o - object) ; true if an object is light source
    (examined ?o - object ?l - object) ; whether the object has been looked at with light
    (holds ?o - object) ; object ?o is held by robot
    (isClean ?o - object) ; true if the object has been cleaned in sink
    (isHot ?o - object) ; true if the object has been heated up
    (isCool ?o - object) ; true if the object has been cooled
    (isSink ?o - object) ; true if the object is a sink
    (isMicrowave ?o - object) ; true if the object is a microwave
    (isFridge ?o - object) ; true if the object is a fridge
  ))
Table 3: System Prompt used by gpt-3.5-turbo for generating the :goal in LLM-DP | 2308.06391#26 | 2308.06391#28 | 2308.06391 | [
"2303.11366"
]
|
2308.06391#28 | Dynamic Planning with a LLM | Your task is to: put a clean plate in microwave.
(:goal (exists (?t - plate ?r - microwave) (and (inReceptacle ?t ?r) (isClean ?t) )))
Your task is to: examine an alarmclock with the desklamp
(:goal (exists (?t - alarmclock ?l - desklamp) (and (examined ?t ?l) (holds ?t) )))
Your task is to: put two cellphone in bed
(:goal (exists (?t1 - cellphone ?t2 - cellphone ?r - bed) (and (inReceptacle ?t1 ?r) (inReceptacle ?t2 ?r) (not (= ?t1 ?t2)) )))
Table 4: Fixed Few-shot examples used by gpt-3.5-turbo for generating the :goal in LLM-DP | 2308.06391#27 | 2308.06391 | [
"2303.11366"
]
|
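For context, assembling these fixed examples into a chat prompt for goal generation might look like the following sketch (a hypothetical helper: the system prompt is the Table 3 text and the pairs are the task/goal examples of Table 4):

```python
def build_goal_prompt(system_prompt, few_shot_pairs, task):
    """Build the chat messages for PDDL :goal generation (illustrative)."""
    messages = [{"role": "system", "content": system_prompt}]
    for task_text, goal_pddl in few_shot_pairs:
        messages.append({"role": "user", "content": task_text})
        messages.append({"role": "assistant", "content": goal_pddl})
    messages.append({"role": "user", "content": f"Your task is to: {task}"})
    return messages   # sent to gpt-3.5-turbo-0613 with temperature 0
```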