TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
• The first one, named the One-step Agent (TPTU-OA), adopts a global perspective to interpret the original problem, breaking it down into a sequence of sub-tasks in a single pass. This strategy fully harnesses the model's comprehensive understanding capabilities to map out the problem-solving steps for all sub-tasks at once. It underscores the significance of holistic understanding and planning of the overall task, although it may lack flexibility when dealing with individual sub-tasks.
• The second type, referred to as the Sequential Agent (TPTU-SA), emphasizes tackling the current sub-task at hand. Upon successfully resolving the ongoing sub-task, this agent requests the LLMs to provide the succeeding sub-task. This approach enables the model to maintain a clear and concentrated focus throughout the problem-solving journey, tackling issues incrementally. Such a methodology allows for continuous feedback and progress within the confines of addressing a broader problem.

These two distinct agent models represent two disparate problem-solving strategies: one-step and sequential resolution.⁴ In our subsequent experiments, we aim to understand their respective strengths and weaknesses and how they can be best utilized to leverage the capabilities of LLMs in real-world problem-solving scenarios.

# 3 Evaluation

We instantiate the proposed LLM-based AI agent framework (TPTU-OA and TPTU-SA) with different LLMs and evaluate their performance on typical tasks.

⁴One can also combine the two strategies to design a hierarchical agent, but this is beyond the scope of this paper.
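The control flow of the two strategies can be sketched in simplified form as follows. The `llm` callable is a hypothetical stand-in for a chat-completion API (here a canned stub so the loops run deterministically); it is our own illustration, not the paper's implementation:

```python
def one_step_agent(problem, llm):
    """TPTU-OA sketch: plan all subtasks in a single LLM call."""
    plan = llm(f"Decompose into subtasks, one per line: {problem}")
    return plan.splitlines()

def sequential_agent(problem, llm, max_steps=10):
    """TPTU-SA sketch: request one subtask at a time, feeding back results."""
    subtasks, done = [], []
    for _ in range(max_steps):
        nxt = llm(f"Problem: {problem}\nDone so far: {done}\nNext subtask (or DONE):")
        if nxt.strip() == "DONE":
            break
        subtasks.append(nxt)
        done.append(f"{nxt} -> resolved")  # a real agent would execute the tool here
    return subtasks

# Canned stub standing in for a real LLM.
class StubLLM:
    def __init__(self, replies):
        self.replies = iter(replies)
    def __call__(self, prompt):
        return next(self.replies)

oa = one_step_agent("budget question",
                    StubLLM(["SQL generator: find X\nPython generator: 100*X"]))
sa = sequential_agent("budget question",
                      StubLLM(["SQL generator: find X",
                               "Python generator: 100*X",
                               "DONE"]))
```

Note how TPTU-OA commits to the whole plan in one call, while TPTU-SA can adapt after every step, which is the trade-off the experiments below examine.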
[Figure 3(a), One-step Agent (TPTU-OA): for the question "How much budget is required to provide a $100 incentive for each colleague who has worked for five years?", the one-step plan is (1) SQL generator: "Figure out how many colleagues have worked for five years from the database; take it as X." and (2) Python generator: "Calculate the value of 100*X with a calculator."
Figure 3(b), Sequential Agent (TPTU-SA): the same question is handled first by Sequential Plan 1 (SQL generator: "Figure out how many colleagues have worked for five years from the database; take it as X.") and then by Sequential Plan 2 (Python generator: "Calculate the value of 100*X with a calculator.").]
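The two sub-tasks in Figure 3 can be executed concretely. The sketch below runs both against a toy SQLite database; the table name, columns, and sample rows are our own illustrative assumptions, not the paper's benchmark data:

```python
import sqlite3

# Toy database standing in for the company HR database assumed by the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, years_of_service INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ann", 7), ("Bob", 3), ("Cara", 6), ("Dan", 12)])

# Sub-task 1 (SQL generator): count colleagues with over five years of service.
(x,) = conn.execute(
    "SELECT COUNT(*) FROM employees WHERE years_of_service > 5").fetchone()

# Sub-task 2 (Python generator): total budget at $100 per eligible colleague.
budget = 100 * x
print(budget)  # 3 eligible employees -> 300
```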
Figure 3: The workflows of the One-step Agent and the Sequential Agent, designed to assess the Task Planning and Tool Usage abilities of LLMs.

# 3.1 Preparations

Before beginning our evaluation, we first outline the preparations, giving detailed descriptions of the datasets, the available tools, and the evaluated large language models.

# 3.1.1 Datasets

We first clarify the motivations behind our choice of tools for evaluation. The selection was guided by two primary factors: the number of tools to be evaluated and the specific tools to be included.

First, regarding the number of tools, our proposed evaluation framework is extensible: it can incorporate any number of tools as pluggable components to be managed by the LLM-based AI agents. Looking at current work on tool-augmented LLMs, such as T-Bench [16] and ToolBench [17], only a handful of tools are launched and executed in a single scenario. Meanwhile, API-Bank [18] typically dispatches only one API tool in a single scenario and awaits its response, and APIBench [19] and ToolAlpaca [20] do not execute a tool response at all. Hence, for the sake of simplicity and focus, we primarily assess two tools (which can be called multiple times) within a single scenario.

Second, we need to decide which specific tools to use for evaluation. Consider a real-world scenario where we pose the question:
"How much budget is required to offer a $100 incentive to each employee who has been with the company for over five years?" To answer this, we first need to retrieve the relevant data from a database, typically using SQL, to find the number of eligible employees. Then, we need to perform a mathematical calculation to estimate the total budget. Such scenarios are common in daily life, where the formulation and resolution of a question often involve SQL and mathematical tools. Recognizing the importance of these tools, we have chosen to focus our evaluation on SQL and Python generators, which represent the capabilities of database querying and mathematical computation, respectively. To this end, we have prepared 120 question-answer pairs that vary in complexity. These pairs provide a rigorous assessment of the LLM-based AI agents in understanding, generating, and
utilizing these essential tools. For further information on these queries and their corresponding demonstrations, please refer to Appendix A.

# 3.1.2 Tools

We have defined a total of 12 available tools from which the LLM-based AI agents can select during evaluation. They are defined as follows:

• SQL generator: Given an input question and a database, create a syntactically correct SQLite query statement.

• Python generator: Given an input question and some information, generate syntactically correct Python code.
• Weather query tool: Given a location, output the current real-time weather at that location.

• Image generator: Given a text description, generate a related image.

• Text extractor: Given a link to an image, extract the corresponding text and its position coordinates.

• Translator: Given a piece of text, translate it into other languages.

• Bing Searcher: Given a piece of text, conduct a search with the Bing browser and return the content.

• Shell generator: Given an input question and some information, generate syntactically correct Shell code.
• Java generator: Given an input question and some information, generate syntactically correct Java code.

• Wikipedia searcher: Given a piece of text, conduct a search on Wikipedia and return the content.

• Office software: Given a text description, automatically generate a corresponding long document, spreadsheet, or PPT.

• Movie player: Given a movie name, automatically play the corresponding movie resources.

# 3.1.3 LLMs

The LLMs evaluated in this paper are listed in Table 2 and elaborated as follows:
• The GPT series developed by OpenAI boasts powerful language models with vast numbers of parameters, enabling them to tackle intricate problems efficiently. This paper evaluates ChatGPT, which balances performance with cost (the number of OpenAI API calls).

• Claude, developed by Anthropic, is committed to maintaining honesty and ensuring user safety. With its impressive size, Claude ranks among the largest language models globally and poses a formidable challenge to ChatGPT as a strong competitor.
• InternLM, a sophisticated language model developed by Shanghai AI Lab, offers multi-round dialogue capability and an impressive ability to comprehend very long text. It is meticulously designed to cater to the nuances of the Chinese language, enabling it to comprehensively understand and effectively process Chinese text. Here, we adopt the version with 120 billion parameters.

• Ziya is an expansive and robust pre-trained model developed by IDEA, derived from LLaMA with 13 billion parameters. It exhibits a wide range of capabilities, including translation, programming, and mathematical calculation. Notably, it is a bilingual LLM, able to effectively process and comprehend both Chinese and English text.
• ChatGLM, developed by Tsinghua University, is an open-source dialogue language model that supports bilingual Q&A in Chinese and English, with a particular focus on Chinese optimization. Built on the General Language Model (GLM) architecture and utilizing model quantization technology, ChatGLM can be easily deployed on consumer-grade graphics cards, enabling local use.

• Chinese-Alpaca-Plus extends LLaMA's existing vocabulary from Meta AI (formerly Facebook AI Research Laboratory) with an additional 20,000 Chinese tokens. In this paper, we use the model with 33 billion parameters. Its training text has been expanded to 120 GB, and its fine-tuning instruction data has been increased to 4.3M examples.

Table 2: The LLMs evaluated in this paper.

| Organization | Model Name | Model Parameters |
| --- | --- | --- |
| OpenAI | ChatGPT [21] | 200B |
| Anthropic | Claude [22] | >52B |
| Shanghai AI Lab | InternLM | 120B |
| IDEA | Ziya-13B | 13B |
| Tsinghua University | ChatGLM-130B [23] | 130B |
| - | Chinese-Alpaca-Plus-33B [24, 25] | 33B |

# 3.2 Evaluation on Task Planning Ability

In this section, to evaluate the planning capabilities of the LLM-based AI agents, we structure the evaluations as follows. For TPTU-OA, we begin by examining the agents' ability to plan the order of tool use. This is followed by an evaluation of the agents' capacity to plan not only the sequence of tools but also the corresponding subtask descriptions. Subsequently, we conduct a specialized planning evaluation in which the agents must generate multiple sequences of key-value pairs of the form {tool: subtask description} when decomposing complex problems. Moreover, we expand the toolset with additional, unrelated tools to further challenge and reassess the planning ability of the LLM-based AI agents. For TPTU-SA, we follow the regime in which the agent generates multiple sequences of key-value pairs of the form {tool: subtask description} for evaluation.

# 3.2.1 TPTU-OA: Tool Order Planning
Here, we utilize two kinds of tools for problem-solving: the SQL generator, which retrieves data from databases, and the Python generator, adept at addressing mathematical questions. To validate the capacity of the LLM-based AI agents to plan the tool order strategically, we designed the prompt shown in Figure 8 of Appendix B. This design is motivated by the goal of assessing the ability of LLM-based AI agents to understand complex problems and subsequently decompose them into a sequence of simpler tasks executed by appropriately selected tools. Specifically, we require the LLM-based AI agent to follow our instructions, select tools from our pre-defined toolset with detailed function descriptions, conform strictly to the given format, and learn from the provided demonstrations. Upon feeding these prompts into the LLM-based AI agents under evaluation, we obtained the accuracy rates for tool planning shown in Table 3.

Table 3: The evaluation results for the planning of tool order generation.

| Model | Accuracy |
| --- | --- |
| ChatGPT | 100% |
| Claude | 100% |
| ChatGLM | 45% |
| Chinese-Alpaca-Plus | 20% |
| Ziya | 45% |
| InternLM | 80% |

The results of our experiments indicate that models, notably Ziya and ChatGLM, frequently grapple with generating lists in the correct format. For other models, the predominant challenges lie in
generating tools in the correct sequence or in the occasional omission of necessary tools. Nonetheless, the issue of parsing list formats is generally negligible. These findings suggest that most LLM-based AI agents possess a fundamental capability to analyze the tool needs of a given problem and understand its task requirements. To further explore whether these LLM-based AI agents can effectively break down the original problem into sub-tasks, we proceed to the following section.

# 3.2.2 TPTU-OA: Tool Order Planning and Subtask Description Generation

Simply planning the order of tool usage is not sufficient to fully address a problem. To truly solve it, we need to provide a guide or instructions for the usage of each tool, that is, a decomposed subtask description. Therefore, we can decompose the original complex problem into two separate sequences: one representing the order in which the tools are utilized, and the other corresponding to the subtask descriptions that each tool in the tool sequence aims to resolve. A problem is only truly solved when both the tool and subtask description sequences have been successfully planned. To verify whether LLM-based AI agents truly have the ability to solve complex problems, we designed a new prompt, shown in Figure 9 of Appendix B. The main improvement is to plan the corresponding subtask description for each tool after the tool planning is completed.
Table 4: The evaluation results for the planning of tool order and subtask description generation.

| Model | Accuracy |
| --- | --- |
| ChatGPT | 55% |
| Claude | 15% |
| ChatGLM | 10% |
| Chinese-Alpaca-Plus | 0% |
| Ziya | 10% |
| InternLM | 45% |

After feeding the prompt to these LLM-based AI agents, we obtained the results shown in Table 4. Although generating tool sequences and their corresponding subtask descriptions might be an effective way of problem-solving, there is a significant decrease in accuracy for all LLMs, as can be seen. We hypothesize a few potential drawbacks of this method:
1. Difficulty in Error Tracking and Debugging. Generating the complete tool and subtask sequences may make it more challenging to track and debug errors. If an error arises within the sequence, it might require total regeneration instead of a simple modification or repair of the erroneous part.

2. Tool-Subtask Pairing Issue. If all tool sequences and subtask descriptions are generated independently, there is an inherent risk of misalignment between the tools and their corresponding subtasks. This could lead to improper pairing, which, in turn, could result in a flawed or ineffective solution that fails to appropriately resolve the given problem.
3. Lack of Flexibility. The approach may lack flexibility when facing complex problems that require adjustments to the tool or subtask sequence.

4. Dependency on Global Information. Generating the entire tool and subtask sequences requires a global understanding and planning of the entire problem. However, in some instances, certain parts of the problem might not be clear at the early stages of problem-solving, which poses challenges within this framework.

# 3.2.3 TPTU-OA: The Planning of Tool-Subtask Pair

To mitigate the aforementioned issues, we propose a novel approach to foster flexible problem-solving with the LLM-based AI agent. We prompt the agent to generate multiple sequences, each consisting of a key-value pair in the format {tool: subtask description} that associates a tool with its respective subtask description. This allows us to plan the tool choice and subtask simultaneously, without the risk of improper matching. Moreover, it offers the flexibility to update the planned sequences in real time based on evolving problem feedback, enhancing adaptability and efficiency when addressing complex tasks.
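A practical benefit of the strict {tool: subtask description} format is that a plan becomes machine-checkable: each pair can be parsed and validated against the toolset before anything is executed. The following sketch assumes one JSON object per line, which is our own illustrative convention rather than the paper's exact prompt format:

```python
import json

# Hypothetical toolset the agent is allowed to plan with.
KNOWN_TOOLS = {"SQL generator", "Python generator"}

def parse_plan(llm_output):
    """Parse lines of {tool: subtask} pairs and reject malformed
    entries or unknown tools, mirroring the strict-format requirement."""
    plan = []
    for line in llm_output.strip().splitlines():
        pair = json.loads(line)          # raises on malformed output
        (tool, subtask), = pair.items()  # exactly one key-value pair per line
        if tool not in KNOWN_TOOLS:
            raise ValueError(f"unknown tool: {tool}")
        plan.append((tool, subtask))
    return plan

raw = (
    '{"SQL generator": "Count colleagues with over five years of service as X"}\n'
    '{"Python generator": "Compute 100 * X"}'
)
plan = parse_plan(raw)
```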
With this consideration, we have designed a unique prompt that encourages this advanced problem-solving strategy; the specifics of the prompt design are given in Figure 10 of Appendix B. The key improvement in this prompt is its directive for the LLM-based AI agents to adhere stringently to the predefined dictionary format. To facilitate this, we offer several demonstrations in the desired format as references for the language model to follow.

Table 5: The evaluation results for the planning of Tool-Subtask pair.

| Model | Accuracy |
| --- | --- |
| ChatGPT | 75% |
| Claude | 90% |
| ChatGLM | 0% |
| Chinese-Alpaca-Plus | 5% |
| Ziya | 20% |
| InternLM | 55% |

After feeding the prompt to these LLM-based AI agents, we obtained the results shown in Table 5. Comparing the results in Tables 4 and 5, we observe a marked improvement of 52.9% when the tool-subtask pairs are generated in a unified format, compared with the separate generation of tools and subtasks. This significant performance enhancement can likely be attributed to the close coupling between tools and their associated subtasks in the unified generation strategy. When tools and subtasks are generated separately, there is a potential disconnect or lack of coherence between the two, which can lead to less accurate or efficient solutions. In contrast, generating tool-subtask pairs together ensures that each tool is directly tied to its relevant subtask, leading to a more coordinated and effective problem-solving approach. This likely explains the observed increase in overall performance.

# 3.2.4 TPTU-OA: The Planning of Tool-Subtask Pair with Unrelated Tools
So far, our analysis and evaluation have focused primarily on the LLM-based AI agents' proficiency in planning with specific tools. However, we are also interested in how they perform when faced with many irrelevant or similar tools. Therefore, for a more comprehensive assessment, we expanded the prompt of Figure 10 to include an additional ten unrelated tools, as illustrated in Figure 11 of Appendix B.

Table 6: The evaluation results for the planning of Tool-Subtask pair with unrelated tools.

| Model | Accuracy |
| --- | --- |
| ChatGPT | 70% |
| Claude | 90% |
| ChatGLM | 0% |
| Chinese-Alpaca-Plus | 5% |
| Ziya | 10% |
| InternLM | 50% |

After feeding the prompt to these LLM-based AI agents, we obtained the results shown in Table 6. The results from our expanded evaluation demonstrate that even when presented with irrelevant or similar tools and descriptions, LLM-based AI agents consistently avoid selecting these unrelated tools (i.e., the accuracy remained unchanged or exhibited only a marginal decrease compared with Table 5). This outcome indicates the effectiveness of our designed prompt, which successfully guides the LLM-based agents to identify the appropriate tool sequence for complex problem decomposition. This observation reinforces the notion that a well-structured and informative prompt can efficiently guide AI agents to grasp the core essence of the problem, enabling them to sift through irrelevant information and focus on key tasks. This successful discrimination against unrelated tools also points towards the models' ability to understand the specific context of a problem and select the appropriate tools, enhancing the overall problem-solving process.

# 3.2.5 TPTU-SA: The Planning of Tool-Subtask Pair Generation

Having identified the drawbacks of first generating a list of tools and then generating corresponding subtask descriptions, we focus subsequent tests on the generation of tool-subtask pairs.
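For the sequential agent, tool-subtask pairs are produced one at a time, with previously planned pairs fed back into the prompt. The loop below is a minimal sketch of that recursion under our own assumptions: the `llm` callable (stubbed here) returns one JSON pair per call, or `DONE` when the problem is fully decomposed:

```python
import json

def sequential_pair_planner(problem, llm, max_steps=8):
    """Ask for one {tool: subtask} JSON pair per call until DONE,
    feeding the previously planned pairs back into the prompt."""
    pairs = []
    for _ in range(max_steps):
        reply = llm(f"Problem: {problem}\nPlanned so far: {pairs}\n"
                    f"Next {{tool: subtask}} pair, or DONE:")
        if reply.strip() == "DONE":
            break
        (tool, subtask), = json.loads(reply).items()
        pairs.append((tool, subtask))
    return pairs

# Canned replies standing in for a real LLM.
replies = iter([
    '{"SQL generator": "Count colleagues with over five years of service as X"}',
    '{"Python generator": "Compute 100 * X"}',
    "DONE",
])
pairs = sequential_pair_planner("budget question", lambda prompt: next(replies))
```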
Consequently, in this section, we evaluate the capability of TPTU-SA to generate these tool-subtask pairs. To recursively generate tool-subtask pairs, we designed the prompts illustrated in Figure 12 of Appendix B.

Table 7: The evaluation results for the planning of Tool-Subtask pairs with the sequential agent.

| Model | Accuracy |
| --- | --- |
| ChatGPT | 80% |
| Claude | 100% |
| ChatGLM | 0% |
| Chinese-Alpaca-Plus | 0% |
| Ziya | 10% |
| InternLM | 65% |

The evaluation results are shown in Table 7. Compared with the results in Table 5, TPTU-SA generally performs better than TPTU-OA, especially for high-performing LLMs (e.g., ChatGPT, Claude, and InternLM). We propose the following potential reasons for this observation:

1. Sequentiality Mimics Human Problem-Solving: In real-world scenarios, humans tend to solve complex problems by breaking them down into smaller, manageable subtasks, which are often handled sequentially. Sequential agents are designed to mimic this step-by-step approach, which might inherently suit complex problem-solving better.
2. Richer Contextual Understanding: Sequential agents are exposed to the outcome of each previous subtask before moving on to the next. This iterative process can facilitate a richer understanding of the problem context, enabling more accurate task planning and tool usage.

3. Flexibility in Task Management: Compared with one-step agents, sequential agents may have more flexibility in managing tasks. They have the opportunity to correct errors or adjust their strategy after each step, which can lead to improved overall performance.

4. Improved Learning from History: The sequential process provides a history of actions and results that can be beneficial for learning. The agent can use this history to make better predictions about which tool to use next or which subtask to tackle, leading to more accurate and efficient problem-solving.

These points suggest that the structure and operation of sequential agents inherently confer certain advantages in complex problem-solving scenarios, leading to their superior performance.

# 3.3 Evaluation on Tool Usage Ability

Before evaluating the end-to-end multi-tool usage ability of LLM-based AI agents, we first evaluate the effectiveness of single-tool usage for SQL generation and mathematical code generation. Subsequently, to assess the end-to-end performance of LLMs across various tools, the two types of agents (TPTU-OA and TPTU-SA) were developed, and several LLMs were tested under these agents. The role of the agents is to break complex questions down into simpler sub-questions and plan corresponding tools to solve them, based on the available toolset and the corresponding tool descriptions.

# 3.3.1 The Effectiveness of Single-Tool Usage

Our aim is to systematically assess how effectively these models can use various tools, focusing on their proficiency with SQL and other coding languages.
The Effectiveness of Simple SQL Creation. Using the schemas provided in Tables 12 and 13, we construct questions similar to those in Table 14, and refer readers to Appendix A. These questions are posed to various LLMs using our specifically designed prompts in Appendix B. Following the tailored prompts, the LLMs are evaluated based on their responses to the presented queries. The results of this comprehensive assessment are compiled in Table 8, verifying the capabilities of each LLM in handling varying simple single-table SQL queries and providing a basis for comparison and analysis.
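Correctness of a generated SQL query is most robustly judged by executing it and comparing result sets, rather than comparing query strings. A small sketch of such an execution-based check, using a toy schema of our own invention (not the paper's benchmark schemas), might look like this:

```python
import sqlite3

def same_result(db, predicted_sql, gold_sql):
    """Mark a predicted query correct iff it returns the same rows
    as the reference query (string equality would be too strict)."""
    try:
        pred = db.execute(predicted_sql).fetchall()
    except sqlite3.Error:
        return False  # unparseable or invalid SQL counts as wrong
    return sorted(pred) == sorted(db.execute(gold_sql).fetchall())

# Toy single-table database for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, years_of_service INTEGER)")
db.executemany("INSERT INTO employees VALUES (?, ?)",
               [("Ann", 7), ("Bob", 3), ("Cara", 6)])

# Semantically equivalent queries pass even though the strings differ.
ok = same_result(db,
                 "SELECT name FROM employees WHERE years_of_service > 5",
                 "SELECT name FROM employees WHERE years_of_service >= 6")
# A typo in a column name is caught by the execution attempt.
bad = same_result(db, "SELECT nmae FROM employees", "SELECT name FROM employees")
```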
Table 8: The evaluation results for simple SQL questions.

| Model | Accuracy |
| --- | --- |
| ChatGPT | 90% |
| Claude | 100% |
| ChatGLM | 30% |
| Chinese-Alpaca-Plus | 20% |
| Ziya | 50% |
| InternLM | 90% |

The Effectiveness of Complex Nested SQL Creation. Using the schemas provided in Tables 15, 16, 17, and 18, we construct questions similar to those in Table 19, and refer readers to Appendix A. For complex nested SQL questions, to further verify the SQL tool creation capability of LLMs, we designed two types of prompts. One is the direct-guidance type, which explicitly informs the model that it needs to generate nested SQL query statements, as shown in Figure 14 of Appendix B. The other is based on the Chain-of-Thought (CoT) [26] approach, which leverages the model's ability to reason step by step to comprehend and craft SQL tools; this prompt is shown in Figure 15 of Appendix B. It guides the model to sequentially generate SQL query clauses based on the problem context, breaking the complex query-generation task down into smaller, manageable subtasks. This approach provides the model with a structured way to handle complex SQL tasks and showcases its capacity for incremental reasoning and problem-solving.

The design of these two types of prompts serves as the backbone of our evaluation for complex nested SQL questions. While the direct-guidance approach tests the model's raw ability to generate SQL queries when explicitly instructed, the CoT-based approach evaluates a more nuanced capability: the model's reasoning and problem-solving skills in a step-by-step manner. Both methods present unique challenges and offer valuable insights into the strengths and potential areas of improvement of the large language models' SQL tool generation ability. We explore these two dimensions based on our experimental evaluations shown in Table 9.
Table 9: The evaluation results for complex nested SQL questions.

| Model | Direct-based | CoT-based |
| --- | --- | --- |
| ChatGPT | 80% | 80% |
| Claude | 100% | 100% |
| Ziya | 50% | 40% |
| ChatGLM | 60% | 70% |
| Chinese-Alpaca-Plus | 0% | 0% |
| InternLM | 60% | 50% |

From the results in Table 9, it is clear that different models possess varying levels of proficiency in handling complex nested SQL tasks.
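As an illustration of the kind of query being tested, consider a hypothetical two-table schema of our own (not one of the paper's Tables 15-18), where answering the question requires a correlated subquery:

```python
import sqlite3

# Toy two-table schema; the question "which employees earn more than the
# average salary of their own department?" requires a nested subquery.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
CREATE TABLE employees (name TEXT, dept_id INTEGER, salary REAL);
INSERT INTO departments VALUES (1, 'Sales'), (2, 'R&D');
INSERT INTO employees VALUES
  ('Ann', 1, 9000), ('Bob', 1, 5000), ('Cara', 2, 7000), ('Dan', 2, 7000);
""")

nested_sql = """
SELECT name FROM employees AS e
WHERE salary > (SELECT AVG(salary) FROM employees
                WHERE dept_id = e.dept_id)
"""
rows = [r[0] for r in db.execute(nested_sql)]
```

A CoT-style prompt would lead a model toward this query clause by clause (first the inner average, then the outer filter), which is exactly the decomposition the direct-guidance prompt leaves implicit.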
Some models, like Claude, exhibit a robust capability in SQL generation regardless of whether the approach is direct or CoT-based, and most of the models demonstrate some SQL tool usage capability. Specifically, some models, such as ChatGLM, show a distinct preference for the CoT-based approach: their performance improves when problems are broken down into smaller, manageable sub-tasks. This suggests that these models may have a stronger ability in sequential problem-solving and benefit more from step-by-step guidance. Conversely, models like Ziya and InternLM show a drop in performance when tasks are guided in the CoT-based format. This might indicate challenges in managing dependencies between sub-tasks or in handling continuity in sequential problem-solving. Lastly, Chinese-Alpaca-Plus shows significant room for improvement in complex SQL generation tasks, showing that not all models are equally suited to advanced problem-solving involving nested SQL queries.

Overall, these findings underscore the importance of tailoring evaluation and training methodologies to the individual strengths and weaknesses of each model. By adopting this approach, we can better understand the performance variations across different models and provide targeted improvements to enhance their problem-solving abilities. Furthermore, this analysis highlights the potential of
LLM-based agents in real-world applications, and the need to push their boundaries through continued research and development.

The Effectiveness of Mathematical Code Creation. Following our evaluation of the LLMs' proficiency in creating complex SQL queries, we now shift our focus to another kind of tool creation: the generation of mathematical code. To the best of our knowledge, while large language models possess significant capabilities, they often fall short of providing highly accurate solutions to mathematical problems. Guiding these LLMs to generate mathematical code, and subsequently leveraging external tools to execute it and derive the solutions, can significantly enhance their ability to tackle mathematical challenges. In this section, we conduct a detailed evaluation of guiding LLMs to generate mathematical code, aiming to shed light on their true capability in this regard and the extent to which they can aid mathematical problem-solving. The prompt used to guide the LLMs is shown in Figure 16 of Appendix B.

Table 10: The evaluation results for mathematical questions.

| Model | Accuracy |
| --- | --- |
| ChatGPT | 90% |
| Claude | 85% |
| ChatGLM | 0% |
| Chinese-Alpaca-Plus | 55% |
| Ziya | 50% |
| InternLM | 95% |

The results in Table 10 indicate that the capabilities of LLM-based agents to generate mathematical code vary considerably. High-performing models like ChatGPT, Claude, and InternLM display excellent proficiency, suggesting a potent ability to solve complex mathematical tasks. Middle-tier models, such as Ziya, show moderate success, indicating potential for improvement with the right training and optimization. Surprisingly, Chinese-Alpaca-Plus demonstrated notable proficiency in mathematical tasks despite its poor performance in SQL generation, suggesting a possible inclination towards mathematical problems.
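The "generate code, then execute it externally" pattern can be sketched as follows. The restricted-`exec` harness below is our own minimal illustration, not the paper's executor; `exec` offers no real isolation, and a production system would sandbox untrusted model output far more carefully:

```python
import math

def run_math_code(code):
    """Execute LLM-generated Python in a namespace that exposes only
    `math`, and read the answer from a conventional `result` variable.
    NOTE: this is illustrative only; `exec` is not a security boundary.
    """
    namespace = {"__builtins__": {}, "math": math}
    exec(code, namespace)
    return namespace["result"]

# Code an LLM might emit for "what is the hypotenuse of a 3-4 right triangle?"
generated = "result = math.sqrt(3**2 + 4**2)"
answer = run_math_code(generated)
print(answer)  # 5.0
```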
In contrast, ChatGLM struggles significantly with mathematical code generation, underlining a potential weak spot in its capabilities and the need for focused improvement in this area. Overall, these results underscore the task-dependent nature of LLMs' capabilities and highlight the importance of recognizing each model's individual strengths and weaknesses for optimal guidance and enhanced problem-solving.

# 3.3.2 TPTU-OA and TPTU-SA: Tool Usage for Multiple Tools

We now use the one-step agent and sequential agent designed above to conduct an evaluation involving multiple tools.
Corresponding prompts for each agent type were crafted and are presented in Figures 17 and 18 of Appendix B, respectively. In this phase of the evaluation, the respective tools must be invoked automatically through code to produce results. Given that user-interface-based LLMs lack the capability to call external tools, we use only the following four API-based LLMs (ChatGPT, Ziya, Chinese-Alpaca-Plus, and InternLM) for this comprehensive evaluation of external tool usage ability.

Table 11: The evaluation results for the end-to-end ability with multiple tools.

| Model | TPTU-OA | TPTU-SA |
| --- | --- | --- |
| ChatGPT | 50% | 55% |
| Ziya | 0% | 0% |
| Chinese-Alpaca-Plus | 0% | 0% |
| InternLM | 15% | 20% |

With the agents described above, the final results are presented in Table 11. The evaluation results demonstrate varying levels of task planning and tool usage capabilities among the four API-based LLMs. In the TPTU-OA evaluation, ChatGPT achieved a performance rate of 50%, significantly outperforming the other models, with InternLM at 15%, while both Ziya and Chinese-Alpaca-Plus failed to complete any tasks successfully, scoring 0%. In the TPTU-SA evaluation,
an overall slight improvement was observed. ChatGPT maintained its leading position, with a slightly improved performance rate of 55%. InternLM also exhibited better performance, achieving a score of 20%, whereas Ziya and Chinese-Alpaca-Plus again failed to register any successful task completion. These results reflect a notable discrepancy in the performance of LLMs when it comes to using external tools. ChatGPT and InternLM have demonstrated some ability to navigate these tasks, but their performance rates suggest there is significant room for improvement.
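The difference between the two settings can be made concrete with a minimal control loop. The sketch below shows the sequential (TPTU-SA-style) pattern, in which the LLM is asked for only the next subtask after each tool result; all names and the mocked model are hypothetical, not the paper's actual implementation.

```python
def sequential_agent(question, llm_next_step, tools, max_steps=8):
    """Minimal sequential-agent loop: plan one subtask at a time.

    `llm_next_step(question, history)` stands in for an LLM call and must
    return either ("finish", final_answer) or (tool_name, tool_input).
    """
    history = []
    for _ in range(max_steps):
        action, payload = llm_next_step(question, history)
        if action == "finish":
            return payload
        result = tools[action](payload)            # invoke the chosen tool
        history.append((action, payload, result))  # fed back on the next call
    raise RuntimeError("agent did not terminate")

# Mocked components illustrate the control flow without a real model.
tools = {"calculator": lambda expr: eval(expr)}

def mock_llm(question, history):
    if not history:                       # first step: delegate to a tool
        return ("calculator", "7 * 6")
    return ("finish", history[-1][2])     # then report the tool's result

answer = sequential_agent("What is 7 times 6?", mock_llm, tools)
# answer == 42
```

A one-step (TPTU-OA-style) variant would instead request the full tool/subtask plan in a single call and then execute it; the loop above is what allows intermediate tool results to influence later planning.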
Ziya and Chinese-Alpaca-Plus's performance indicates a struggle to effectively utilize external tools in their current state. The differential performance between the TPTU-OA and TPTU-SA evaluations hints at the possible impact of agent design on the LLMs' task execution ability. In particular, the performance increase under the sequential agent framework suggests that breaking down tasks into sequential steps might help LLM-based AI agents better utilize external tools. This insight could prove valuable in future improvements and developments of LLM-based AI agents. However, even with this approach, it is clear that LLM-based AI agents are far from perfect when it comes to effectively using external tools for complex tasks. This finding underlines the importance of further investigation and improvement in this domain.

# Insightful Observations

Upon closer observation of our experimental results, we have identified several phenomena that deserve further exploration. These findings serve to broaden our understanding of LLM-based agents' behavior and capabilities and provide essential insights that could shape future research in this field. In the following, we dissect these phenomena, as shown in Figures 4-7, casting light on the weaknesses of LLM-based agents in the context of task planning and tool usage.

1.
Misunderstanding Output Formats: LLMs frequently encounter difficulty when output is required in specific formats such as lists or dictionaries. One such example involves inconsistencies between the number of tools and the number of corresponding subtasks, leading to formatting issues that hinder the correct execution of tasks.

Query: How many more concerts has Jay Chou held than Li Ronghao? Is this number bigger than the square root of 10?
Tools: ["Python generator", "SQL generator"]
Subtasks: ["How many concerts did Jay Chou perform?", "How many concerts did Li Ronghao perform?", "How many more concerts did Jay Chou perform than Li Ronghao?", "Is the number bigger than the square root of 10?"]
Figure 4: Issue-1: Inconsistencies between the number of tools and corresponding subtasks.

2. Struggling to Grasp Task Requirements: LLMs might incorrectly decompose subproblems or apply unsuitable tools to carry them out. For example, an LLM might attempt to solve a purely mathematical problem by employing an SQL tool, or could confuse similar terms such as cube extraction and cube roots.

3. Endless Extensions: LLMs tend to overuse a particular tool, even in instances where a single use would suffice for the correct result. This issue can lead to extended and nonsensical planning, in which the same subtask is repeatedly solved.
4. Lack of Summary Skills: LLMs do not take into account the responses to subproblems, relying instead on their internalized knowledge to generate the final answer. This may lead to a scenario where the final response addresses only a portion of the original query.

By identifying and addressing these common issues, we stand a better chance of improving and refining LLMs, thereby unlocking their full potential.

Query: How many singers have the average number of albums of singers in Beijing? Give the square root of this number.
Tools: ["SQL generator", "SQL generator", "SQL generator"]
Subtasks: ["What is the average number of albums by singers in Beijing?", "How many singers have the average number of albums by singers in Beijing?", "What is the square root of this number?"]

Figure 5: Issue-2: Solving a purely mathematical problem by employing an SQL generator.

Query: Exclude the two birthplaces with the most singers, provide the number of singers from other birthplaces, and calculate the factorial of this number.
The Tool_Query for the first execution of the tool is: {{"SQL Generator": "Not the two birthplaces with the most singers"}}
The Tool_Query for the second execution of the tool is: {{"SQL Generator": "Exclude the two birthplaces with the most singers, provide the number of singers from other birthplaces"}}
The Tool_Query for the third execution of the tool is: {{"SQL Generator": "Exclude the two birthplaces with the most singers, provide the number of singers from other birthplaces, and calculate the factorial of this number"}}
Figure 6: Issue-3: Unnecessary repetition of subtasks.

Query: Please use SQL language to query who are the singers who have not been nominated in the Golden Melody Awards? Give their names.
Answer (InternLM): Jay Chou, Cui Jian

Figure 7: Issue-4: Answering questions using common sense instead of generating code.

# 4 Related Work

The remarkable capacity for the usage and creation of tools has facilitated the transcendence of our innate physical and cognitive constraints, thereby profoundly advancing the progress and prosperity of human civilization and society. The swift advancement of LLMs has rendered it feasible for machines to use and create tools much as humans do. The integration of specialized tools with LLMs has unlocked substantial potential in addressing intricate tasks. In this section, we offer a concise synopsis of the relevant research pertaining to tool learning based on LLMs.

# 4.1 Tool Usage

The initial advancements in tool learning have been constrained by the capabilities of artificial intelligence (AI) models [27]. Traditional deep learning approaches exhibit limitations in comprehending tool functionality and user intentions, as well as in common sense reasoning. Consequently, these limitations directly result in a notable decline in the stability and precision of tool
learning methodologies. Recently, the advent of LLMs has marked a pivotal juncture in the realm of tool learning. LLMs encompass a broad spectrum of common sense cognitive capabilities and exhibit remarkable proficiencies in natural language processing, reasoning, and interactive decision-making [28-32]. These attributes furnish indispensable prerequisites for LLMs to comprehend user intentions and effectively employ tools in tackling intricate tasks [33]. Simultaneously, the advancement of fine-tuning [34-38] and in-context learning [39, 40] technology has offered robust support to LLMs in addressing increasingly intricate challenges. In addition, tool usage can mitigate the inherent limitations of LLMs, encompassing the acquisition of up-to-date information about real-world events, refined mathematical computational abilities, and the mitigation of potential hallucinatory phenomena [41].

Within the realm of embodied intelligence [42-44], LLMs engage in direct interactions with tangible tools like robots in order to enhance their cognitive abilities, optimize work productivity, and expand functional capacities. LLMs possess the capability to automatically devise action steps based on user intentions, enabling the guidance of robots in the completion of tasks [45-53], or alternatively, to directly generate underlying code that can be executed by robots [54-58]. Palm-E [50] introduced a multimodal language model which seamlessly integrates sensor data into its framework, enabling efficient planning of robot actions and task completion. Code as Policies (CaP) [58] facilitates the transformation of natural language instructions into code fragments that can be directly compiled and executed on robots. As for Inner Monologue [48], the LLM incorporates diverse environmental feedback to construct inner monologues, thereby formulating effective robot control strategies.
Furthermore, LP-SLAM [45] proposes a simultaneous localization and mapping (SLAM) system empowered with language perception capabilities, exploiting the potential of ChatGPT. PromptCraft [57], on the other hand, devises a function library tailored to ChatGPT on the robot platform, streamlining the conversion of user intentions into executable tasks via the underlying backend API.

In addition to directly changing the real environment through interaction with tools in the physical world, LLMs can also utilize software tools such as search engines [59-67], mobile interfaces [68, 69], Microsoft Office [70, 71], calculators [72-74], deep models [19, 75-
79, 13, 80, 81] and other versatile APIs [82, 5, 83, 84, 20, 85] to enhance model performance, or complete complex workflows through flexible control of the software. Toolformer [5] employs a self-supervised methodology to fine-tune the language model, enabling it to acquire the ability to automatically invoke APIs. ART [86] leverages CoT [26] and in-context learning [81, 41] techniques to automatically generate multi-step reasoning processes for new tasks, while also selecting and utilizing the most appropriate available tool at each step. ASH [62] utilizes LLMs for hierarchical sequential decision-making to achieve web navigation tasks. WebGPT [66] and WebCPM [64] use web search to assist in question answering tasks. In addition, RCI [87] recursively criticizes and improves itself to execute computer tasks guided by natural language according to the prompting scheme. To achieve the analysis and processing of tables, TableGPT [71] employs a table encoder to transform tabular data into vector representations, which are then fed into an LLM for inference in combination with user queries.

# 4.2 Tool Creation

The usage of tools is contingent upon the accessibility of external tools. Recently, efforts have been made to employ LLMs as tool creators in order to generate tools that can be utilized for diverse requests [88-
95]. This development has consequently raised the demands placed on LLMs. These created tools are typically implemented as Python or SQL functions. LATM [88], for example, leverages the prowess of GPT-4 to create tools, and the usage of more cost-effective models has shown potential in exhibiting performance on par with larger models for these tool applications. EVAPORATE [94] involves the synthesis of multiple functions, which are subsequently utilized at scale to efficiently process documents and generate structured views.
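The pattern described here (an LLM writes a reusable function once, which is then applied to many later requests) can be sketched as follows. The function names and the mocked model response are illustrative assumptions, not taken from LATM or EVAPORATE.

```python
def make_tool(llm_generate, task_description):
    """Ask an LLM to write a reusable Python function for a task and
    compile it into a callable.

    `llm_generate` stands in for a real model call and is expected to
    return the source of a function named `tool`.
    """
    source = llm_generate(
        f"Write a Python function named tool that {task_description}."
    )
    namespace = {}
    exec(source, namespace)   # materialize the created tool
    return namespace["tool"]

# A mocked response stands in for a real (e.g. GPT-4) completion.
mock_llm = lambda prompt: (
    "def tool(numbers):\n"
    "    return sum(numbers) / len(numbers)\n"
)
average = make_tool(mock_llm, "returns the average of a list of numbers")
result = average([3, 4, 5])   # the created tool is reused like any function
# result == 4.0
```

The key design point is that tool creation pays its generation cost once: after compilation, the created function can serve many requests without further model calls.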
# 5 Conclusion

In this paper, we have introduced a structured framework specially designed for LLM-based AI Agents, with an emphasis on their abilities in task planning and tool usage. This framework, coupled with our design of two distinct types of agents for the inference process, allows for a comprehensive evaluation of the capabilities of current open-source LLMs, thereby yielding critical insights into their effectiveness. Furthermore, our research highlights the significant potential of LLMs in managing complex tasks, revealing the exciting prospects they hold for future research and development. As we continue to explore and improve upon these models, we move closer to unlocking their full potential in a wide range of real-world applications.
# Acknowledgements

This work was conducted collaboratively among the authors. Hangyu Mao and Rui Zhao led the project, formulating the central idea and laying out the framework for the primary literature review. Regarding the literature review phase, the surveys were conducted by various team members. Guoqing Du and Jingqing Ruan explored DNN-based Tool Scheduling by LLMs; Tianpeng Bao and Yihong Chen investigated Physical/Robot Tool Scheduling by LLMs; and Shiwei Shi and Zhiwei Xu handled the survey of API- or GUI-based Tool Scheduling by LLMs. Bin Zhang summarized these papers and synthesized an overarching summary. As for the evaluation phase, Yihong Chen, Tianpeng Bao, Jingqing Ruan, Guoqing Du, Zhiwei Xu, Shiwei Shi, and Bin Zhang performed the experiments and analyzed the data. Hangyu Mao assisted in the analysis of the experimental phenomena and offered constructive suggestions for improvements. Xingyu Zeng and Rui Zhao provided invaluable feedback and contributed to the direction of the research. All authors participated in the discussion. Regarding the manuscript phase, Hangyu Mao organized the overall chapters of the manuscript, mainly wrote the methodology part, and provided assistance in the other parts. Jingqing Ruan and Yihong Chen wrote the evaluation section. Bin Zhang wrote the summary of the literature review. Each author read and approved the final manuscript.

The authors would like to thank Feng Zhu, Kun Wang, Yuhang Ran, Mengying Xu, Pengfei Jia, and Shaobo Lin for their valuable feedback, discussion, and participation in this project.
# References

[1] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.

[2] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020.

[3] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, "Finetuned language models are zero-shot learners," arXiv preprint arXiv:2109.01652, 2021.

[4] OpenAI, "Gpt-4 technical report," 2023.

[5] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom, "Toolformer: Language models can teach themselves to use tools," arXiv preprint arXiv:2302.04761, 2023.

[6] N. R. Jennings, K. Sycara, and M. Wooldridge, "A roadmap of agent research and development," Autonomous Agents and Multi-Agent Systems, vol. 1, pp. 7-38, 1998.

[7] N. R. Jennings and M. Wooldridge, "Applying agent technology," Applied Artificial Intelligence: An International Journal, vol. 9, no. 4, pp. 357-369, 1995.

[8] S. Franklin and A. Graesser, "Is it an agent, or just a program?: A taxonomy for autonomous agents," in International Workshop on Agent Theories, Architectures, and Languages. Springer, 1996, pp. 21-35.

[9] C. Castelfranchi, "Modelling social action for AI agents," Artificial Intelligence, vol. 103, no. 1-2, pp. 157-182, 1998.

[10] J. Ferber and G. Weiss, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Addison-Wesley Reading, 1999, vol. 1.
[11] L. Panait and S. Luke, "Cooperative multi-agent learning: The state of the art," Autonomous Agents and Multi-Agent Systems, vol. 11, pp. 387-434, 2005.

[12] M. Pourreza and D. Rafiei, "Din-sql: Decomposed in-context learning of text-to-sql with self-correction," arXiv preprint arXiv:2304.11015, 2023.

[13] C. Wu, S. Yin, W. Qi, X. Wang, Z. Tang, and N. Duan, "Visual chatgpt: Talking, drawing and editing with visual foundation models," arXiv preprint arXiv:2303.04671, 2023.

[14] J. Gorniak, Y. Kim, S. Gwon, D. Wei, and N. W. Kim, "Vizability: Multimodal accessible data visualization with keyboard navigation and conversational interaction," arXiv preprint arXiv:2310.09611, 2023.

[15] I. Team, "Internlm: A multilingual language model with progressively enhanced capabilities," https://github.com/InternLM/InternLM, 2023.

[16] Q. Xu, F. Hong, B. Li, C. Hu, Z. Chen, and J. Zhang, "On the tool manipulation capability of open-source large language models," arXiv preprint arXiv:2305.16504, 2023.

[17] Y. Qin, S. Liang, Y. Ye, K. Zhu, L. Yan, Y. Lu, Y. Lin, X. Cong, X. Tang, B. Qian et al., "Toolllm: Facilitating large language models to master 16000+ real-world apis," arXiv preprint arXiv:2307.16789, 2023.

[18] M. Li, F. Song, B. Yu, H. Yu, Z. Li, F. Huang, and Y. Li, "Api-bank: A benchmark for tool-augmented llms," arXiv preprint arXiv:2304.08244, 2023.

[19] S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez, "Gorilla: Large language model connected with massive apis," arXiv preprint arXiv:2305.15334, 2023.

[20] Q. Tang, Z. Deng, H. Lin, X. Han, Q. Liang, and L. Sun, "
Toolalpaca: Generalized tool learning for language models with 3000 simulated cases," arXiv preprint arXiv:2306.05301, 2023.

[21] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., "Training language models to follow instructions with human feedback," Advances in Neural Information Processing Systems, vol. 35, pp. 27730-27744, 2022.

[22] Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon et al., "Constitutional ai: Harmlessness from ai feedback," arXiv preprint arXiv:2212.08073, 2022.

[23] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia et al., "Glm-130b: An open bilingual pre-trained model," arXiv preprint arXiv:2210.02414, 2022.

[24] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar et al., "Llama: Open and efficient foundation language models," arXiv preprint arXiv:2302.13971, 2023.

[25] Y. Cui, Z. Yang, and X. Yao, "Efficient and effective text encoding for chinese llama and alpaca," arXiv preprint arXiv:2304.08177, 2023.

[26] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, and D. Zhou, "Chain-of-thought prompting elicits reasoning in large language models," Neural Information Processing Systems, 2022.

[27] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill et al., "On the opportunities and risks of foundation models," arXiv preprint arXiv:2108.07258, 2021.

[28] M. Mosbach, T. Pimentel, S. Ravfogel, D. Klakow, and Y. Elazar, "Few-shot fine-tuning vs. in-context learning: A fair comparison and evaluation," arXiv preprint arXiv:2305.16938, 2023.

[29] J. Yang, H. Jin, R. Tang, X. Han, Q. Feng, H. Jiang, B. Yin, and X. Hu, "Harnessing the power of llms in practice: A survey on chatgpt and beyond," arXiv preprint arXiv:2304.13712, 2023.

[30] C. Zhang, C. Zhang, C. Li, Y. Qiao, S. Zheng, S. K. Dam, M. Zhang, J. U. Kim, S. T. Kim, J. Choi et al., "One small step for generative ai, one giant leap for agi: A complete survey on chatgpt in aigc era,"
arXiv preprint arXiv:2304.06488, 2023.

[31] F. Yu, H. Zhang, and B. Wang, "Nature language reasoning, a survey," arXiv preprint arXiv:2303.14725, 2023.

[32] Z. Wang, G. Zhang, K. Yang, N. Shi, W. Zhou, S. Hao, G. Xiong, Y. Li, M. Y. Sim, X. Chen et al., "Interactive natural language processing," arXiv preprint arXiv:2305.13246, 2023.

[33] Y. Qin, S. Hu, Y. Lin, W. Chen, N. Ding, G. Cui, Z. Zeng, Y. Huang, C. Xiao, C. Han et al., "Tool learning with foundation models," arXiv preprint arXiv:2304.08354, 2023.

[34] W. Yu, C. Zhu, Z. Li, Z. Hu, Q. Wang, H. Ji, and M. Jiang, "A survey of knowledge-enhanced text generation," ACM Computing Surveys, vol. 54, no. 11s, pp. 1-38, 2022.

[35] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, "Lora: Low-rank adaptation of large language models," arXiv preprint arXiv:2106.09685, 2021.

[36] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly, "Parameter-efficient transfer learning for nlp," in International Conference on Machine Learning. PMLR, 2019, pp. 2790-2799.

[37] X. L. Li and P. Liang, "Prefix-tuning: Optimizing continuous prompts for generation," arXiv preprint arXiv:2101.00190, 2021.

[38] X. Liu, Y. Zheng, Z. Du, M. Ding, Y. Qian, Z. Yang, and J. Tang, "Gpt understands, too," arXiv preprint arXiv:2103.10385, 2021.

[39] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, "React: Synergizing reasoning and acting in language models," arXiv preprint arXiv:2210.03629, 2022.

[40] T. Khot, H. Trivedi, M. Finlayson, Y. Fu, K. Richardson, P. Clark, and A. Sabharwal, "Decomposed prompting: A modular approach for solving complex tasks," arXiv preprint arXiv:2210.02406, 2022.

[41] G. Mialon, R. Dessì, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Rozière, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz et al., "Augmented language models: a survey," arXiv preprint arXiv:2302.07842, 2023.

[42] J. Duan, S. Yu, H. L. Tan, H. Zhu, and C. Tan, "
A survey of embodied ai: From simulators to research tasks," IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 6, no. 2, pp. 230-244, 2022.

[43] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik et al., "Habitat: A platform for embodied ai research," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 9339-9347.

[44] S. Franklin, "Autonomous agents as embodied ai," Cybernetics & Systems, vol. 28, no. 6, pp. 499-520, 1997.

[45] W. Zhang, Y. Guo, L. Niu, P. Li, C. Zhang, Z. Wan, J. Yan, F. U. D. Farrukh, and D. Zhang, "Lp-slam: Language-perceptive rgb-d slam system based on large language model," arXiv preprint arXiv:2303.10089, 2023.

[46] D. Shah, B. Osiński, S. Levine et al., "Lm-nav: Robotic navigation with large pre-trained models of language, vision, and action," in Conference on Robot Learning. PMLR, 2023, pp. 492-504.

[47] A. Brohan, Y. Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian et al., "Do as i can, not as i say: Grounding language in robotic affordances," in Conference on Robot Learning. PMLR, 2023, pp. 287-318.

[48] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar et al., "Inner monologue: Embodied reasoning through planning with language models," arXiv preprint arXiv:2207.05608, 2022.

[49] B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler, "Open-vocabulary queryable scene representations for real world planning," in 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023, pp. 11509-11522.

[50] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu et al., "Palm-e: An embodied multimodal language model," arXiv preprint arXiv:2303.03378, 2023.

[51] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi, "Chatgpt empowered long-step robot control in various environments: A case application," arXiv preprint arXiv:2304.03893, 2023.

[52] K. Rana, J. Haviland, S. Garg, J. Abou-Chakra, I. Reid, and N. Suenderhauf, "Sayplan:
Grounding large language models using 3d scene graphs for scalable task planning," arXiv preprint arXiv:2307.06135, 2023.

[53] C. H. Song, J. Wu, C. Washington, B. M. Sadler, W.-L. Chao, and Y. Su, "Llm-planner: Few-shot grounded planning for embodied agents with large language models," arXiv preprint arXiv:2212.04088, 2022.

[54] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu et al., "Rt-1: Robotics transformer for real-world control at scale," arXiv preprint arXiv:2212.06817, 2022.

[55] A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia, C. Finn et al., "Open-world object manipulation using pre-trained vision-language models," arXiv preprint arXiv:2303.00905, 2023.

[56] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg et al., "A generalist agent," arXiv preprint arXiv:2205.06175, 2022.

[57] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor, "Chatgpt for robotics: Design principles and model abilities," Microsoft Auton. Syst. Robot. Res, vol. 2, p. 20, 2023.

[58] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng, "Code as policies: Language model programs for embodied control," in 2023 IEEE International Conference on Robotics and Automation (ICRA).

[59] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang, "Retrieval augmented language model pre-training," in International Conference on Machine Learning. PMLR, 2020, pp. 3929-3938.

[60] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel et al., "Retrieval-augmented generation for knowledge-intensive nlp tasks," Advances in Neural Information Processing Systems, vol. 33, pp. 9459-9474, 2020.

[61] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driessche, J.-B. Lespiau, B. Damoc, A. Clark et al., "
Improving language models by retrieving from trillions of tokens,â in International conference on machine learning. PMLR, 2022, pp. 2206â 2240. [62] A. Sridhar, R. Lo, F. F. Xu, H. Zhu, and S. Zhou, â Hierarchical prompting assists large language model on web navigation,â arXiv preprint arXiv:2305.14257, 2023. [63] H. Furuta, O. Nachum, K.-H. Lee, Y. Matsuo, S. S. Gu, and I. Gur, â
2308.03427#77
2308.03427#79
2308.03427
[ "2302.13971" ]
2308.03427#79
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
Multimodal web navigation with instruction-finetuned foundation models,â arXiv preprint arXiv:2305.11854, 2023. [64] Y. Qin, Z. Cai, D. Jin, L. Yan, S. Liang, K. Zhu, Y. Lin, X. Han, N. Ding, H. Wang et al., â Webcpm: Interactive web search for chinese long-form question answering,â arXiv preprint arXiv:2305.06849, 2023.
2308.03427#78
2308.03427#80
2308.03427
[ "2302.13971" ]
2308.03427#80
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
20 [65] S. Yao, H. Chen, J. Yang, and K. Narasimhan, â Webshop: Towards scalable real-world web in- teraction with grounded language agents,â Advances in Neural Information Processing Systems, vol. 35, pp. 20 744â 20 757, 2022. [66] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders et al., â
Webgpt: Browser-assisted question-answering with human feedback," arXiv preprint arXiv:2112.09332, 2021. [67] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning, "Hotpotqa: A dataset for diverse, explainable multi-hop question answering," arXiv preprint arXiv:1809.09600, 2018. [68] B. Wang, G. Li, and Y. Li, "Enabling conversational interaction with mobile UI using large language models," in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023, pp. 1–
17. [69] D. Zhang, L. Chen, and K. Yu, "Mobile-env: A universal platform for training and evaluation of mobile interaction," arXiv preprint arXiv:2305.08144, 2023. [70] H. Li, J. Su, Y. Chen, Q. Li, and Z. Zhang, "Sheetcopilot: Bringing software productivity to the next level through large language models," arXiv preprint arXiv:2305.19308, 2023. [71] L. Zha, J. Zhou, L. Li, R. Wang, Q. Huang, S. Yang, J. Yuan, C. Su, X. Li, A. Su et al., "
Tablegpt: Towards unifying tables, nature language and commands into one gpt," arXiv preprint arXiv:2307.08674, 2023. [72] Z. Chen, K. Zhou, B. Zhang, Z. Gong, W. X. Zhao, and J.-R. Wen, "Chatcot: Tool-augmented chain-of-thought reasoning on chat-based large language models," arXiv preprint arXiv:2305.14323, 2023. [73] A. Parisi, Y. Zhao, and N. Fiedel, "Talm: Tool augmented language models," arXiv preprint arXiv:2205.12255, 2022. [74] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano et al., "
Training verifiers to solve math word problems," arXiv preprint arXiv:2110.14168, 2021. [75] Z. Yang, L. Li, J. Wang, K. Lin, E. Azarnasab, F. Ahmed, Z. Liu, C. Liu, M. Zeng, and L. Wang, "Mm-react: Prompting chatgpt for multimodal reasoning and action," arXiv preprint arXiv:2303.11381, 2023. [76] Z. Liu, Y. He, W. Wang, W. Wang, Y. Wang, S. Chen, Q. Zhang, Y. Yang, Q. Li, J. Yu et al., "
Internchat: Solving vision-centric tasks by interacting with chatbots beyond language," arXiv preprint arXiv:2305.05662, 2023. [77] Y. Ge, W. Hua, J. Ji, J. Tan, S. Xu, and Y. Zhang, "Openagi: When llm meets domain experts," arXiv preprint arXiv:2304.04370, 2023. [78] Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang, "
Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface," arXiv preprint arXiv:2303.17580, 2023. [79] D. Surís, S. Menon, and C. Vondrick, "Vipergpt: Visual inference via python execution for reasoning," arXiv preprint arXiv:2303.08128, 2023. [80] T. Gupta and A. Kembhavi, "
Visual programming: Compositional visual reasoning without training," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 14953–14962. [81] L. Chen, B. Li, S. Shen, J. Yang, C. Li, K. Keutzer, T. Darrell, and Z. Liu, "Language models are visual reasoning coordinators," in ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023.
[82] P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao, "Chameleon: Plug-and-play compositional reasoning with large language models," arXiv preprint arXiv:2304.09842, 2023. [83] Z. Gou, Z. Shao, Y. Gong, Y. Shen, Y. Yang, N. Duan, and W. Chen, "Critic: Large language models can self-correct with tool-interactive critiquing,"
arXiv preprint arXiv:2305.11738, 2023. [84] Y. Liang, C. Wu, T. Song, W. Wu, Y. Xia, Y. Liu, Y. Ou, S. Lu, L. Ji, S. Mao et al., "Taskmatrix.ai: Completing tasks by connecting foundation models with millions of apis," arXiv preprint arXiv:2303.16434, 2023.
[85] S. Hao, T. Liu, Z. Wang, and Z. Hu, "Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings," arXiv preprint arXiv:2305.11554, 2023. [86] B. Paranjape, S. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, and M. T. Ribeiro, "Art:
Automatic multi-step reasoning and tool-use for large language models," arXiv preprint arXiv:2303.09014, 2023. [87] G. Kim, P. Baldi, and S. McAleer, "Language models can solve computer tasks," arXiv preprint arXiv:2303.17491, 2023. [88] T. Cai, X. Wang, T. Ma, X. Chen, and D. Zhou, "Large language models as tool makers,"
arXiv preprint arXiv:2305.17126, 2023. [89] R. H. Lewis and J. Jiao, "Computegpt: A computational chat model for numerical problems," arXiv preprint arXiv:2305.06223, 2023. [90] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G.
Neubig, "Pal: Program-aided language models," in International Conference on Machine Learning. PMLR, 2023, pp. 10764–10799. [91] G. Wang, Y. Xie, Y. Jiang, A. Mandlekar, C. Xiao, Y. Zhu, L. Fan, and A. Anandkumar, "Voyager: An open-ended embodied agent with large language models," arXiv preprint arXiv:2305.16291, 2023. [92] C. Qian, C. Han, Y. R. Fung, Y. Qin, Z. Liu, and H. Ji, "Creator:
Disentangling abstract and concrete reasonings of large language models through tool creation," arXiv preprint arXiv:2305.14318, 2023. [93] Y. Cai, S. Mao, W. Wu, Z. Wang, Y. Liang, T. Ge, C. Wu, W. You, T. Song, Y. Xia et al., "Low-code llm: Visual programming over llms,"
arXiv preprint arXiv:2304.08103, 2023. [94] S. Arora, B. Yang, S. Eyuboglu, A. Narayan, A. Hojel, I. Trummer, and C. Ré, "Language models enable simple systems for generating structured views of heterogeneous data lakes," arXiv preprint arXiv:2304.09433, 2023. [95] W. Zhang, Y. Shen, W. Lu, and Y. Zhuang, "
Data-copilot: Bridging billions of data and humans with autonomous workflow," arXiv preprint arXiv:2306.07209, 2023.

# A Detailed Dataset Description

Simple SQL queries: These queries typically involve basic operations such as SELECT, FROM, WHERE, GROUP BY, etc. They are used to retrieve, filter, group, and sort data from a single table. We give the schemas of two tables in the SQL database in Tables 12 and 13 and list several examples in Table 14.

Table 12: Schema of the Person table

| Column Name | Type |
| --- | --- |
| id | TEXT |
| name | TEXT |
| age | INTEGER |
| sex | TEXT |
| school | TEXT |
| phone | TEXT |
| qualifications | TEXT |
| ability | TEXT |

Table 13: Schema of the School table

| Column Name | Type |
| --- | --- |
| id | TEXT |
| name | TEXT |
| info_985 | TEXT |
| info_211 | TEXT |

Table 14: Demonstrations of simple SQL queries.

| Table ID | Question | Answer | SQL reference |
| --- | --- | --- | --- |
| Person | Average ages | 35.16 | select avg(age) from Person |
| Person | How many men | 12 | select count(*) from Person where sex = 'male' |
| School | How many schools are both '985' and '211' institutions? | 11 | select count(*) from School where info_985 = 'yes' and info_211 = 'yes'; |

Complex nested SQL queries: These queries contain subqueries, which are SQL queries nested inside a larger query. Nested queries can be used in various clauses such as SELECT, FROM, WHERE, and HAVING. They provide a way to perform multiple operations or calculations across multiple tables. We give the schemas of four tables in the SQL database in Tables 15, 16, 17, and 18 and list several examples in Table 19.

Table 15: Schema of the GoldenMelodyAwards table

| Column Name | Type |
| --- | --- |
| Nominated_Count | INTEGER |
| Competing_Count | INTEGER |
| Awards_Count | INTEGER |
| Award_Name | TEXT |
| Host | TEXT |
| Year | TIME |

Table 16: Schema of the AwardNominees table

| Column Name | Type |
| --- | --- |
| Singer_ID | INTEGER |
| Nominated_Work | INTEGER |
| Award_Name | TEXT |
| Award_Edition_ID | TEXT |
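To make the simple-query examples of Table 14 concrete, the Person schema from Table 12 can be loaded into an in-memory SQLite database and the first two reference queries run against it. This is only an illustrative sketch: the rows inserted below are invented, so the results differ from the answers in Table 14.

```python
import sqlite3

# In-memory database following the Person schema from Table 12.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Person (id TEXT, name TEXT, age INTEGER, sex TEXT, "
    "school TEXT, phone TEXT, qualifications TEXT, ability TEXT)"
)
# Invented rows, purely for illustration.
conn.executemany(
    "INSERT INTO Person (id, name, age, sex) VALUES (?, ?, ?, ?)",
    [("1", "Alice", 30, "female"), ("2", "Bob", 35, "male"), ("3", "Carol", 40, "female")],
)

# The first two reference queries from Table 14.
avg_age = conn.execute("select avg(age) from Person").fetchone()[0]
num_men = conn.execute("select count(*) from Person where sex = 'male'").fetchone()[0]
print(avg_age, num_men)  # 35.0 1
```

The same pattern (create schema, insert rows, run the reference SQL) applies to the School table of Table 13.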
Table 17: Schema of the Singers table

| Column Name | Type |
| --- | --- |
| Name | TEXT |
| Song_Count | INTEGER |
| Album_Count | INTEGER |
| Fan_Count | INTEGER |
| Gender | TEXT |
| Singer_ID | INTEGER |

Table 18: Schema of the RecordCompanies table

| Column Name | Type |
| --- | --- |
| Record_Company | TEXT |
| Signing_Date | TIME |
| Singer_ID | INTEGER |

Table 19: Demonstrations of complex nested SQL queries.

| Question | Answer | SQL reference |
| --- | --- | --- |
| Golden Melody hosts, excluding the two with the least awards. | "26th Golden Melody", "27th Golden Melody" | select Award_Name from GoldenMelodyAwards where Host not in (select Host from GoldenMelodyAwards group by Host order by avg(Awards_Count) asc limit 2) |
| Names of singers never nominated for Golden Melody Awards. | "Jay Chou", "Jian Cui" | select Name from Singers where Singer_ID not in (select Singer_ID from AwardNominees) |
| Name and gender of singers without a record company. | "Penny Tai:Female" | select Name, Gender from Singers where Singer_ID not in (select Singer_ID from RecordCompanies); |
| How many times is the 27th Golden Melody count of the 28th's? | 1 | select a.Awards_Count / b.Awards_Count from (select Awards_Count from GoldenMelodyAwards where Award_Name == '27th Golden Melody') a, (select Awards_Count from GoldenMelodyAwards where Award_Name == '28th Golden Melody') b |

Complex nested queries utilizing multiple tools: These are advanced queries that involve multiple tools, such as SQL queries, Python code generation, user-defined functions, etc. We give the schemas of two tables in the SQL database in Tables 20 and 21 and list several examples in Table 22. For verifying the planning ability of the LLM-based AI agents, we select this type of query.

Table 20: Schema of the Journal table

| Column Name | Type |
| --- | --- |
| Name | TEXT |
| First_Issue_Date | TIME |
| Journal_ID | INTEGER |
| Category | TEXT |
| Sponsor_Organization | TEXT |
| Country | TEXT |
| Language | TEXT |
| Publication_Count | INTEGER |

Table 21: Schema of the CoverPersonality table

| Column Name | Type |
| --- | --- |
| Person_ID | INTEGER |
| Journal_ID | INTEGER |
| Count | INTEGER |
Table 22: Demonstrations of complex nested queries utilizing multiple tools. (The body of this landscape-oriented table was garbled during text extraction; its columns are Question, Answer, Planning Tools, SQL reference, and Code reference, with tool sequences such as ["Python REPL", "SQL Generator"].)
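A multi-tool query of this kind is resolved by running the planned tools in order. The sketch below chains a Python REPL step with a SQL step over the CoverPersonality table of Table 21; both tool implementations are simplified stand-ins (in TPTU the SQL and code are produced by the LLM rather than hard-coded), and the rows are invented.

```python
import math
import sqlite3

def python_repl(expression: str) -> float:
    # Stand-in Python REPL tool: evaluate a small arithmetic expression.
    return eval(expression, {"math": math})

def sql_generator(query: str, conn: sqlite3.Connection):
    # Stand-in SQL Generator tool: execute an already-written SQLite query.
    return conn.execute(query).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CoverPersonality (Person_ID INTEGER, Journal_ID INTEGER, Count INTEGER)")
conn.executemany("INSERT INTO CoverPersonality VALUES (?, ?, ?)",
                 [(1, 1, 3), (2, 1, 7), (3, 2, 4)])

# A planned tool sequence in the style of Table 22.
plan = [("Python REPL", "math.exp(3)"),
        ("SQL Generator", "select max(Count) from CoverPersonality")]
results = []
for tool, query in plan:
    if tool == "Python REPL":
        results.append(python_repl(query))
    else:
        results.append(sql_generator(query, conn))
print(results)
```

A real agent would additionally let the LLM decide the plan and generate each query, feeding intermediate results into later sub-tasks.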
# B Prompts Design

Figure 8: The evaluation prompt for tool order planning.

You are a strategy model. Given a problem and a set of tools, you need to generate a sequence of tools to determine the solution to the problem.
Each tool in the toolset is defined as follows:
SQL Generator: Given an input problem and a database, it creates a syntactically correct SQLite query statement.
Python Generator: Given an input problem and some information, it generates a syntactically correct Python code snippet.
Please use the following format:
Question: This is the original question.
Error: This is the previously generated error output.
Tool: These are the tools to be selected and the order in which they are called. Please note to generate a Tool different from the Error.
Result: The final result output by the tool.
Here are some examples of mapping problems to tools:
Question: What is the square of the number of albums by Jolin Tsai?
Error:
Tool: ["SQL Generator", "Python Generator"]
Result: 100
Question: First, calculate the square of 40, denoted as A, and then find the names of all the singers whose total number of fans is less than A.
Error:
Tool: ["Python Generator", "SQL Generator"]
Result: ['Jolin Tsai']
Let's get started:
Question: {question}
Error: {error}
Tool:

Figure 9: The evaluation prompt for tool order and subtask description planning.

You are a strategy model. Given a problem and a set of tools, you need to generate a sequence of tools to determine the solution to the problem.
Each tool in the toolset is defined as follows:
SQL Generator: Given an input problem and a database, it creates a syntactically correct SQLite query statement.
Python Generator: Given an input problem and some information, it generates a syntactically correct Python code snippet.
Please use the following format:
Question: This is the original question.
Error: This is the previously generated error output.
Tool: These are the tools to be selected and the order in which they are called. Please note to generate a Tool different from the Error.
Query: This is the sub-problem derived from the original question that needs to be input when calling the tool. Please note to generate a Query different from the Error.
Result: The final result output by the tool.
Here are some examples of mapping problems to tools:
Question:
What is the square of the number of albums by Jolin Tsai?
Error:
Tool: ["SQL Generator", "Python Generator"]
Query: ["What is the number of albums by Jolin Tsai?", "What is the square of the number of albums by Jolin Tsai?"]
Result: 100
Question: First, calculate the square of 40, denoted as A, and then find the names of all the singers whose total number of fans is less than A.
Error:
Tool: ["Python Generator", "SQL Generator"]
Query: ["A is the square of 40, what is the value of A?", "What are the names of all the singers whose total number of fans is less than A?"]
Result: ['Jolin Tsai']
Let's get started:
Question: {question}
Error: {error}
Tool: {tools}
Query:

Figure 10: The evaluation prompt for one-step tool-subtask pair planning.

You are a strategy model. Given a problem and a set of tools, you need to generate a sequence of tools to determine the solution to the problem.
Each tool in the toolset is defined as follows:
SQL Generator: Given an input problem and a database, it creates a syntactically correct SQLite query statement.
Python Generator: Given an input problem and some information, it generates a syntactically correct Python code snippet.
Please use the following format:
Question: This is the original question
Error: This is the previously generated error output
Tasks: This is a list in Python. Each item in the list is a dictionary. The key of the dictionary represents the selected Tool, and the value is the Query when calling the tool. Please note to generate a Tool and Query different from the Error.
Answer: The final answer
Here are some examples of mapping problems to tools:
Question:
What is the square of the number of albums by Jolin Tsai?
Error:
Tasks: [{{"SQL Generator": "What is the number of albums by Jolin Tsai?"}}, {{"Python Generator": "What is the square of the number of albums by Jolin Tsai?"}}]
Answer: The square of the number of albums by Jolin Tsai is 100
Question: First, calculate the square of 40, denoted as A, and then find the names of all the singers whose total number of fans is less than A.