Dataset fields: id (string, 12–15 chars); title (string, 8–162 chars); content (string, 1–17.6k chars); prechunk_id (string, 0–15 chars); postchunk_id (string, 0–15 chars); arxiv_id (string, 10 chars); references (list, length 1).
2308.00245#37
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Cost. On average, analyzing each potential bug costs about 7,000 GPT-4 tokens.

6.2 RQ1: Precision

LLift reports 26 positives on the Random-1000 dataset, half of which are true bugs according to our manual inspection, for a precision of 50%. Although, consistent with UBITect, we focus on Linux v4.14, 12 of the bugs still exist in the latest Linux kernel. We are in the process of reporting these 12 bugs to the Linux community; so far, we have submitted patches for 4 of them and received confirmation that they are true bugs.
2308.00245#36
2308.00245#38
2308.00245
[ "2305.10601" ]
2308.00245#38
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Table 3: True bugs identified by LLift from Random-1000, analyzed in Linux v4.14

| Initializer | Caller | File Path | Variable | Line |
|---|---|---|---|---|
| read_reg | get_signal_parameters | drivers/media/dvb-frontends/stv0910.c | tmp | 504 |
| regmap_read | isc_update_profile | drivers/media/platform/atmel/atmel-isc.c | sr | 664 |
| ep0_read_setup | ep0_handle_setup | drivers/usb/mtu3/mtu3_gadget_ep0.c | setup.bRequestType | 637 |
| regmap_read | mdio_sc_cfg_reg_write | drivers/net/ethernet/hisilicon/hns_mdio.c | reg_value | 169 |
| bcm3510_do_hab_cmd | bcm3510_check_firmware_version | drivers/media/dvb-frontends/bcm3510.c | ver.demod_version | 666 |
| readCapabilityRid | airo_get_range | drivers/net/wireless/cisco/airo.c | cap_rid.softCap | 6936 |
| e1e_rphy | __e1000_resume | drivers/net/ethernet/intel/e1000e/netdev.c | phy_data | 6580 |
| pci_read_config_dword | adm8211_probe | drivers/net/wireless/admtek/adm8211.c | reg | 1814 |
| lan78xx_read_reg | lan78xx_write_raw_otp | drivers/net/usb/lan78xx.c | buf | 873 |
| t1_tpi_read | my3126_phy_reset | drivers/net/ethernet/chelsio/cxgb/my3126.c | val | 193 |
| pci_read_config_dword | quirk_intel_purley_xeon_ras_cap | arch/x86/kernel/quirks.c | capid0 | 562 |
| ata_timing_compute | opti82c46x_set_piomode | drivers/ata/pata_legacy.c | &tp | 564 |
| pt_completion | pt_req_sense | drivers/block/paride/pt.c | buf | 368 |

Imprecise and Failed Cases. Despite the effectiveness of LLift, there are instances where it does not yield precise results: 13 false positives arise from mistakenly classifying must_init cases as may_init. Upon careful examination of these cases, we attribute the imprecision to a variety of factors, which we discuss in detail in §6.7. Briefly, we give a breakdown of them here:
2308.00245#37
2308.00245#39
2308.00245
[ "2305.10601" ]
2308.00245#39
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Incomplete constraint extraction (4 cases), information gaps in UBITect (5 cases), variable reuse (1 case), indirect call (1 case), and additional constraints (1 case). Additionally, there is one false positive caused by inconsistent output (i.e., two false positives in three runs). Four cases exceed the maximum context length while exploring deeper functions in the progressive prompt.

Takeaway 1. LLift can effectively summarize initializer behavior and discover new bugs with high precision (50%).

6.3 RQ2: Recall Estimate

Conceptually, the core optimization of LLift (post-constraint guided path analysis) is sound, and we also prompt a series of rules that make the LLM tend to respond "may_init" when uncertain. We therefore expect LLift not to reject true bugs, i.e., to have a high recall. We sample 300 negative cases from Random-1000 to see whether we miss any true bugs, and confirm that all of them are true negatives.

Table 4: Performance evaluation of bug detection with progressive addition of design components: Post-Constraint Guided Path Analysis (PCA), Progressive Prompt (PP), Self-Validation (SV), and Task Decomposition (TD). (C) indicates the number of Consistent cases.

| Combination | TN(C) | TP(C) | Precision | Recall | Accuracy | F1 Score |
|---|---|---|---|---|---|---|
| Simple Prompt | 12(9) | 2(1) | 0.12 | 0.15 | 0.35 | 0.13 |
| PCA | 13(9) | 5(1) | 0.26 | 0.38 | 0.45 | 0.31 |
| PCA+PP | 5(3) | 6(1) | 0.21 | 0.46 | 0.28 | 0.29 |
| PCA+PP+SV | 5(2) | 11(8) | 0.33 | 0.85 | 0.40 | 0.48 |
| PCA+PP+TD | 22(14) | 6(4) | 0.55 | 0.46 | 0.70 | 0.50 |
| PCA+PP+SV+TD | 25(17) | 13(12) | 0.87 | 1.00 | 0.95 | 0.93 |
| Oracle | 27(27) | 13(13) | - | - | - | - |
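The aggregate metrics in Table 4 follow mechanically from the TP/TN counts, given that Cmp-40 contains 13 true bugs and 27 non-bugs (the Oracle row). The sketch below recomputes them; only the dataset composition and counts come from the table, everything else is the standard metric definitions.

```python
# Recompute Table 4's metrics from its TP/TN counts.
# Cmp-40: 13 true bugs (positives) and 27 non-bugs (negatives).
POS, NEG = 13, 27

rows = {
    "Simple Prompt": (12, 2),
    "PCA":           (13, 5),
    "PCA+PP":        (5, 6),
    "PCA+PP+SV":     (5, 11),
    "PCA+PP+TD":     (22, 6),
    "PCA+PP+SV+TD":  (25, 13),
}

for name, (tn, tp) in rows.items():
    fp, fn = NEG - tn, POS - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (POS + NEG)
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name:14s} P={precision:.2f} R={recall:.2f} "
          f"Acc={accuracy:.2f} F1={f1:.2f}")
```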
2308.00245#38
2308.00245#40
2308.00245
[ "2305.10601" ]
2308.00245#40
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Despite the limited data sampled, the result on these 300 negative cases indicates that integrating GPT-4 into our implementation does not introduce apparent unsoundness. Further, we test LLift on the Bug-50 dataset to see whether it misses any bugs discovered by UBITect. LLift identifies all real bugs from Bug-50. This result, while encouraging, does not imply that LLift is flawless. Detailed data analysis reveals that: 1) there remain occasional inconsistencies in 3–5 cases, though they are mitigated by majority voting; and 2) all the bugs found by UBITect have trivial post-constraints and straightforward may_init postconditions, so LLift can identify them easily. It is noteworthy that these cases are already detectable by UBITect; such cases tend to be simpler in nature and can be verified by symbolic execution in UBITect.

Takeaway 2. LLift has proven effective in identifying UBI bugs, consistently detecting all known instances.

6.4 RQ3: Contributions of Design Strategies

To delineate the contributions of the distinct design strategies to the final results, we evaluate varying configurations of our solution on the Cmp-40 dataset, each entailing a unique combination of our proposed strategies. As illustrated in Table 4, the strategies under consideration are Post-Constraint Guided Path Analysis (PCA), Progressive Prompt (PP), Self-Validation (SV), and Task Decomposition (TD). The findings show an overall trend of enhanced performance as additional design strategies are integrated.

In this study, the baseline corresponds to a straightforward prompt, "check this code to determine if there are any UBI bugs", a strategy that has been found to be rather insufficient for discovering new vulnerabilities, as corroborated by past studies [17, 21, 31], reflecting a modest recall of 0.15 and a precision of 0.12.

Incorporating PCA offers a notable enhancement, enabling the LLM to uncover a wider array of vulnerabilities. As shown in Table 4, recall improves substantially over the baseline, an anticipated outcome considering PCA's pivotal role in our solution. However, relying solely on this strategy still leaves significant room for optimization.
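The majority voting mentioned above (used to smooth out occasional run-to-run inconsistencies) is straightforward to script. The sketch below is a minimal illustration, assuming a hypothetical query_llm() helper that returns "must_init" or "may_init" for one non-deterministic run; it is not LLift's actual implementation.

```python
from collections import Counter

def majority_vote(query_llm, case, runs=3):
    """Run the same analysis several times and keep the majority verdict.

    `query_llm(case)` is a hypothetical helper returning "must_init" or
    "may_init" for a single LLM run on `case`.
    """
    verdicts = [query_llm(case) for _ in range(runs)]
    verdict, count = Counter(verdicts).most_common(1)[0]
    consistent = count == runs   # True if every run agreed
    return verdict, consistent
```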
2308.00245#39
2308.00245#41
2308.00245
[ "2305.10601" ]
2308.00245#41
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
The influence of Progressive Prompt (PP) on the results is quite intriguing. While its impact appears to lower precision initially, introducing task decomposition and self-validation in conjunction with PP reveals a substantial boost in performance.

Table 5: Comparison of different LLMs on real bugs, from a subset of Bug-50. Columns: GPT-4, GPT-3.5, Claude 2, Bard; rows (callers): hpet_msi_resume, ctrl_cx2341x_getv4lflags, axi_clkgen_recalc_rate, max8907_regulator_probe, ov5693_detect, iommu_unmap_page, mt9m114_detect, ec_read_u8, compress_sliced_buf. [The per-cell pass/fail marks did not survive extraction; the aggregate results are discussed in §6.5.]
2308.00245#40
2308.00245#42
2308.00245
[ "2305.10601" ]
2308.00245#42
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Without PP, the LLM is restricted to deducing function behavior merely from the semantics of the function's context, without further code analysis. Even though this approach can be effective in a range of situations, it confines the reasoning to the information available in the model's training data. By checking the detailed conversations, we notice that omitting TD or SV tends to make the LLM neglect the post-constraints, subsequently leading to errors.

Beyond influencing precision and recall, Task Decomposition (TD) and Self-Validation (SV) also play a crucial role in enhancing consistency. In this context, a result is deemed consistent if the LLM yields the same outcome across its initial two runs. Comparing our final design, which encompasses all components, against the designs lacking TD and SV respectively reveals that both TD and SV notably increase the number of consistent results, delivering 17 and 23 consistent results among the negative and positive cases, respectively, underscoring their importance in ensuring reliable outcomes. Finally, TD also helps conserve tokens: in our evaluation, we identified two instances within the PCA+PP and PCA+PP+SV configurations where the token count surpassed GPT-4's limit, whereas this limit was never exceeded when TD was incorporated.

Takeaway 3. All of LLift's design strategies contributed to the positive results.

6.5 RQ4: Alternative Models

Table 5 provides a comprehensive view of the performance of LLift when implemented across an array of LLMs, including GPT-4, GPT-3.5, Claude 2 [2], and Bard [12]. GPT-4 passes all tests, while GPT-3.5, Claude 2, and Bard exhibit recall rates of 89%, 67%, and 67%, respectively. Despite the unparalleled performance of GPT-4, the other LLMs still produce substantial and competitive results, indicating the wide applicability of our approach.

It is imperative to note that not all design strategies in our toolbox are universally applicable across all language models. Bard and GPT-3.5, in particular, exhibit limited adaptability towards the progressive prompt and task decomposition strategies.
2308.00245#41
2308.00245#43
2308.00245
[ "2305.10601" ]
2308.00245#43
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Bard's interaction patterns suggest a preference for immediate response generation, leveraging its internal knowledge base rather than requesting additional function definitions, thereby hindering the effectiveness of the progressive prompt approach. Similarly, when task decomposition is implemented, these models often misinterpret or inaccurately collect post-constraints, subsequently compromising the results.
2308.00245#42
2308.00245#44
2308.00245
[ "2305.10601" ]
2308.00245#44
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
To harness their maximum potential, we apply only the PCA design (i.e., without the other design strategies) for GPT-3.5 and Bard. In contrast to the GPT series, Bard and Claude 2 demonstrate less familiarity with the Linux kernel and are more prone to failures due to their unawareness of the may_init possibility of initializers.

```c
static int sgl_map_user_pages(...)
{
        ...
        if ((pages = kmalloc(..., GFP_KERNEL)) == NULL)   /* Line 3: allocation */
                return -ENOMEM;
        ...
        res = get_user_pages_unlocked(..., pages, ...);
        /* Errors and no page mapped should return here */
        if (res < nr_pages)
                goto out_unmap;
        ...
 out_unmap:
        if (res > 0) {
                for (j = 0; j < res; j++)
                        put_page(pages[j]);               /* Line 17: use of pages[j] */
                res = 0;
        }
        kfree(pages);
        ...
}
```

Figure 7: Case Study I (Loop and Index). Derived from drivers/scsi/st.c
2308.00245#43
2308.00245#45
2308.00245
[ "2305.10601" ]
2308.00245#45
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Takeaway 4. GPT-4 remains at the pinnacle of performance for LLift, yet other LLMs can achieve promising results.

6.6 Case Study

In this case study, we pick three interesting cases demonstrating the effectiveness of LLift in analyzing function behaviors and detecting uninitialized variables. All of these cases are undecided for the previous static analyzer, UBITect. We put the complete conversations on an anonymous online page for reference (https://sites.google.com/view/llift-open/case-studies).

Loop and Index. Figure 7 presents an intriguing case involving the variable pages[j], which UBITect reports as potentially used without initialization in Line 17. This case is a false positive that is hard to prune due to the loop. Specifically, the initializer function get_user_pages_unlocked(), which is responsible for mapping user-space pages into the kernel space, initializes the pages array allocated in Line 3. If get_user_pages_unlocked() executes successfully, the pointers pages[0] through pages[res-1] will be initialized to point to struct page instances.

To summarize the behavior, i.e., the must_init facts under conditions where the use is reachable, we must first extract the post-constraints that lead to the use of pages. Through interacting with ChatGPT, LLift successfully extracts them:
2308.00245#44
2308.00245#46
2308.00245
[ "2305.10601" ]
2308.00245#46
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
```json
{
  "initializer": "res = get_user_pages_unlocked(uaddr, nr_pages, pages, rw == READ ? FOLL_WRITE : 0)",
  "suspicious": ["pages[j]"],
  "postconstraint": "res < nr_pages && res > 0 && j < res"
}
```

After feeding the post-constraints to the LLM, LLift then successfully obtains the result:

```json
{
  "ret": "success",
  "response": {
    "must_init": ["pages[j]"],
    "may_init": []
  }
}
```

As we can see, GPT-4 exhibits impressive comprehension of this complex function. It perceives that the variable pages[j] is used in a loop that iterates from 0 to res-1. This insight leads GPT-4 to correctly deduce that all elements of the pages array must be initialized, i.e., they are must_init. This example underscores GPT-4's proficiency in handling loops and even index sensitivity.
2308.00245#45
2308.00245#47
2308.00245
[ "2305.10601" ]
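LLift's downstream logic only needs the structured fields shown in the case study above (ret, must_init, may_init). A minimal sketch of parsing and sanity-checking such a response is given below; the schema mirrors the JSON examples in this section, while the helper itself is illustrative rather than LLift's actual code.

```python
import json

EXPECTED_VERDICT_KEYS = {"must_init", "may_init"}

def parse_llm_response(raw: str) -> dict:
    """Parse and sanity-check an LLift-style structured response.

    Expected shape, following the examples in this section:
    {"ret": "success", "response": {"must_init": [...], "may_init": [...]}}.
    Illustrative sketch, not LLift's actual implementation.
    """
    data = json.loads(raw)                        # raises on malformed JSON
    if data.get("ret") != "success":
        raise ValueError(f"LLM did not succeed: {data.get('ret')!r}")
    response = data.get("response", {})
    missing = EXPECTED_VERDICT_KEYS - response.keys()
    if missing:
        raise ValueError(f"response is missing fields: {sorted(missing)}")
    return response

# Example with the response shown above:
verdict = parse_llm_response(
    '{"ret": "success", "response": {"must_init": ["pages[j]"], "may_init": []}}'
)
assert "pages[j]" in verdict["must_init"]
```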
2308.00245#47
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Concurrency and Callback. Consider the case illustrated in Figure 8. At first glance, UBITect flags Line 10 for potentially using the variable comp_pkt.completion_status before initialization. The function's body seemingly lacks any code that initializes it, leading UBITect to report it as a potential bug. However, the mystery unravels when we examine hv_pci_generic_compl(), the actual initializer function assigned to pkt in Line 4. The variable in question is indeed initialized, but intriguingly, its initializer runs in a concurrent function rather than within the same thread.

```c
static int hv_pci_enter_d0(struct hv_device *hdev)
{
        ...
        init_completion(&comp_pkt.host_event);
        pkt->completion_func = hv_pci_generic_compl;
        pkt->compl_ctxt = &comp_pkt;
        ...
        wait_for_completion(&comp_pkt.host_event);
        ...
}

static void hv_pci_generic_compl(void *context, ...)
{
        struct hv_pci_compl *comp_pkt = context;
        ...
        if (resp_packet_size >= offsetofend(...))
                comp_pkt->completion_status = resp->status;
        else
                comp_pkt->completion_status = -1;
        complete(&comp_pkt->host_event);
}
```

Figure 8: Case Study II (Concurrency and Indirect Call). Derived from drivers/pci/host/pci-hyperv.c

Here wait_for_completion() is a synchronization primitive that pauses the current thread and waits for the new thread (i.e., hv_pci_generic_compl()) to complete. Despite this complexity, GPT-4 adeptly navigates the concurrency and callback handling, pinpointing the accurate initializer and outputting a precise result. It is worth noting that we do not encode any knowledge about Linux kernel synchronization primitives: LLift prompts the LLM with "The 'initializer' must be the 'actual' function that initializes the variable," and the LLM then automatically identifies hv_pci_generic_compl() as the initializer of comp_pkt.completion_status.

Unfamiliar Function. As previously delineated in §2.3, LLMs possess the inherent ability to recognize the semantics (e.g., postconditions) of common functions like sscanf(). However, some argue that "the LLM simply learns everything from the internet and acts merely as a search engine" [6]. This viewpoint is challenged by the case illustrated in Figure 9.
2308.00245#46
2308.00245#48
2308.00245
[ "2305.10601" ]
2308.00245#48
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
```c
int p9_check_zc_errors(...)
{
        ...
        err = p9pdu_readf(req->rc, c->proto_version, "d", &ecode);
        err = -ecode;                    /* use of ecode */
        ...
}

int p9pdu_readf(struct p9_fcall *pdu, int proto_version, const char *fmt, ...)
{
        ...
        ret = p9pdu_vreadf(pdu, proto_version, fmt, ap);
        ...
        return ret;
}

int p9pdu_vreadf(struct p9_fcall *pdu, int proto_version, const char *fmt, va_list ap)
{
        ...
        switch (*fmt) {
        case 'd': {
                int32_t *val = va_arg(ap, int32_t *);
                if (pdu_read(...)) {
                        errcode = -EFAULT;
                        break;
                }
                *val = ...;              /* initialization */
                break;
        }
        ...
        }
        return errcode;
}
```

Figure 9: Case Study III (Unfamiliar Function). Derived from net/9p

The case presents an intriguing real-world bug. The function p9pdu_readf() mirrors sscanf() in structure, yet its return value is not checked, leaving the parameter ecode at risk of being uninitialized, i.e., if pdu_read() returns non-zero in line 19 (thus "break"-ing early).
2308.00245#47
2308.00245#49
2308.00245
[ "2305.10601" ]
2308.00245#49
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Notably, unlike sscanf(), for which GPT-4 can provide a precise summary without asking for its definition, GPT-4 does request the function definition of p9pdu_readf(), as it is not as ubiquitous as sscanf(). Furthermore, our solution not only produces the correct outcome for this particular case but also pinpoints that ecode could be initialized when p9pdu_readf() returns 0, demonstrating the efficacy of LLift on unfamiliar cases. The result is as follows:
2308.00245#48
2308.00245#50
2308.00245
[ "2305.10601" ]
2308.00245#50
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
```json
{
  "initializer": "err = p9pdu_readf(req->rc, c->proto_version, 'd', &ecode)",
  "suspicious": ["ecode"],
  "postconstraint": null,
  "response": {
    "must_init": [],
    "may_init": [
      { "name": "ecode", "condition": "p9pdu_readf returns 0" }
    ]
  }
}
```

6.7 Reason for Imprecision

Although LLift achieves a precision of 50% in real-world applications, the precision can still be improved in the future. Some of the issues can be solved with better prompts or tighter integration with static analysis.

Challenges in Constraint Extraction. Beyond the four primary code patterns we address in §4.3, there exist additional forms of post-constraints. For instance, during error handling, the checks for failures may involve another function or macro. This problem can be addressed either by providing more examples in prompts (in-context learning) or by lightweight program analysis (e.g., path exploration in symbolic execution to collect the post-constraints).

Information Gaps in UBITect. For instance, UBITect does not provide explicit field names within a structure when a specific
2308.00245#49
2308.00245#51
2308.00245
[ "2305.10601" ]
2308.00245#51
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
field is in use. This information gap can cause LLift to lose precision in its analysis. Additionally, UBITect only reports the variable that is used, which is not necessarily the same variable passed to an initializer. For example, consider an uninitialized variable a passed to an initializer and then assigned to a variable b, which is subsequently used. In such a scenario, LLift may fail to correctly identify the initializer due to this incomplete information. These challenges, primarily due to the interface design of UBITect, can be addressed with focused engineering effort to enrich the output information from UBITect.

Variable Reuse. Variable reuse is an interesting problem for LLMs. In general, an LLM often confuses different variables in different scopes (e.g., different function calls). For example, if the suspicious variable is ret and it is passed as an argument to its initializer (say, func(&ret)), and func also defines a stack variable called ret, the LLM will confuse the two. Explicitly prompting and teaching the LLM to note the difference does not appear to work. One solution is to leverage a simple static analysis to normalize the source code so that each variable has a unique name.

Indirect Call. As mentioned in §4.4, LLift follows a simple but imprecise strategy to handle indirect calls. Theoretically, existing static analysis tools, such as MLTA [16], can give possible targets for indirect calls. However, each indirect call may have multiple possible targets, which dramatically increases token usage. We leave the exploration of such an exhaustive strategy for future work; LLift may benefit from more precise indirect call resolution.

Additional Constraints. Many variables have values that are determined outside the function we analyze, e.g., preconditions capturing constraints from the outer caller. Since our analysis is fundamentally under-constrained, this can lead LLift to incorrectly determine a must_init case to be may_init. Mitigating this imprecision relies on further analysis to provide more information.

7 DISCUSSION AND FUTURE WORK

Post-Constraint Analysis. Our approach prioritizes post-constraints over other constraints, such as preconditions. By focusing on the post-constraints, we enhance precision and scalability significantly.
2308.00245#50
2308.00245#52
2308.00245
[ "2305.10601" ]
2308.00245#52
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Importantly, our utilization of large language models in program analysis suggests strong abilities in summarizing complex function behaviors involving loops, a classic hurdle in program analysis.

Better Integration with Static Analysis. Our work presents opportunities for greater integration and synergy with static analysis methods. Currently, our proposed solution operates largely independently of the static analysis methods, taking only inputs from static analysis initially. Looking into the future, we can consider integrating static analysis and LLMs in a holistic workflow. For example, this could involve selectively utilizing the LLM as an assistant to overcome certain hurdles encountered by static analysis, e.g., difficulty in scaling up the analysis or summarizing loop invariants. In turn, further static analysis based on these findings can provide insights to refine the queries to the LLM. This iterative process could enable a more thorough and accurate analysis of complex cases. We believe such an integrated approach is a very promising future direction.
2308.00245#51
2308.00245#53
2308.00245
[ "2305.10601" ]
2308.00245#53
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Deploying on Open-Source LLMs. The reproducibility of LLift could potentially be challenged by its dependency on GPT-4, a closed-source API subject to frequent updates. At the time of writing, Meta introduced Llama 2, an open-source language model with capabilities rivaling GPT-3.5. Our initial assessments suggest that Llama 2 can understand our instructions and appears well-suited to support LLift. The open-source nature of Llama 2 gives us opportunities to deploy and refine the model further; we plan to leverage these prospects in future studies.

8 RELATED WORK

Techniques of Utilizing LLMs. Wang et al. [33] propose an embodied lifelong learning agent based on LLMs. Pallagani et al. [23] explore the capabilities of LLMs for automated planning. Weng [35] summarizes recent work on building autonomous agents based on LLMs and identifies two important components for planning: task decomposition and self-reflection, which are similar to the design of LLift. Beyond dividing tasks into small pieces, task decomposition techniques also include universal strategies such as Chain-of-Thought [34] and Tree-of-Thoughts [38]. The general strategy of self-reflection has been used in several flavors: ReAct [39], Reflexion [29], and Chain of Hindsight [15]. Despite the similarity in name, self-reflection is fundamentally different from self-validation in LLift: the former focuses on using external sources to provide feedback to the model. Huang et al. [10] let an LLM self-improve its reasoning without supervised data by asking the LLM to lay out different possible results.

LLMs for Program Analysis. Ma et al. [17] and Sun et al. [30] explore the capabilities of LLMs on various program analysis tasks such as control flow graph construction, call graph analysis, and code summarization. They conclude that while LLMs can comprehend basic code syntax, they are somewhat limited in performing more sophisticated analyses such as pointer analysis and code behavior summarization. In contrast to their findings, our research with LLift has yielded encouraging results. We conjecture that this might be due to several reasons: (1) benchmark selection, i.e., Linux kernel vs. others; (2) prompt designs; (3) GPT-3.5 vs.
2308.00245#52
2308.00245#54
2308.00245
[ "2305.10601" ]
2308.00245#54
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
GPT-4.0: prior work evaluated results using only GPT-3.5. Pei et al. [26] use LLMs to reason about loop invariants with decent performance. In contrast, LLift leverages LLMs for a variety of tasks (including program behavior summarization) and integrates them successfully into a static analysis pipeline.

LLMs for Software Engineering. Xia et al. [36] propose an automated conversation-driven program repair tool using ChatGPT, achieving nearly a 50% success rate. Pearce et al. [25] examine zero-shot vulnerability repair using LLMs and find promise in synthetic and hand-crafted scenarios but challenges in real-world examples. Chen et al. [5] teach LLMs to debug their own predicted programs to increase correctness, but only on relatively simple programs. Lemieux et al. [14] leverage an LLM to generate tests for uncovered functions when a search-based approach stalls in coverage. Feng and Chen [7] use an LLM to automatically replay Android bugs. Recently, LangChain proposed LangSmith [13], an LLM-powered platform for debugging, testing, and evaluation. These diverse applications underline the vast potential of LLMs in software engineering. LLift complements these efforts by demonstrating the efficacy of LLMs in real-world bug finding.

9 CONCLUSION

This work presents a novel approach that utilizes LLMs to aid static analysis with a completely automated agent. By carefully considering the scope and designing the interactions with LLMs, our solution has yielded promising results. We believe our effort has only scratched the surface of the vast design space, and we hope our work will inspire future research in this exciting direction.

REFERENCES

[1] Toufique Ahmed, Kunal Suresh Pai, Premkumar Devanbu, and Earl T.
2308.00245#53
2308.00245#55
2308.00245
[ "2305.10601" ]
2308.00245#55
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Barr. Improving Few-Shot Prompts with Relevant Static Analysis Products. 2023. http://arxiv.org/abs/2304.06815 arXiv:2304.06815 [cs].
[2] Anthropic. 2023. Claude 2. https://www.anthropic.com/index/claude-2
[3] Jiuhai Chen, Lichang Chen, Heng Huang, and Tianyi Zhou. 2023. When do you need Chain-of-Thought Prompting for ChatGPT? http://arxiv.org/abs/2304.03262 arXiv:2304.03262 [cs].
[4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).
[5] Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023. Teaching Large Language Models to Self-Debug. http://arxiv.org/abs/2304.05128
[6] Ted Chiang. 2023. ChatGPT Is a Blurry JPEG of the Web. The New Yorker (Feb. 2023). https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
2308.00245#54
2308.00245#56
2308.00245
[ "2305.10601" ]
2308.00245#56
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
[7] Sidong Feng and Chunyang Chen. 2023. Prompting Is All Your Need: Automated Android Bug Replay with Large Language Models. https://doi.org/10.48550/arXiv.2306.01987 arXiv:2306.01987 [cs].
[8] GitHub. 2023. GitHub Copilot documentation. https://ghdocs-prod.azurewebsites.net/_next/data/mHA_XfBBaMPyfcP0Q05C5/en/free-pro-team@latest/copilot.json?versionId=free-pro-team%40latest&productId=copilot
[9] Anjana Gosain and Ganga Sharma. 2015. Static Analysis: A Survey of Techniques and Tools. In Intelligent Computing and Applications (Advances in Intelligent Systems and Computing), Durbadal Mandal, Rajib Kar, Swagatam Das, and Bijaya Ketan Panigrahi (Eds.). Springer India, New Delhi, 581–591.
[10] Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022. Large Language Models Can Self-Improve. http://arxiv.org/abs/2210.11610 arXiv:2210.11610 [cs].
[11] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of Hallucination in Natural Language Generation.
2308.00245#55
2308.00245#57
2308.00245
[ "2305.10601" ]
2308.00245#57
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Comput. Surveys 55, 12 (Dec. 2023), 1–38. https://doi.org/10.1145/3571730
[12] Jack Krawczyk and Amarnag Subramanya. 2023. Bard's latest update: more features, languages and countries. https://blog.google/products/bard/google-bard-new-features-update-july-2023/
[13] LangChain. 2023. Announcing LangSmith, a unified platform for debugging, testing, evaluating, and monitoring your LLM applications. https://blog.langchain.dev/announcing-langsmith/
[14] Caroline Lemieux, Jeevana Priya Inala, Shuvendu K Lahiri, and Siddhartha Sen. 2023. CODAMOSA:
2308.00245#56
2308.00245#58
2308.00245
[ "2305.10601" ]
2308.00245#58
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models. (2023).
[15] Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. 2023. Chain of Hindsight Aligns Language Models with Feedback. http://arxiv.org/abs/2302.02676 arXiv:2302.02676 [cs].
[16] Kangjie Lu and Hong Hu. 2019. Where Does It Go?:
2308.00245#57
2308.00245#59
2308.00245
[ "2305.10601" ]
2308.00245#59
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Refining Indirect-Call Targets with Multi-Layer Type Analysis. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. ACM, London, United Kingdom. https://doi.org/10.1145/3319535.3354244
[17] Wei Ma, Shangqing Liu, Wenhan Wang, Qiang Hu, Ye Liu, Cen Zhang, Liming Nie, and Yang Liu. 2023. The Scope of ChatGPT in Software Engineering: A Thorough Investigation. http://arxiv.org/abs/2305.12138 arXiv:2305.12138 [cs].
[18] Bertrand Meyer. 1997. Object-Oriented Software Construction, 2nd Edition. Prentice-Hall.
[19] OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt
[20] OpenAI. 2023. Function calling and other API updates. https://openai.com/blog/function-calling-and-other-api-updates
[21] OpenAI. 2023. GPT-4 Technical Report. http://arxiv.org/abs/2303.08774 arXiv:2303.08774 [cs].
[22] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
2308.00245#58
2308.00245#60
2308.00245
[ "2305.10601" ]
2308.00245#60
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Training language models to follow instructions with human feedback. http://arxiv.org/abs/2203.02155 arXiv:2203.02155 [cs].
[23] Vishal Pallagani, Bharath Muppasani, Keerthiram Murugesan, Francesca Rossi, Biplav Srivastava, Lior Horesh, Francesco Fabiano, and Andrea Loreggia. 2023. Understanding the Capabilities of Large Language Models for Automated Planning. http://arxiv.org/abs/2305.16151 arXiv:2305.16151 [cs].
[24] Jihyeok Park, Hongki Lee, and Sukyoung Ryu. 2022. A Survey of Parametric Static Analysis. ACM Comput. Surv. 54, 7 (2022), 149:1–149:37. https://doi.org/10.1145/3464457
[25] Hammond Pearce, Benjamin Tan, Baleegh Ahmad, Ramesh Karri, and Brendan Dolan-Gavitt. 2023. Examining Zero-Shot Vulnerability Repair with Large Language Models. In 2023 IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/SP46215.2023.00001
[26] Kexin Pei, David Bieber, Kensen Shi, Charles Sutton, and Pengcheng Yin. 2023. Can Large Language Models Reason about Program Invariants?. In Proceedings of the 40th International Conference on Machine Learning.
2308.00245#59
2308.00245#61
2308.00245
[ "2305.10601" ]
2308.00245#61
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
[27] Luke Salamone. 2021. What is Temperature in NLP? https://lukesalamone.github.io/posts/what-is-temperature/
[28] Jessica Shieh. 2023. Best practices for prompt engineering with OpenAI API | OpenAI Help Center. https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api
[29] Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023.
2308.00245#60
2308.00245#62
2308.00245
[ "2305.10601" ]
2308.00245#62
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Reflexion: Language Agents with Verbal Reinforcement Learning. http://arxiv.org/abs/2303.11366 arXiv:2303.11366 [cs].
[30] Weisong Sun, Chunrong Fang, Yudu You, Yun Miao, Yi Liu, Yuekang Li, Gelei Deng, Shenghan Huang, Yuchen Chen, Quanjun Zhang, Hanwei Qian, Yang Liu, and Zhenyu Chen. 2023. Automatic Code Summarization via ChatGPT: How Far Are We? http://arxiv.org/abs/2305.12865 arXiv:2305.12865 [cs].
[31] Haoye Tian, Weiqi Lu, Tsz On Li, Xunzhu Tang, Shing-Chi Cheung, Jacques Klein, and Tegawendé F. Bissyandé. 2023.
2308.00245#61
2308.00245#63
2308.00245
[ "2305.10601" ]
2308.00245#63
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Is ChatGPT the Ultimate Programming Assistant – How far is it? http://arxiv.org/abs/2304.11938 arXiv:2304.11938 [cs].
[32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc.
[33] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023.
2308.00245#62
2308.00245#64
2308.00245
[ "2305.10601" ]
2308.00245#64
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Voyager: An Open-Ended Embodied Agent with Large Language Models. http://arxiv.org/abs/2305.16291 arXiv:2305.16291 [cs].
[34] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. http://arxiv.org/abs/2201.11903 arXiv:2201.11903 [cs].
[35] Lilian Weng. 2023. LLM-powered Autonomous Agents. lilianweng.github.io (Jun 2023). https://lilianweng.github.io/posts/2023-06-23-agent
[36] Chunqiu Steven Xia and Lingming Zhang. 2023.
2308.00245#63
2308.00245#65
2308.00245
[ "2305.10601" ]
2308.00245#65
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT. http://arxiv.org/abs/2304.00385
[37] Frank F. Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. 2022. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming. ACM, San Diego, CA, USA, 1–10. https://doi.org/10.1145/3520312.3534862
[38] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. http://arxiv.org/abs/2305.10601 arXiv:2305.10601 [cs].
[39] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023.
2308.00245#64
2308.00245#66
2308.00245
[ "2305.10601" ]
2308.00245#66
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
ReAct: Synergizing Reasoning and Acting in Language Models. International Conference on Learning Representations (ICLR) (2023).
[40] Yizhuo Zhai, Yu Hao, Hang Zhang, Daimeng Wang, Chengyu Song, Zhiyun Qian, Mohsen Lesani, Srikanth V. Krishnamurthy, and Paul Yu. 2020. UBITect: A Precise and Scalable Method to Detect Use-before-Initialization Bugs in Linux Kernel. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2020).
[41] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A Survey of Large Language Models. arXiv:2303.18223 [cs.CL]
[42] Shen Zheng, Jie Huang, and Kevin Chen-Chuan Chang. 2023. Why Does ChatGPT Fall Short in Providing Truthful Answers? http://arxiv.org/abs/2304.10513 arXiv:2304.10513 [cs].
2308.00245#65
2308.00245
[ "2305.10601" ]
2307.16789#0
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
arXiv:2307.16789v2 [cs.AI] 3 Oct 2023

TOOLLLM: FACILITATING LARGE LANGUAGE MODELS TO MASTER 16000+ REAL-WORLD APIS

Yujia Qin1*, Shihao Liang1*, Yining Ye1, Kunlun Zhu1, Lan Yan1, Yaxi Lu1, Yankai Lin3†, Xin Cong1, Xiangru Tang4, Bill Qian4, Sihan Zhao1, Lauren Hong1, Runchu Tian1, Ruobing Xie5, Jie Zhou5, Mark Gerstein4, Dahai Li2,6, Zhiyuan Liu1†, Maosong Sun1†
1Tsinghua University 2ModelBest Inc. 3Renmin University of China 4Yale University 5WeChat AI, Tencent Inc. 6Zhihu Inc.
[email protected]

ABSTRACT

Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction.
2307.16789#1
2307.16789
[ "2302.13971" ]
2307.16789#1
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability on an out-of-distribution tool-use dataset: APIBench. The codes, trained models, and demo are publicly available at https://github.com/OpenBMB/ToolBench.

1 INTRODUCTION

Tool learning (Qin et al., 2023b) aims to unleash the power of large language models (LLMs) to effectively interact with various tools (APIs) to accomplish complex tasks. By integrating LLMs with APIs, we can greatly expand their utility and empower them to serve as efficient intermediaries between users and the vast ecosystem of applications. Although open-source LLMs, e.g., LLaMA (Touvron et al., 2023a), have achieved versatile capabilities through instruction tuning (Taori et al., 2023; Chiang et al., 2023), they still lack the sophistication to perform higher-level tasks, such as appropriately interacting with tools (APIs) to fulfill complex human instructions. This deficiency is because current instruction tuning largely focuses on basic language tasks, with a relative neglect of the tool-use domain. On the other hand, current state-of-the-art (SOTA) LLMs (e.g., ChatGPT (OpenAI,
2307.16789#0
2307.16789#2
2307.16789
[ "2302.13971" ]
2307.16789#2
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
* Indicates equal contribution. † Corresponding author.

Figure 1: Three phases of constructing ToolBench and how we train our API retriever and ToolLLaMA. During inference on an instruction, the API retriever recommends relevant APIs to ToolLLaMA, which performs multiple rounds of API calls to derive the final answer. The whole reasoning process is evaluated by ToolEval.

2022) and GPT-4 (OpenAI, 2023)), which have demonstrated impressive competencies in utilizing tools (Bubeck et al., 2023), are closed-source with their inner mechanisms opaque. This limits the democratization of AI technologies and the scope of community-driven innovation and development. In this regard, we deem it urgent to empower open-source LLMs to skillfully master diverse APIs.

Although prior works have explored building instruction tuning data for tool use (Li et al., 2023a; Patil et al., 2023; Tang et al., 2023; Xu et al., 2023b), they fail to fully stimulate the tool-use capabilities within LLMs and have inherent limitations: (1) limited APIs: they either fail to involve real-world APIs (e.g., REST APIs) (Patil et al., 2023; Tang et al., 2023) or consider only a small scope of APIs with poor diversity (Patil et al., 2023; Xu et al., 2023b; Li et al., 2023a); (2) constrained scenarios: existing works are confined to instructions that involve only a single tool, whereas real-world scenarios may require multiple tools to be interleaved for multi-round tool execution to solve a complex task. Besides, they often assume that users manually specify the ideal API set for a given instruction in advance, which is infeasible with a large collection of real-world APIs; (3) inferior planning and reasoning: existing works adopt either CoT (Wei et al., 2023) or ReACT (Yao et al., 2022) for model reasoning, which cannot fully elicit the capabilities stored in LLMs and thus fail to handle complex instructions. In addition, some works do not even execute APIs to obtain real responses (Patil et al., 2023; Tang et al., 2023), which serve as important information for subsequent model planning.
2307.16789#1
2307.16789#3
2307.16789
[ "2302.13971" ]
2307.16789#3
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
To facilitate tool-use capabilities within open-source LLMs, we introduce ToolLLM, a general tool-use framework including data construction, model training, and evaluation. As illustrated in Figure 1, we collect a high-quality instruction-tuning dataset, ToolBench. It is constructed automatically using ChatGPT (gpt-3.5-turbo-16k), which has been upgraded with function call (link) capabilities. The comparison between ToolBench and prior works is listed in Table 1. Specifically, the construction of ToolBench entails three phases:
2307.16789#2
2307.16789#4
2307.16789
[ "2302.13971" ]
2307.16789#4
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
• API Collection: we gather 16,464 representational state transfer (REST) APIs from RapidAPI (link), a platform that hosts massive real-world APIs provided by developers. These APIs span 49 diverse categories such as social media, e-commerce, and weather. For each API, we crawl detailed API documents from RapidAPI, including the functionality descriptions, required parameters, code snippets for API calls, etc. By comprehending these documents to learn to execute APIs, LLMs can generalize to new APIs unseen during training;
2307.16789#3
2307.16789#5
2307.16789
[ "2302.13971" ]
2307.16789#5
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
• Instruction Generation: we first sample APIs from the whole set and then prompt ChatGPT to generate diverse instructions for these APIs. To cover practical scenarios, we curate instructions
2307.16789#4
2307.16789#6
2307.16789
[ "2302.13971" ]
2307.16789#6
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
that involve both single-tool and multi-tool scenarios. This ensures that our model learns not only how to interact with individual tools but also how to combine them to accomplish complex tasks;

• Solution Path Annotation: each solution path may contain multiple rounds of model reasoning and real-time API calls to derive the final response. However, even the most sophisticated LLM, i.e., GPT-4, achieves a low pass rate for complex human instructions, making annotation inefficient. To this end, we develop a novel depth-first search-based decision tree (DFSDT) to bolster the planning and reasoning ability of LLMs. Compared with conventional ReACT, DFSDT enables LLMs to evaluate a multitude of reasoning paths and make deliberate decisions to either retract steps or proceed along a promising path. In experiments, DFSDT significantly improves annotation efficiency and successfully completes those complex instructions that cannot be fulfilled using ReACT.

Table 1: A comparison of our ToolBench to notable instruction-tuning datasets for tool learning. [The per-dataset yes/no marks for Real-world API?, Real API Call & Response?, Multi-tool Scenario?, API Retrieval?, and Multi-step Reasoning? did not survive extraction; the numeric columns are reproduced below.]

| Resource | Number of tools | Number of APIs | Number of Instances | Number of Real API Calls | Avg. Reasoning Traces |
|---|---|---|---|---|---|
| ToolBench (this work) | 3451 | 16464 | 126486 | 469585 | 4.0 |
| APIBench (Patil et al., 2023) | 3 | 1645 | 17002 | 0 | 1.0 |
| API-Bank (Li et al., 2023a) | 53 | 53 | 274 | 568 | 2.1 |
| ToolAlpaca (Tang et al., 2023) | 400 | 400 | 3938 | 0 | 1.0 |
| ToolBench (Xu et al., 2023b) | 8 | 232 | 2746 | 3926 | 5.9 |

To assess the tool-use capabilities of LLMs, we develop an automatic evaluator, ToolEval, backed by ChatGPT. It comprises two key metrics: (1) pass rate, which measures an LLM's ability to successfully execute an instruction within limited budgets, and (2) win rate, which compares the quality and usefulness of two solution paths. We demonstrate that ToolEval achieves a high correlation with human evaluation and provides a robust, scalable, and reliable assessment of machine tool use.

By fine-tuning LLaMA on ToolBench, we obtain ToolLLaMA. After evaluation based on our ToolEval, we derive the following findings:

• ToolLLaMA demonstrates a compelling capability to handle both single-tool and complex multi-tool instructions. As depicted in Figure 2, ToolLLaMA outperforms Text-Davinci-003 and Claude-2, achieves comparable performance to the "teacher model"
2307.16789#5
2307.16789#7
2307.16789
[ "2302.13971" ]
2307.16789#7
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
ChatGPT, and is only slightly inferior to GPT-4. Besides, ToolLLaMA exhibits robust generalization to previously unseen APIs, requiring only the API documentation to adapt to new APIs effectively. This flexibility allows users to incorporate novel APIs seamlessly, thus enhancing the model's practical utility.

• We show that our DFSDT serves as a general decision-making strategy to enhance the reasoning capabilities of LLMs. DFSDT broadens the search space by considering multiple reasoning traces and achieves significantly better performance than ReACT.
2307.16789#6
2307.16789#8
2307.16789
[ "2302.13971" ]
2307.16789#8
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
⠢ We train a neural API retriever, which alleviates the need for manual selection from the large API pool in practice. As shown in Figure 1, given an instruction, the API retriever recommends a set of relevant APIs, which are sent to ToolLLaMA for multi-round decision making to derive the final answer. Despite sifting through a large pool of APIs, the retriever exhibits remarkable retrieval precision, returning APIs closely aligned with the ground truth. ⠢ ToolLLaMA exhibits strong generalization performance on an out-of-distribution (OOD) dataset APIBench (Patil et al., 2023). Despite not training on any of the APIs or instructions on APIBench, ToolLLaMA performs on par with Gorilla, a pipeline specifically designed for APIBench. # 2 DATASET CONSTRUCTION We introduce the three-stage construction process of ToolBench: API collection (§ 2.1), instruction generation (§ 2.2), and solution path annotation (§ 2.3). All procedures are based on ChatGPT (gpt-3.5-turbo-16k), requiring minimal human supervision and can be easily extended to new APIs. 3 Preprint Figure 3: The hierarchy of RapidAPI (left) and the process of instruction generation (right). # 2.1 API COLLECTION We start by introducing RapidAPI and its hierarchy, followed by how we crawl and filter APIs. RapidAPI Hub RapidAPI is a leading API marketplace that connects developers with thousands of real-world APIs, streamlining the process of integrating diverse services into applications. Developers can test and connect with various APIs by registering only a RapidAPI key. All APIs in RapidAPI can be classified into 49 coarse-grained categories (link), such as sports, finance, and weather. The categories associate an API with the most relevant topic. Additionally, the hub also provides 500+ fine-grained categorization called collections (link), e.g., Chinese APIs and database APIs. APIs in the same collection share a common characteristic and often have similar functionalities or goals. Hierarchy of RapidAPI As shown in Figure 3, each tool may be composed of multiple APIs.
2307.16789#7
2307.16789#9
2307.16789
[ "2302.13971" ]
2307.16789#9
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
For each tool, we crawl the following information: the name and description of the tool, the URL of the host, and all the available APIs belonging to the tool; for each API, we record its name, description, HTTP method, required parameters, optional parameters, request body, executable code snippets for API call, and an example API call response. This rich and detailed metadata serves as a valuable resource for LLMs to understand and effectively use the APIs, even in a zero-shot manner. Initially, we gathered 10, 853 tools (53, 190 APIs) from RapidAPI. However, the API Filtering quality and reliability of these APIs can vary significantly. In particular, some APIs may not be well-maintained, such as returning 404 errors or other internal errors. To this end, we perform a rigorous filtering process (details in appendix A.1) to ensure that the ultimate tool set of ToolBench is reliable and functional. Finally, we only retain 3, 451 high-quality tools (16, 464 APIs).
2307.16789#8
2307.16789#10
2307.16789
[ "2302.13971" ]
2307.16789#10
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
2.2 INSTRUCTION GENERATION

Different from prior works, we specifically focus on two crucial aspects of instruction generation: (1) diversity: to train LLMs to handle a wide range of API usage scenarios, thereby boosting their generalizability and robustness; and (2) multi-tool usage: to mirror real-world situations that often demand the interplay of multiple tools, improving the practical applicability and flexibility of LLMs. To this end, instead of brainstorming instructions from scratch and then searching for relevant APIs, we sample different combinations of APIs and craft various instructions that involve them.

Generating Instructions for APIs. Define the total API set as S_API; at each time, we sample a few APIs, S^sub_N = {API_1, ..., API_N}, from S_API. We prompt ChatGPT to understand the functionalities of these APIs and then generate (1) possible instructions (Inst_*) that involve APIs in S^sub_N, and (2) the relevant APIs (S^rel_* ⊆ S^sub_N) for each instruction, i.e., {[S^rel_1, Inst_1], ..., [S^rel_N', Inst_N']}, where N' denotes the number of generated instances. These (instruction, relevant API) pairs will be used for training the API retriever in §3.1.
2307.16789#9
2307.16789#11
2307.16789
[ "2302.13971" ]
2307.16789#11
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
We use different sampling strategies (introduced later) to cover all APIs and most of their combinations, thus ensuring the diversity of our instructions.

The prompt for ChatGPT is composed of (1) a general description of the intended instruction generation task, (2) comprehensive documentation of each API in S^sub_N, which helps ChatGPT understand their functionality and interplay, and (3) three in-context seed examples {seed_1, seed_2, seed_3}. Each seed example is an ideal instruction generation written by human experts. These seed examples are leveraged to better regulate ChatGPT's behavior through in-context learning. In total, we wrote 12 / 36 diverse seed examples (S_seed) for the single-tool / multi-tool setting, and randomly sample three examples each time. Detailed prompts for instruction generation are described in Appendix A.7. Overall, the generation process can be formulated as follows:

ChatGPT( {[S^rel_1, Inst_1], ..., [S^rel_N', Inst_N']} | API_1, ..., API_N, seed_1, ..., seed_3 ),
with {API_1, ..., API_N} ⊆ S_API and {seed_1, seed_2, seed_3} ⊆ S_seed.

Sampling Strategies for Different Scenarios. As shown in Figure 3, for the single-tool instructions (I1), we iterate over each tool and generate instructions for its APIs. However, for the multi-tool setting, since the interconnections among different tools in RapidAPI are sparse, randomly sampling tool combinations from the whole tool set often yields a set of irrelevant tools that cannot be covered by a single instruction in a natural way. To address this sparsity issue, we leverage the RapidAPI hierarchy information. Since tools belonging to the same RapidAPI category or collection are generally related in functionality and goals, we randomly select 2-5 tools from the same category / collection and sample at most 3 APIs from each tool to generate the instructions. We denote the generated instructions as intra-category multi-tool instructions (I2) and intra-collection multi-tool instructions (I3), respectively. Through rigorous human evaluation, we find that instructions generated in this way already have a high diversity that covers various practical scenarios. We also provide a visualization of the instructions using Atlas (link) to support our claim.

After generating the initial set of instructions, we further filter out those with hallucinated relevant APIs by checking whether the APIs exist in S^sub_N. Finally, we collect nearly 200k qualified (instruction, relevant API) pairs, including 87413, 84815, and 25251 instances for I1, I2, and I3, respectively.
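The prompt assembly and the intra-category sampling just described can be sketched in a few lines. The following is a simplified illustration under stated assumptions: API documentation entries and seed examples are plain strings, and the exact task wording and data layout are invented for the example rather than taken from Appendix A.7 of the paper.

```python
import random

def build_instruction_gen_prompt(api_docs: list[str], seed_pool: list[str]) -> str:
    """Assemble an instruction-generation prompt from (1) a task description,
    (2) the documentation of the sampled APIs, and (3) three seed examples.
    The task wording below is illustrative, not the paper's actual prompt."""
    task_description = (
        "You are given several APIs. Generate diverse user instructions that "
        "require these APIs, and list the relevant APIs for each instruction."
    )
    seeds = random.sample(seed_pool, 3)          # three in-context seed examples
    parts = [task_description, "API documentation:"]
    parts += [f"- {doc}" for doc in api_docs]
    parts.append("Examples:")
    parts += seeds
    return "\n".join(parts)

def sample_intra_category(tools_by_category: dict[str, list[dict]]) -> list[dict]:
    """Sample 2-5 tools from one category and at most 3 APIs per tool (I2)."""
    category = random.choice(list(tools_by_category))
    pool = tools_by_category[category]
    tools = random.sample(pool, k=min(len(pool), random.randint(2, 5)))
    sampled = []
    for tool in tools:
        apis = tool["apis"]
        sampled += random.sample(apis, k=min(3, len(apis)))
    return sampled
```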
2307.16789#10
2307.16789#12
2307.16789
[ "2302.13971" ]
2307.16789#12
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
We denote the generated instructions as intra-category multi-tool instructions (I2) and intra-collection multi-tool instructions (I3), respectively. Through rigorous human evaluation, we find that instructions generated in this way already have a high diversity that covers various practical scenarios. We also provide visualization for instructions using Atlas (link) to support our claim. After generating the initial set of instructions, we further filter those with the hallucinated relevant APIs by assessing whether they exist in Ssub N . Finally, we collect nearly 200k qualified (instruction, relevant API) pairs, including 87413, 84815, and 25251 instances for I1, I2, and I3, respectively. 2.3 SOLUTION PATH ANNOTATION As shown in Figure 4, given an instruction Instâ , we prompt ChatGPT to search for a valid action sequence: {a1, · · · , aN}. Such a multi-step decision-making process is cast as a multi-round conver- sation for ChatGPT. At each round t, the model generates an action at based on previous interactions, i.e., ChatGPT(at|{a1, r1, · · · , atâ
ChatGPT(a_t | {a_1, r_1, · · · , a_{t-1}, r_{t-1}}, Inst_*), where r_* denotes the real API response. For each a_t, ChatGPT should specify its "thought", which API to use, and the specific parameters for this API, i.e., a_t has the following format: "Thought: · · · , API Name: · · · , Parameters: · · · ". To leverage the function call feature of ChatGPT, we treat each API as a special function and feed its API documentation into ChatGPT's function field. In this way, the model understands how to call the API.
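As an illustration of this conversation format, the sketch below drives one annotation round. The helpers `call_chatgpt` and `execute_api`, and the exact message schema, are assumptions; the sketch simply mirrors the "Thought / API Name / Parameters" convention and the function-call feedback loop described above.

```python
def run_annotation_round(messages, api_functions, call_chatgpt, execute_api):
    """One round t of the multi-round annotation conversation (sketch).

    messages      : prior turns {a_1, r_1, ..., a_{t-1}, r_{t-1}} plus Inst_*
    api_functions : sampled API docs (plus the two Finish functions) in the
                    ChatGPT "functions" format
    """
    # a_t: a thought plus exactly one function call chosen by the model.
    reply = call_chatgpt(messages=messages, functions=api_functions)
    action = {
        "thought": reply.get("content", ""),
        "api_name": reply["function_call"]["name"],
        "parameters": reply["function_call"]["arguments"],
    }
    messages.append({"role": "assistant", "content":
                     "Thought: {thought}, API Name: {api_name}, "
                     "Parameters: {parameters}".format(**action)})

    # r_t: the real API response, appended so the next round can condition on it.
    if action["api_name"] not in ("Finish with Final Answer", "Finish by Giving Up"):
        response = execute_api(action["api_name"], action["parameters"])
        messages.append({"role": "function",
                         "name": action["api_name"],
                         "content": response})
    return messages, action
```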
For each instruction Inst_*, we feed all the sampled APIs S^sub_N to ChatGPT as available functions. To let ChatGPT finish an action sequence, we define two additional functions, i.e., "Finish with Final Answer" and "Finish by Giving Up". The former function has a parameter that corresponds to a detailed final answer to the original instruction, while the latter function is designed for cases where the provided APIs cannot complete the original instruction after multiple API call attempts.

Depth First Search-based Decision Tree   In our pilot studies, we find that CoT (Wei et al., 2023) or ReACT (Yao et al., 2022) has inherent limitations: (1) error propagation: a mistaken action may propagate the errors further and cause the model to be trapped in a faulty loop, such as continually calling an API in a wrong way or hallucinating APIs; (2) limited exploration: CoT or ReACT only explores one possible direction, leading to limited exploration of the whole action space. Hence even GPT-4 often fails to find a valid solution path, making annotation difficult. To this end, we propose to construct a decision tree to expand the search space and increase the possibility of finding a valid path. As depicted in Figure 4, our DFSDT allows the model to assess different reasoning paths and choose to either (1) proceed along a promising path or (2) abandon an existing node by calling the
"Finish by Giving Up" function and expand a new node. During node expansion, to diversify the child nodes and expand the search space, we prompt ChatGPT with the information of the previously generated nodes and explicitly encourage the model to generate a distinct node. For the searching process, we prefer depth-first search (DFS) instead of breadth-first search (BFS) because the annotation can be finished as long as one valid path is found; using BFS would cost excessive OpenAI API calls. More details are described in appendix A.8. We perform DFSDT for all the generated instructions and only retain the passed solution paths. Ultimately, we generate 126,486 (instruction, solution path) pairs, which are used to train ToolLLaMA in § 3.2.

3 EXPERIMENTS

In this section, we investigate the performance of the ToolLLM framework. We first introduce the evaluation metric and evaluate the efficacy of the API retriever and DFSDT in § 3.1. Then we present the main experiments in § 3.2, followed by a generalization experiment in § 3.3.

3.1 PRELIMINARY EXPERIMENTS

ToolEval   Considering the APIs' temporal variability on RapidAPI and the infinite potential solution paths for an instruction, it is infeasible to annotate a fixed ground-truth solution path for each test instruction. Moreover, when comparing different models, it is crucial to ensure they employ the same version of APIs during evaluation. Considering that human evaluation can be time-consuming, we follow AlpacaEval (Li et al., 2023b) to develop an efficient evaluator, ToolEval, based on ChatGPT, which incorporates two evaluation metrics (details in appendix A.5): (1) Pass Rate: the proportion of instructions successfully completed within limited budgets. This metric measures the executability of instructions for an LLM and can be seen as a basic requirement for ideal tool use; and (2) Win Rate: we provide an instruction and two solution paths to the ChatGPT evaluator and obtain its preference (i.e., which one is better). We pre-define a set of criteria for both metrics, and these criteria are organized as prompts for our ChatGPT evaluator. We evaluate multiple times based on ChatGPT to improve the reliability, and then calculate the average results from the evaluator.
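The aggregation itself is a simple majority vote over repeated judgements. A minimal sketch is shown below; `ask_pass_judgement` and `ask_preference` stand in for prompts sent to the ChatGPT evaluator (the actual criteria are listed in appendix A.5).

```python
from collections import Counter

def vote_pass(instruction, solution_path, ask_pass_judgement, n_votes=4):
    """Majority-vote Pass/Fail judgement for one solution path (sketch)."""
    votes = [ask_pass_judgement(instruction, solution_path) for _ in range(n_votes)]
    return Counter(votes).most_common(1)[0][0]   # e.g. "Pass", "Fail", "Unsure"

def vote_preference(instruction, path_a, path_b, ask_preference, n_votes=4):
    """Majority-vote preference between two solution paths (sketch)."""
    votes = [ask_preference(instruction, path_a, path_b) for _ in range(n_votes)]
    return Counter(votes).most_common(1)[0][0]   # e.g. "A", "B", "Tie"
```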
Through rigorous testing (details in appendix A.5), we find that ToolEval demonstrates a high agreement of 87.1% in pass rate and 80.3% in win rate with human annotators. This shows that ToolEval can reflect and represent human evaluation to a large extent.

Efficacy of API Retriever   The API retriever aims to retrieve APIs relevant to an instruction. We employ Sentence-BERT (Reimers & Gurevych, 2019) to train a dense retriever based on BERT-base (Devlin et al., 2019). The API retriever encodes the instruction and the API document into two embeddings, and calculates their relevance with embedding similarity. For training, we regard the relevant APIs of each instruction generated in § 2.2 as positive examples and sample a few other
APIs as negative examples for contrastive learning. For baselines, we choose BM25 (Robertson et al., 2009) and OpenAI's text-embedding-ada-002 (link). We evaluate the retrieval performance using NDCG (Järvelin & Kekäläinen, 2002). We train and evaluate our model on single-tool instructions (I1), intra-category multi-tool instructions (I2), and intra-collection multi-tool instructions (I3).

Table 2: Our API retriever v.s. two baselines for three types of instructions (I1, I2, I3). We report NDCG@1 and NDCG@5.

Method | I1 (@1 / @5) | I2 (@1 / @5) | I3 (@1 / @5) | Average (@1 / @5)
BM25   | 18.4 / 19.7  | 12.0 / 11.0  | 25.2 / 20.4  | 18.5 / 17.0
Ada    | 57.5 / 58.8  | 36.8 / 30.7  | 54.6 / 46.8  | 49.6 / 45.4
Ours   | 84.2 / 89.7  | 68.2 / 77.9  | 81.7 / 87.1  | 78.0 / 84.9

Table 3: Pass rate of different reasoning strategies for three types of instructions (I1, I2, I3) based on ChatGPT.

Method  | I1   | I2   | I3   | Average
ReACT   | 37.8 | 40.6 | 27.6 | 35.3
ReACT@N | 49.4 | 49.4 | 34.6 | 44.5
DFSDT   | 58.0 | 70.6 | 62.8 | 63.8

As shown in Table 2, our API retriever consistently outperforms baselines across all settings, indicating its feasibility in real-world scenarios with massive APIs. Also, the NDCG score of I1 is generally higher than I2 and I3, which means single-tool instruction retrieval is simpler than the multi-tool setting.
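At inference time, the retrieval step reduces to embedding the instruction and every API document and taking the top-scoring APIs. The sketch below illustrates this with the sentence-transformers library; the model path is a placeholder, not a released checkpoint.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder path: the actual retriever is a BERT-base model trained with
# contrastive learning on the (instruction, relevant API) pairs from § 2.2.
retriever = SentenceTransformer("path/to/toolbench-api-retriever")

def retrieve_top_k_apis(instruction, api_documents, k=5):
    """Return the k API documents most relevant to the instruction (sketch)."""
    query_emb = retriever.encode(instruction, convert_to_tensor=True)
    doc_embs = retriever.encode(api_documents, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, doc_embs)[0]        # embedding similarity
    top = scores.topk(k=min(k, len(api_documents)))
    return [api_documents[i] for i in top.indices.tolist()]
```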
Superiority of DFSDT over ReACT   Before solution path annotation, we validate the efficacy of DFSDT. Based on ChatGPT, we compare DFSDT and ReACT using the pass rate metric. Since DFSDT consumes more OpenAI API calls than ReACT, for a fairer comparison, we also establish a "ReACT@N" baseline, which conducts multiple times of ReACT until the total costs reach the same level of DFSDT. Once a valid solution is found by ReACT@N, we deem it a pass. From Table 3, it can be observed that DFSDT significantly outperforms the two baselines in all scenarios. Since we only retain those passed annotations as the training data, given the same budgets, using DFSDT could annotate more instructions. This makes DFSDT a more efficient way that saves the total annotation cost. We also find that the performance improvement of DFSDT is more evident for harder instructions (i.e., I2 and I3) than those simpler instructions (I1). This means that by expanding the search space, DFSDT can better solve those difficult, complex instructions that are unanswerable by the vanilla ReACT no matter how many times it is performed.
Involving such â hard examplesâ in our dataset can fully elicit the tool-use capabilities for those complex scenarios. 3.2 MAIN EXPERIMENTS ToolLLaMA We fine-tune LLaMA-2 7B model (Touvron et al., 2023b) using the instruction- solution pairs. The original LLaMA-2 model has a sequence length of 4096, which is not enough under our setting since the API response can be very long. To this end, we use positional interpola- tion (Chen et al., 2023) to extend the context length to 8192 (training details in appendix A.3). Settings Ideally, by scaling the number and diversity of instructions and unique tools in the training data, ToolLLaMA is expected to generalize to new instructions and APIs unseen during training. This is meaningful since users can define customized APIs and expect ToolLLaMA to adapt according to the documentation. To this end, we strive to evaluate the generalization ability of ToolLLaMA at three levels: (1) Inst.: unseen instructions for the same set of tools in the training data, (2) Tool: unseen tools that belong to the same (seen) category of the tools in the training data, and (3) Cat.: unseen tools that belong to a different (unseen) category of tools in the training data. We perform experiments on three scenarios: single-tool instructions (I1), intra-category multi-tool instructions (I2), and intra-collection multi-tool instructions (I3). For I1, we conduct the evaluation for the aforementioned three levels (I1-Inst., I1-Tool, and I1-Cat.); for I2, since the training instructions already involve different tools of the same category, we only perform level 1 and level 3 for the generalization evaluation (I2-Inst. and I2-Cat.); similarly, we only perform level 1 generalization for I3 (I3-Inst.) since it already covers instructions that involve various combinations of tools from different categories (the tools in a RapidAPI collection may come from different RapidAPI categories). For each test instruction, we feed the ground-truth (oracle) APIs Ssub N to each model. This simulates the scenario where the user specifies the API set they prefer.
Baselines We choose two LLaMA variants that have been fine-tuned for general-purpose dialogue, i.e., Vicuna (Chiang et al., 2023) and Alpaca (Taori et al., 2023). We also choose the â teacher modelâ 7 Preprint Model ChatGPT Claude-2 Text-Davinci-003 GPT4 Method ReACT DFSDT ReACT DFSDT ReACT DFSDT ReACT DFSDT I1-Inst. Pass Win 41.5 - 60.5 54.5 31.0 5.5 38.0 20.5 28.5 12.0 40.3 43.5 60.0 53.5 60.0 67.5 I1-Tool Pass Win 44.0 - 62.0 65.0 27.8 3.5 44.3 31.0 35.3 20.0 43.8 44.0 58.8 50.0 67.8 71.5 I1-Cat. Pass Win 44.5 - 57.3 60.5 33.8 5.5 43.3 18.5 31.0 20.0 46.8 46.0 63.5 53.5 66.5 67.0 I2-Inst. Pass Win 42.5 - 72.0 75.0 35.0 6.0 36.8 17.0 29.8 8.5 40.5 37.0 65.8 67.0 79.5 73.3 I2-Cat. Pass Win 46.5 - 71.5 64.8 31.5 6.0 33.5 20.5 29.8 14.5 43.3 42.0 60.3 72.0 63.3 77.5 I3-Inst.
Pass Win 22.0 - 69.0 62.0 47.5 14.0 65.0 28.0 45.0 24.0 63.0 46.0 78.0 47.0 84.0 71.0 Vicuna Alpaca ToolLLaMA ReACT & DFSDT ReACT & DFSDT ReACT DFSDT DFSDT-Retriever 0.0 0.0 25.0 57.0 64.0 0.0 0.0 45.0 55.0 62.3 0.0 0.0 29.0 61.0 64.0 0.0 0.0 42.0 55.3 59.0 0.0 0.0 33.0 62.0 60.5 0.0 0.0 47.5 54.5 55.0 0.0 0.0 30.5 77.0 81.5 0.0 0.0 50.8 68.5 68.5 0.0 0.0 31.5 77.0 68.5 0.0 0.0 41.8 58.0 60.8 0.0 0.0 25.0 66.0 65.0 0.0 0.0 55.0 69.0 73.0 Average Pass Win 40.2 - 64.3 64.8 34.4 6.8 43.5 22.6 33.2 16.5 46.3 43.1 64.4 57.2 70.4 71.1 0.0 0.0 29.0 66.7 67.3 0.0 0.0 47.0 60.0 63.1 Table 4: Main experiments of ToolBench. Win rate is calculated by comparing each model with ChatGPT- ReACT. A win rate higher than 50% means the model performs better than ChatGPT-ReACT.
Apart from ToolLLaMA-DFSDT-Retriever, all methods use the oracle API retriever (i.e., ground truth API). ChatGPT, Text-Davinci-003, GPT-4, and Claude-2 as baselines, and apply both DFSDT and ReACT to them. When calculating the win rate, each model is compared with ChatGPT-ReACT. The results are placed in Table 4, from which we derive that: 1.
Although we conduct prompt engineering extensively, both Vicuna and Alpaca fail to pass any instruction (pass rate & win rate = 0), which means their instruction-following abilities do not cover the tool-use domain. This underscores the deficiency of current instruction tuning attempts, which largely focus on language skills; 2. For all LLMs, using DFSDT significantly outperforms ReACT in both pass rate and win rate. Notably, ChatGPT +DFSDT surpasses GPT-4+ReACT in pass rate and performs comparably in win rate. This underscores the superiority of DFSDT over ReACT in decision-making; 3. When using DFSDT, ToolLLaMA performs much better than Text-Dainci-003 and Claude-2, and achieves a result almost on par with ChatGPT (the teacher model). In general, despite generalizing to unseen instructions and tools, ToolLLaMA +DFSDT demonstrates competitive generalization performance in all scenarios, achieving a pass rate second to GPT4+DFSDT. Overall, these results demonstrate that ToolBench can sufficiently elicit the tool-use capabilities within LLMs and empower them to skillfully master even unseen APIs for various instructions. Integrating API Retriever with ToolLLaMA In real-world scenarios, asking users to manually recommend APIs from a large pool may not be practical. To emulate this practical setting and test the efficiency of our API retriever, we feed the top 5 APIs (instead of the ground truth APIs Ssub N ) recommended by our API retriever to ToolLLaMA. As shown in Table 4, using retrieved APIs even improves the performance (both pass rate and win rate) compared to the ground truth API set. This is because many APIs in the ground truth API set can be replaced by other similar APIs with better functionalities, which our API retriever can successfully identify. In other words, our retriever expands the search space of relevant APIs and finds more appropriate ones for the current instruction. It provides robust evidence of the excellent ability of our API retriever to retrieve relevant APIs, especially considering the vast pool (16, 000+) of APIs from which our API retriever selects. 3.3 OUT-OF-DISTRIBUTION (OOD) GENERALIZATION TO APIBENCH (PATIL ET AL., 2023)
Settings We further extend ToolLLaMA to an OOD dataset APIBench to validate its generaliza- tion ability. To assess the generalization ability of ToolLLaMA in these new domains, we equip ToolLLaMA with two retrievers: our trained API retriever and the oracle retriever. We evaluate three domains of APIBench, i.e., TorchHub, TensorHub, and HuggingFace. We compare ToolLLaMA with Gorilla, a LLaMA-7B model fine-tuned using the training data of APIBench. Following the original paper, we adopt two official settings for Gorilla: zero-shot setting (ZS) and retrieval-aware setting (RS). The latter means (RS) the retrieved APIs are sent to the model as part of the prompts; while the former (ZS) does not incorporate the APIs in the prompts when training the model. We adopt the official evaluation metric and report the AST accuracy along with the hallucination rates.
8 Preprint Method TorchHub Hallu. (â ) AST (â ) Hallu. (â ) AST (â ) Hallu. (â ) AST (â ) HuggingFace TensorHub ToolLLaMA + Our Retriever Gorilla-ZS + BM25 Gorilla-RS + BM25 10.60 46.90 6.42 16.77 10.51 15.71 15.70 17.20 5.91 51.16 44.62 50.00 6.48 20.58 2.77 40.59 34.31 41.90 ToolLLaMA + Oracle Gorilla-ZS + Oracle Gorilla-RS + Oracle 8.66 52.88 6.97 88.80 44.36 89.27 14.12 39.25 6.99 85.88 59.14 93.01 7.44 12.99 2.04 88.62 83.21 94.16 Table 5: OOD generalization experiments on APIBench. For the Gorilla entries, ZS / RS means that Gorilla was trained in a zero-shot / retrieval-aware setting on APIBench. We report hallucination rate and AST accuracy. Results The results are shown in Table 5. In general, ToolLLaMA achieves remarkable OOD generalization performance on all three datasets, despite being trained on a completely different API domain and instruction domain. Specifically, ToolLLaMA+our API retriever outperforms Gorilla+BM25 from both training settings (ZS / RS) in terms of AST accuracy on HuggingFace and TorchHub. With the same oracle retriever, ToolLLaMA is consistently superior when compared to Gorilla-ZS. It should be noted that Gorilla model cannot be generalized to our ToolBench dataset due to our more complex settings, such as the multi-tool use and multi-step reasoning. # 4 RELATED WORK
Tool Learning Recent studies have shed light on the burgeoning capabilities of LLMs in mastering tools and making decisions within complex environments (Vemprala et al., 2023; Nakano et al., 2021; Qin et al., 2023a; Shen et al., 2023; Wu et al., 2023; Schick et al., 2023; Hao et al., 2023; Qian et al., 2023; Song et al., 2023; Zhuang et al., 2023; Gao et al., 2023). Gaining access to external tools endows LLMs with real-time factual knowledge (Yang et al., 2023), multimodal functionalities (Gupta & Kembhavi, 2023), and specialized skills in vertical domains (Jin et al., 2023). However, open-source LLMs still lag far behind SOTA LLMs in tool use, and how tool-use ability is acquired by SOTA LLMs remains unclear.
In this paper, we aim to bridge this gap and fathom the underlying mechanism. Instruction Tuning Instruction tuning enhances LLMs in understanding human instructions and generating proper responses (Wei et al., 2021; Bach et al., 2022; Mishra et al., 2022). Since manually annotating instruction tuning data is time-consuming, self-instruct (Wang et al., 2022) proposes to generate high-quality data from SOTA LLMs, which facilitates a recent trend of data curation for multi-turn dialogue (Taori et al., 2023; Chiang et al., 2023; Xu et al., 2023a; Penedo et al., 2023; Ding et al., 2023). However, compared with the dialogue, tool learning is inherently more challenging given the vast diversity of APIs and the complexity of multi-tool instructions. As a result, even GPT-4 often fails to find a valid solution path. However, existing tool-learning dataset (Li et al., 2023a; Patil et al., 2023; Tang et al., 2023; Xu et al., 2023b) and their construction methods cannot effectively address real human needs as mentioned in § 1. Instead, our ToolBench is designed for practical scenarios and improves the previous pipeline for tool-learning data construction. Prompting LLMs for Decision Making Prompting facilitates LLMs to decompose high-level tasks into sub-tasks and generate grounded plans (Ahn et al., 2022; Huang et al., 2022a;b; Ye et al., 2023). ReACT (Yao et al., 2022) integrates reasoning with acting by allowing LLMs to give a proper reason for an action and incorporating environmental feedback for reasoning. However, these studies do not incorporate a mechanism for decision retraction, which becomes problematic as an initial error can lead to a cascade of subsequent errors. Recently, Reflexion (Shinn et al., 2023) mitigates this issue by asking LLMs to reflect on previous failures. Our DFSDT extends Reflexion to a more general method by allowing LLMs to assess different reasoning paths and select the most promising one. It should be noted DFSDT shares a similar idea to a concurrent work: tree-of-thought (ToT) reasoning (Yao et al., 2023).
However, our DFSDT targets general decision-making problems where the decision space is infinite, compared to ToTâ s relatively simple tasks that can be addressed by brute-force search, such as Game of 24 and Crosswords. The distinct target between DFSDT and ToT determines the significant difference in the implementation details. 9 Preprint # 5 CONCLUSION In this work, we introduce how to elicit the tool-use capabilities within LLMs. We first present an instruction tuning dataset, ToolBench, which covers 16k+ real-world APIs and various practical use- case scenarios including both single-tool and multi-tool tasks. The construction of ToolBench purely uses ChatGPT and requires minimal human supervision.
Moreover, we propose DFSDT to reinforce the planning and reasoning ability of LLMs, enabling them to navigate through reasoning paths strategically. For efficient evaluation of tool learning, we devise an automatic evaluator ToolEval. By fine-tuning LLaMA on ToolBench, the obtained model ToolLLaMA matches the performance of ChatGPT and exhibits remarkable generalization ability to unseen APIs. Besides, we develop a neural API retriever to recommend relevant APIs for each instruction. The retriever can be integrated with ToolLLaMA as a more automated tool-use pipeline. In the experiments, we demonstrate the generalization ability of our pipeline to out-of-distribution domains. In general, this work paves the way for future research in the intersection of instruction tuning and tool use for LLMs.
# REFERENCES Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. ArXiv preprint, abs/2204.01691, 2022. Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault F´evry, et al. Promptsource: An integrated development environment and repository for natural language prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 93â 104, 2022. S´ebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing.
Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https: //lmsys.org/blog/2023-03-30-vicuna/. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171â 4186, Minneapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https:// aclanthology.org/N19-1423.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou.
Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn. arXiv preprint arXiv:2306.08640, 2023. Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14953â 14962, 2023. Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. arXiv preprint arXiv:2305.11554, 2023.
10 Preprint Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesv´ari, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 9118â
9147. PMLR, 2022a. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. ArXiv preprint, abs/2207.05608, 2022b. Kalervo J¨arvelin and Jaana Kek¨al¨ainen.
Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422â 446, 2002. Qiao Jin, Yifan Yang, Qingyu Chen, and Zhiyong Lu. Genegpt: Augmenting large language models with domain tools for improved access to biomedical information. ArXiv, 2023. Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li.
Api-bank: A benchmark for tool-augmented llms. arXiv preprint arXiv:2304.08244, 2023a. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 2023b.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3470â 3487, 2022. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al.
Webgpt: Browser-assisted question-answering with human feedback. ArXiv preprint, abs/2112.09332, 2021. # OpenAI. OpenAI: Introducing ChatGPT, 2022. URL https://openai.com/blog/chatgpt. OpenAI. Gpt-4 technical report, 2023. Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023.
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. Cheng Qian, Chi Han, Yi R Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. Creator:
Disentangling abstract and concrete reasonings of large language models through tool creation. arXiv preprint arXiv:2305.14318, 2023. Yujia Qin, Zihan Cai, Dian Jin, Lan Yan, Shihao Liang, Kunlun Zhu, Yankai Lin, Xu Han, Ning Ding, Huadong Wang, et al. Webcpm: Interactive web search for chinese long-form question answering. arXiv preprint arXiv:2305.06849, 2023a. Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023b. Nils Reimers and Iryna Gurevych.
Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019. Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333â 389, 2009. Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom.
Toolformer: Language models can teach themselves to use tools. ArXiv preprint, abs/2302.04761, 2023. 11 # Preprint Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface, 2023. Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao.
Reflexion: Language agents with verbal reinforcement learning, 2023. Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li. Restgpt: Connecting large language models with real-world applications via restful apis. arXiv preprint arXiv:2306.06624, 2023. Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun.
Toolalpaca: General- ized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301, 2023. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b. Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor. Chatgpt for robotics: Design principles and model abilities. Technical Report MSR-TR-2023-8, Microsoft, February 2023.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023. Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan.
Vi- sual chatgpt: Talking, drawing and editing with visual foundation models. ArXiv preprint, abs/2303.04671, 2023. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions, 2023a. Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, and Jian Zhang. On the tool manipulation capability of open-source large language models. arXiv preprint arXiv:2305.16504, 2023b. Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, and Xindong Wu.
Chatgpt is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling. arXiv preprint arXiv:2306.11489, 2023. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. ArXiv preprint, abs/2210.03629, 2022. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
12 Preprint Yining Ye, Xin Cong, Yujia Qin, Yankai Lin, Zhiyuan Liu, and Maosong Sun. Large language model as autonomous decision maker. arXiv preprint arXiv:2308.12519, 2023. Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, and Chao Zhang. Toolqa: A dataset for llm question answering with external tools. arXiv preprint arXiv:2306.13304, 2023.
13 Preprint APPENDIX A IMPLEMENTATION DETAILS A.1 DETAILS FOR FILTERING RAPIDAPI We perform a rigorous filtering process to ensure that the ultimate tool set of ToolBench is reliable and functional. The filtering process is as follows: (1) initial testing: we begin by testing the basic functionality of each API to ascertain whether they are operational. We discard any APIs that do not meet this basic criterion; (2) example response evaluation: we make API calls to obtain an example response. Then we evaluate their effectiveness by response time and quality. APIs that consistently exhibit a long response time are omitted. Also, we filter out the APIs with low-quality responses, such as HTML source codes or other error messages. A.2 API RESPONSE COMPRESSION When examining the response returned by each API, we discover that some responses may contain redundant information and are too long to be fed into LLMs. This may lead to problems due to the limited context length of LLMs. Therefore, we perform a response compression to reduce the length of API responses while maintaining their critical information. Since each API has a fixed response format, we use ChatGPT to analyze one response example and remove unimportant keys within the response to reduce its length. The prompt of ChatGPT contains the following information for each API: (1) tool documentation, which includes tool name, tool description, API name, API description, parameters, and an example API response. This gives ChatGPT a hint of the APIâ s functionality; (2) 3 in-context learning examples, each containing an original API response and a compressed response schema written by experts. In this way, we obtain the response compression strategies for all APIs. During inference, when the API response length exceeds 1024 tokens, we compress the response by removing unimportant information. If the compressed response is still longer than 1024, we only retain the first 1024 tokens. Through human evaluation, we find that this compression retains important information contained in the API response and successfully removes the noises.
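A simplified version of this compression step might look as follows. The key-dropping schema is produced by ChatGPT per API, so `unimportant_keys` and the `count_tokens` helper are placeholders rather than the actual implementation.

```python
import json

def compress_response(response_json, unimportant_keys, count_tokens, max_tokens=1024):
    """Compress an API response before feeding it to the LLM (sketch)."""
    raw = json.dumps(response_json)
    if count_tokens(raw) <= max_tokens:
        return raw

    # 1) Drop keys that the per-API compression schema marked as unimportant.
    def strip(obj):
        if isinstance(obj, dict):
            return {k: strip(v) for k, v in obj.items() if k not in unimportant_keys}
        if isinstance(obj, list):
            return [strip(v) for v in obj]
        return obj

    compressed = json.dumps(strip(response_json))

    # 2) If the compressed response is still too long, keep only the first
    #    max_tokens tokens (whitespace split as a crude stand-in for a tokenizer).
    if count_tokens(compressed) > max_tokens:
        compressed = " ".join(compressed.split()[:max_tokens])
    return compressed
```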
A.3 DETAILS FOR TRAINING TOOLLLAMA

We train the model in a multi-round conversation mode. For the training data format, we keep the input and output the same as those of ChatGPT. Since it is unclear how ChatGPT organizes the function call field, we just concatenate this information into the input as part of the prompt for ToolLLaMA. For the training hyperparameters, we use a learning rate of 5 × 10^-5, a warmup ratio of 4 × 10^-2, a total batch size of 64, a maximum sequence length of 8192, and a position interpolation ratio of 2.
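For reference, a position interpolation ratio of 2 simply rescales the rotary position indices so that the 8192-token window is mapped into the 0-4095 range that the base LLaMA-2 model saw during pre-training. The snippet below is a minimal sketch of this idea and is not the authors' training code.

```python
import torch

def interpolated_rope_angles(seq_len, head_dim, base=10000.0, scale=2.0):
    """Rotary position angles with linear position interpolation (sketch).

    With scale=2.0, positions 0..8191 are squeezed into the 0..4095 range
    used during LLaMA-2 pre-training.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float() / scale    # position interpolation
    angles = torch.outer(positions, inv_freq)             # (seq_len, head_dim // 2)
    return torch.cos(angles), torch.sin(angles)
```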
We train the model for two epochs and select the model checkpoint with the best performance on the development set and then evaluate it on the test set. A.4 DETAILS FOR DFSDT In practice, it is essential to balance effectiveness with costs (the number of OpenAI API calls). Classical DFS algorithms generate multiple child nodes at each step, then sort all the child nodes, and select the highest-scoring node for expansion. After greedily expanding to the terminal node, DFS backtracks to explore nearby nodes, expanding the search space. Throughout the algorithm, the most resource-intensive part is the sorting process of child nodes. If we use an LLM to evaluate two nodes at a time, it requires approximately O(n log n) complexity of OpenAI API calls, where n is the number of child nodes. In fact, we find empirically that in most cases, the nodes ranked highest are often the node generated at first. Therefore, we skip the sorting process of child nodes and choose a pre-order traversal (a variant for DFS) for the tree search.
This design has the following advantages: â ¢ If the model does not retract an action (e.g., for the case of simple instructions), then DFSDT degrades to ReACT, which makes it as efficient as ReACT. 14 Preprint â ¢ After the algorithm finishes, the nodes explored by this method are almost the same as those found by a classical DFS search. Hence, it can also handle complex instructions that only DFS can solve. Overall, this design achieves a similar performance as DFS while significantly reducing costs. It should also be noted that ReACT can be viewed as a degraded version of DFSDT. Therefore, although ToolLLaMA is trained on data created by DFSDT, the model can be used either through ReACT or DFSDT during inference.
A.5 DETAILS FOR TOOLEVAL We adopt two metrics for automatic tool-use capability evaluation: pass rate and win rate. Details for Pass Rate To assess whether a solution path completes the tasks outlined in the original instruction and successfully passes it, we need to first consider the solvability of the instruction. In principle, an instruction can be classified as either (1) solvable: for example, at least one of the provided tools is potentially helpful in solving the original instruction; or (2) unsolvable: for example, all APIs are irrelevant to the instruction or the instruction provides invalid information such as invalid email address. To determine whether a solution path is deemed passed or not, we need to consider whether the instruction is solvable or unsolvable. In our evaluation, three types of labels can be given to each solution path, i.e., Pass, Fail, and Unsure. Specifically, we define different rules as follows: If the instruction is solvable:
1. If the model gives finish type â Finish by Giving Upâ , (a) After trying all the APIs extensively during and receiving no helpful information from APIs, the solution path is deemed a Pass. (b) If the model only calls a few API or receiving valid information from the APIs, the solution path is deemed a Fail. 2. If the model gives finish type â Finish with Final Answerâ , (a) If the APIs provide no valid information, and the model has tried all the APIs to retrieve useful information, but the final answer still does not resolve the original instruction or conveys a refusal (such as â Iâ m sorry, but I canâ t provide you with this, because the tools are unavailableâ
), the solution path is deemed a Pass. (b) If the tools provide valid information, and the final answer does not completely resolve the instruction or is a refusal, the solution path is deemed a Fail. (c) If the final answer completely resolves the original instruction, the solution path is deemed a Pass. (d) If it is unable to determine if the instruction is resolved based on the content of the final answer, the solution path is deemed an Unsure. If the instruction is unsolvable: 1. If the model gives finish type â Finish with Final Answerâ , (a) If the final answer resolves an instruction that was initially considered unresolvable, the solution path is deemed a Pass. (b) If the final answer is a refusal, the solution path is deemed a Pass. (c) If the final answer is hallucinated by the model itself and provides a false positive response (such as â
Iâ ve completed the task, the final answer is *â ), the solution path is deemed a Fail. 2. If the model gives finish type â Finish by Giving Upâ , (a) Under this case, the solution path is deemed a Pass. For every solution path, we instruct the ChatGPT evaluator to generate multiple (â ¥ 4) predictions and perform a majority vote to derive the final pass rate. 15 Preprint Details for Win Rate Since pass rate only measures whether an instruction is completed or not, instead of how well it is completed, we adopt another metric: win rate. It is measured by comparing two solution paths for a given instruction. We assume that a passed candidate is better than a failed candidate and only compare those solution paths that are both â
Passâ , or both â Failedâ annotated by the ChatGPT evaluator. Note that compared with another solution path, one solution path will be annotated with one of the following: win, lose, or tie. We build rules for the evaluatorâ s behavior to decide which solution path is better, and the criteria are listed as follows: 1. Information richness: whether the final answer contains all the necessary information to answer the original instruction. A significantly richer answer is better, while a similar level of richness that is sufficient to answer the question ties. 2. Factuality: whether it accurately describes what has been done, and what failed in the end. A more accurate description in the final answer is better. 3. Reasoning: whether a detailed and accurate reason for failure is provided if the query remains unresolved. A more detailed reason is better. 4.
Milestone: calculating the number of milestones reached during execution. 5. Exploration: whether more potentially useful APIs were attempted during the execution process. The use of a greater number of APIs is better. 6. Cost: Having fewer repeated (redundant) API calls is better if the number of APIs used is the same. For every solution path, we also generate multiple (â ¥ 4) predictions and then perform a majority vote to derive the final win rate. In Table 4, for ease of reading, we split the ratio of tie into two pieces and add them to win and lose, respectively. In Table 6, we report the original numbers as a reference.
Comparing Human Evaluation and ToolEval To validate the reliability of ChatGPT evalua- tor in both pass rate and win rate, we sample among four different methods (ChatGPT+ReACT, ChatGPT+DFSDT, ToolLLaMA+DFSDT and GPT4+DFSDT) to obtain solution pairs for 300 test in- structions for each method. Then we engage humans to annotate the pass rate for ChatGPT+DFSDT, ToolLLaMA+DFSDT and GPT4+DFSDT, and the win rate among ChatGPT+ReACT and Chat- GPT+DFSDT. Our ChatGPT evaluator demonstrates a high agreement of 87.1% in pass rate and 80.3% in win rate with human annotators. This result shows that our evaluator generates highly similar evaluation results to humans and can be viewed as a credible evaluator who simulates human evaluation on pass rate and win rate. It should also be noted that the evaluation for tool learning is far more intricate than traditional tasks such as dialogue.
The reason is that there may exist infinite "correct" solution paths for each instruction. In our initial investigations, we surprisingly found that even human experts often disagree with each other in deciding which solution path is better, leading to a relatively low agreement. For instance, one may prefer a solution path that uses only a few APIs to derive the final answer quickly, while another may prefer a solution path that extensively tries all the APIs to cross-validate specific information. In this regard, we believe there is still a long way to go for a fair evaluation of the tool-use domain, and we believe this work has paved the way for it. We expect more future works to explore this interesting research problem.

A.6 DETAILS FOR EXPERIMENTS ON APIBENCH

When generalizing ToolLLaMA to APIBench, no training updates were made to ToolLLaMA. Instead of treating each API in the prompt as a function call, we define one function that represents selecting an API, providing the code for invoking it, and describing the generated output in natural language. We do not consider the zero-shot setting of APIBench, where the prompts do not contain any API descriptions, because the APIs from the three tested domains were never encountered during training.
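One possible shape for that single function, expressed in the ChatGPT function-call schema, is sketched below. The function and field names are illustrative assumptions, not the exact definition used in the experiments.

```python
# Illustrative function schema for APIBench: the model picks one API from the
# prompt, gives the code that invokes it, and describes the expected output.
select_api_function = {
    "name": "select_api",
    "description": "Choose one API from the provided documentation, provide the "
                   "code for invoking it, and describe the generated output.",
    "parameters": {
        "type": "object",
        "properties": {
            "api_name": {"type": "string"},
            "invocation_code": {"type": "string"},
            "output_description": {"type": "string"},
        },
        "required": ["api_name", "invocation_code", "output_description"],
    },
}
```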
16 Preprint Model ChatGPT Method DFSDT I1-Inst. I1-Tool I1-Cat. I2-Inst. I2-Cat. I3-Inst. Average Win 52.5 Tie Win 55.0 16.0 Tie Win 47.5 14.0 Tie Win 19.5 67.0 Tie Win 10.0 58.5 Tie Win 61.0 12.5 Tie Win 56.9 16.0 Tie 14.7 Claude-2 ReACT DFSDT 27.0 34.0 8.0 8.0 24.0 41.0 7.5 6.5 29.5 39.5 8.5 7.5 32.0 32.5 6.0 9.5 28.5 33.5 6.0 0.0 43.0 65.0 9.5 0.0 30.7 40.8 7.5 5.3 Text-Davinci-003 ReACT DFSDT 23.5 35.0 10.0 10.5 28.5 37.5 13.5 12.5 27.0 40.0 8.0 13.5 26.5 36.5 6.5 8.0 25.5 40.0 8.5 6.5 41.0 60.0 8.0 6.0 28.7 41.5 9.1 9.5 GPT4 ReACT DFSDT 52.5 60.5 15.0 14.0 53.5 62.5 10.5 10.5 56.0 58.0 15.0 17.0 59.5 67.0 12.5 12.5 52.5 57.0 15.5 12.5 76.0 80.0 4.0 8.0 58.3 64.2 12.1 12.4 Vicuna Alpaca (ReACT & DFSDT) (ReACT & DFSDT) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ToolLLaMA ReACT DFSDT Retriever 40.0 48.5 58.0 10.0 13.0 8.5 36.5 50.5 54.5 11.0 9.5 9.0 42.0 49.5 51.0 11.0 10.0 8.0 45.5 62.5 64.5 10.5 12.0 8.0 37.5 52.0 56.0 8.5 12.0 9.5 51.0 68.0 71.0 8.0 2.0 4.0 42.1 55.2 59.2 9.8 9.8 7.8
Table 6: Win rate results before merging the tie label. Win rate is calculated by comparing each model with ChatGPT-ReACT. A win rate higher than 50% means the model performs better than ChatGPT-ReACT. Apart from ToolLLaMA-DFSDT-Retriever, all methods use the oracle API retriever (i.e., ground truth API). A.7 PROMPTS FOR INSTRUCTION GENERATION Below we list the detailed prompt for instruction generation, which consists of four parts: task description, in-context learning examples, sampled API list, and other requirements. Task Description of Single-tool Instructions: You will be provided with a tool, its description, all of the toolâ s available API functions, the descriptions of these API functions, and the parameters required for each API function. Your task involves creating 10 varied, innovative, and detailed user queries that employ multiple API functions of a tool. For instance, if the tool â
climate newsâ has three API calls - â get all climate change newsâ , â look up climate todayâ , and â historical climateâ , your query should articulate something akin to: first, determine todayâ s weather, then verify how often it rains in Ohio in September, and finally, find news about climate change to help me understand whether the climate will change anytime soon. This query exemplifies how to utilize all API calls of â climate newsâ . A query that only uses one API call will not be accepted. Additionally, you must incorporate the input parameters required for each API call. To achieve this, generate random information for required parameters such as IP address, location, coordinates, etc.
For instance, donâ t merely say â an addressâ , provide the exact road and district names. Donâ t just mention â a productâ , specify wearables, milk, a blue blanket, a pan, etc. Donâ t refer to â my companyâ , invent a company name instead. The first seven of the ten queries should be very specific. Each single query should combine all API call usages in different ways and include the necessary parameters. Note that you shouldnâ t ask â which API to useâ , rather, simply state your needs that can be addressed by these APIs. You should also avoid asking for the input parameters required by the API call, but instead directly provide the parameter in your query. The final three queries should be complex and lengthy, describing a complicated scenario where all the API calls can be utilized to provide assistance within a single query. You should first think about possible related API combinations, then give your query. Related apis are apis that can be used for a give query; those related apis have to strictly come from the provided api names. For each query, there should be multiple related apis; for different queries, overlap of related apis should be as little as possible. Deliver your response in this format: [Query1: ......, â related apisâ :[api1, api2, api3...],Query2: ......, â related apisâ :[api4, api5, api6...],Query3: ......, â related apisâ :[api1, api7, api9...], ...]
Task Description of Multi-tool Instructions: You will be provided with several tools, tool descriptions, all of each toolâ s available API functions, the descriptions of these API functions, and the parameters required for each API function. Your task involves creating 10 varied, innovative, and detailed user queries that employ API functions of multiple tools. For instance, given three tools â nba newsâ , â cat-factsâ , and â hotelsâ : â nba newsâ has API functions â Get individual NBA source newsâ and â Get all NBA newsâ , â cat-factsâ has API functions â Get all facts about catsâ and â Get a random fact about catsâ , â hotelsâ has API functions â properties/get-details (Deprecated)â , â properties/list (Deprecated)â and â locations/v3/searchâ . Your query should articulate something akin to: â
I want to name my newborn cat after Kobe and host a 17 Preprint party to celebrate its birth. Get me some cat facts and NBA news to gather inspirations for the cat name. Also, find a proper hotel around my house in Houston Downtown for the party.â This query exemplifies how to utilize API calls of all the given tools. A query that uses API calls of only one tool will not be accepted. Additionally, you must incorporate the input parameters required for each API call. To achieve this, generate random information for required parameters such as IP address, location, coordinates, etc.