Dataset schema (column: type, observed range):
doi: string (length 10 to 10)
chunk-id: int64 (0 to 936)
chunk: string (length 401 to 2.02k)
id: string (length 12 to 14)
title: string (length 8 to 162)
summary: string (length 228 to 1.92k)
source: string (length 31 to 31)
authors: string (length 7 to 6.97k)
categories: string (length 5 to 107)
comment: string (length 4 to 398)
journal_ref: string (length 8 to 194)
primary_category: string (length 5 to 17)
published: string (length 8 to 8)
updated: string (length 8 to 8)
references: list
2307.16789
44
Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.
2307.16789#44
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
44
# 4.4 Automatic Correctness Evaluation The performance of both instruction-following and fine-tuned models in a retrieval-augmented generation setup across multiple datasets is reported in Table 3 using several lexical matching and semantic similarity metrics. Unsurprisingly, traditional QA metrics like EM and F1 assign much lower scores to instruction-following models, compared to fine-tuned FiD. The only exception is Flan-T5, which outperforms FiD with a 17.72% gap. However, it should be noted that Flan-T5 is trained on a wide range of QA tasks, including NQ and HotpotQA (Section 3.2).
2307.16877#44
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
45
6.2.2 Accessibility. Educational resources should be accessible to students with a visual impairment. This is typically satisfied with a text-based description of visual media which can be read aloud. However, if a text-based description of the image is provided, then this may either (a) be sufficiently descriptive of the problem that it could be passed directly to an LLM without requiring a student to engage with the prompt construction strategy; or (b) add a further layer of complexity to the inductive reasoning required to determine the problem that is being illustrated by the visualization. For example, Figure 5 is intended to convey that a program should accept 5 numbers and remove the highest and lowest values before calculating the average of the central 3 values. However, a textual description of the image may focus undue attention on the many details that provide context, but which are not directly related to the problem. 6.2.3 Natural language bias. Students for whom English is their native language may, in general, be able to produce prompts in English that are more nuanced in their use of language, and are likely to have greater success in improving partially correct prompts. Students with more limited English language could be disadvantaged
2307.16364#45
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
45
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou. Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn. arXiv preprint arXiv:2306.08640, 2023. Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14953–14962, 2023. Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. arXiv preprint arXiv:2305.11554, 2023.
2307.16789#45
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
45
Although LLM-based evaluation, such as GPT4, is an option, based on our finding in Section 4.3 we consider Recall to get a true estimate of model performance. Using recall, the performance gap between instruction-following and fine-tuned models narrows significantly, with some instruction-following models even outperforming FiD. Notably, GPT-3.5 outperforms the fine-tuned FiD across all three QA tasks – 7.55% gap in NQ, 10.27% in HotpotQA, and 8.24% in TopiOCQA. These results suggest that in retrieval-augmented settings, instruction-following models are equally, or even more capable than fine-tuned generators in generating correct responses w.r.t user information needs. # 5 Faithfulness w.r.t Provided Knowledge
2307.16877#45
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
46
in manipulating the LLM to produce the correct program, even when they understand the problem and the programming solution more effectively than a native English speaker. Instructors who plan to use prompt generation activities as part of formal graded assessment should consider the extent to which English language skills should impact grades in their course. 6.2.4 Prompts and specificity. Creating a prompt that gives a general description of the problem is reasonably straightforward, but as instructors are aware, being precise and complete when describing the requirements for a problem relies on experience and expertise. Students are typically very familiar with following the specifications of a problem, but are often less familiar with the process of specifying desired functionality with precision. For example, our pilot study (see Section 3) revealed that graduate students were frequently not providing sufficient information in their prompt to the model. Similarly, traditional code writing exercises do not encourage students to think about corner cases, because these are typically provided in the problem description (usually carefully worded by an instructor) or shown in test case output. This suggests that explicitly training prompt construction, as we propose, may make a valuable contribution to computing education by focusing more attention on important dispositions, such as being precise and paying attention to detail.
2307.16364#46
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
46
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 9118–9147. PMLR, 2022a. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. ArXiv preprint, abs/2207.05608, 2022b. Kalervo Järvelin and Jaana Kekäläinen. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446, 2002. Qiao Jin, Yifan Yang, Qingyu Chen, and Zhiyong Lu. Genegpt: Augmenting large language models with domain tools for improved access to biomedical information. ArXiv, 2023.
2307.16789#46
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
46
# 5 Faithfulness w.r.t Provided Knowledge As previously noted, instruction-following models often produce verbose responses. Consequently, responses from these models often contain supplementary information which can be hallucinated (Rashkin et al., 2021a; Dziri et al., 2022b; Chiesurin et al., 2023). In this section, we conduct an analysis of the faithfulness of instruction-following models w.r.t knowledge provided as part of the input. We posit that an optimal generator’s response should rely solely on the knowledge relevant to the user information need. Based on this hypothesis, we split our analysis into two parts – 1) faithfulness w.r.t relevant knowledge, where we prompt the instruction-following model with the user question paired with the corresponding gold passage and evaluate the groundedness of the response in the provided knowledge, and 2) faithfulness w.r.t irrelevant knowledge, where we provide a related but irrelevant passage and measure how often the model refuses to answer.
2307.16877#46
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
47
6.2.5 Inappropriate solutions. When solving Prompt Problems, the LLM might produce code which is too advanced relative to the timing of the course, and we may not wish to show this to learners. This could be both negative and positive — it might show students new approaches they have not seen before, but on the other hand it could be confusing and demotivating as students may feel like they should understand the code when they do not. For example, in our classroom evaluation, although most students commented positively on this aspect, we did see some evidence of students being confused by the outputs: “when the question prompt got harder, the code become harder as well and I wasn’t able to understand the code that was being generated”, and “some of the functions used in the latter exercises were new to me and I would not be able to diagnose any code errors within it”. One way of handling this issue could be through tool design, by including in the tool filters for certain programming constructs that should be used for given problems (instructors could define these along with the problems). These filters could either be post-filters (i.e. rejecting a model completion and requesting a new one if it includes concepts that are not desired) or pre-filters (i.e. where the prompt is modified to include which constructs are allowed).
2307.16364#47
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
47
Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. Api-bank: A benchmark for tool-augmented llms. arXiv preprint arXiv:2304.08244, 2023a. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 2023b. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3470–3487, 2022. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. ArXiv preprint, abs/2112.09332, 2021.
2307.16789#47
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
47
In this section, we first describe the automatic faithfulness metrics (§5.1). Next, similar to correctness, we conduct a human evaluation and compute correlations for all metrics, followed by large-scale evaluation of faithfulness w.r.t relevant knowledge (§5.2). Finally, we analyze the capabilities of models to refrain from answering in the presence of irrelevant knowledge (§5.3). # 5.1 Faithfulness Metrics Here we describe the metrics that we use for automatic evaluation in Section 5.2. Given the user question or the conversation history (denoted by H), the gold passage K, and the model response u, the goal is to check if u is grounded in K. We consider both faithfulness and groundedness metrics in the literature for this task.
2307.16877#47
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
48
6.2.6 Problem difficulty. Prompt creation is a new kind of task that we (as a community) have limited experience with, and we have not typically asked students to complete similar tasks. It may be difficult for instructors to have an intuition for how hard it will be for students to construct prompts for various problems. In addition, further thought is needed about when to introduce such tasks into the curriculum. Novices in a typical CS1 course could potentially solve more complex problems earlier than they would otherwise if they had to generate code from scratch. However, it may be useful for students to have some minimal knowledge of programming in order to be able to diagnose problems in code generated by LLMs. # 7 CONCLUSION In this work we present a novel pedagogical approach, known as ‘Prompt Problems’, designed to help students learn how to craft effective prompts for generating code using large language models (LLMs). This is an essential skill in the current era of rapidly advancing AI and automated code generation. Learning effective prompt construction is important as it can help students express detailed specifications, encourage them to think about corner cases and apply computational thinking skills. Indeed, we motivate our work by presenting the findings from a pilot study involving graduate students which revealed struggles in providing sufficient details when writing prompts.
2307.16364#48
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
48
OpenAI. OpenAI: Introducing ChatGPT, 2022. URL https://openai.com/blog/chatgpt. OpenAI. Gpt-4 technical report, 2023. Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. Cheng Qian, Chi Han, Yi R Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. Creator: Disentangling abstract and concrete reasonings of large language models through tool creation. arXiv preprint arXiv:2305.14318, 2023.
2307.16789#48
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
48
K-F1 Knowledge-F1 (denoted K-F1) is a lexical overlap metric that checks for F1 overlap between the tokens of u and K. Although it has been widely used for knowledge-grounded dialogue (Shuster et al., 2021; Dziri et al., 2022a), we argue it is unsuitable for assessing groundedness in information-seeking tasks. In information-seeking, model responses tend to be shorter than the knowledge snippet. Hence, even if the model selects precise information from the knowledge, it is penalized for not utilizing the entire knowledge snippet by K-F1. K-Precision To counter the shortcomings of K-F1, we propose K-Precision – the proportion of tokens in the model response u that are present in K. The intuition behind this is that in information-seeking, grounding u in K is inherently an asymmetric task, i.e., u can be a subset of K but K cannot be a subset of u. K-BertS Following Shuster et al. (2021) and Dziri et al. (2022a), we use BERTScore to measure semantic similarity between K and u based on contextual BERT token embeddings. We refer to this as K-BertS to differentiate it from BertS (Section 4).
2307.16877#48
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
49
We make three primary contributions in this paper. The first is the conceptualization of Prompt Problems as a nascent pedagogical strategy. The second is the design and implementation of a novel tool, Promptly, for delivering Prompt Problems at scale. The third contribution is an empirical evaluation of Promptly in a first-year Python programming course, where we explore student interactions with and perceptions of the tool. Future research should investigate different variations of the approach we have described, including permitting code-editing and dialogue-based interactions, which present both benefits and challenges. It is also essential to explore the right time to introduce students to the concept of prompt-based code generation, and how to integrate these problems in parallel with conventional teaching practices. REFERENCES [1] Joe Michael Allen, Kelly Downey, Kris Miller, Alex Daniel Edgcomb, and Frank Vahid. 2019. Many Small Programs in CS1: Usage Analysis from Multiple Universities. In 2019 ASEE Annual Conference & Exposition. ASEE Conferences, Tampa, Florida, 1–13. https://peer.asee.org/33084.
2307.16364#49
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
49
Yujia Qin, Zihan Cai, Dian Jin, Lan Yan, Shihao Liang, Kunlun Zhu, Yankai Lin, Xu Han, Ning Ding, Huadong Wang, et al. Webcpm: Interactive web search for chinese long-form question answering. arXiv preprint arXiv:2305.06849, 2023a. Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023b. Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019. Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389, 2009.
2307.16789#49
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
49
FaithCritic We use the hallucination critic model by Dziri et al. (2023) to evaluate whether a response entails a given passage (RoBERTa-Large checkpoint: huggingface.co/McGill-NLP/roberta-large-faithcritic). It outputs a score between 0 and 1 indicating how likely a given response is hallucinated. Here, lower scores are indicative of lesser hallucination within a model’s responses, hence, more groundedness. Q2 Q2 (Honovich et al., 2021) is an evaluation metric used to quantify factual consistency between responses and provided passages using automatic question generation, question answering, and natural language inference (NLI) models. LLMCritic Similar to correctness, we investigate prompting LLMs to act as evaluators for groundedness. More specifically, we prompt GPT-3.5 and GPT-4 to annotate whether a given response uses only the knowledge present in the provided passage. The actual prompt is provided in Appendix B (Figure 8). # 5.2 Faithfulness w.r.t Relevant Knowledge In this section, we investigate the faithfulness of model responses when they are provided a passage relevant to the user query. We first conduct human
2307.16877#49
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
50
[2] Hannah McLean Babe, Sydney Nguyen, Yangtian Zi, Arjun Guha, Molly Q Feldman, and Carolyn Jane Anderson. 2023. StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code. arXiv:2306.04556 [cs.LG] [3] Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. 2023. Programming Is Hard - Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 500–506. https://doi.org/10.1145/3545945.3569759
2307.16364#50
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
50
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. ArXiv preprint, abs/2302.04761, 2023. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface, 2023. Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023. Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li. Restgpt: Connecting large language models with real-world applications via restful apis. arXiv preprint arXiv:2306.06624, 2023.
2307.16789#50
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
50
3 RoBERTa-Large checkpoint: huggingface.co/McGill-NLP/roberta-large-faithcritic Table 4: Correlation of evaluation metrics of faithfulness with human judgments, reported as (Spearman, Kendall): K-F1 (-2.67, -2.074); K-Precision (46.482, 41.536); K-Recall (-4.258, -3.388); K-BertS (F1) (3.583, 3.009); K-BertS (Precision) (19.721, 16.07); K-BertS (Recall) (-10.3, -8.22); FaithCritic (11.741, 9.528); Q2 (F1) (27.883, 23.932); Q2 (NLI) (27.524, 24.228); LLMCritic (GPT-3.5) (27.189, 26.789); LLMCritic (GPT-4) (50.485, 49.742). LLMCritic (GPT-4) is most correlated with human judgements. K-Precision is a close second. evaluation on a subset of samples, and use it to compare several evaluation metrics. Finally, we present the results of large-scale automatic evaluation of instruction-following models.
2307.16877#50
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
51
[4] Bruno Pereira Cipriano and Pedro Alves. 2023. GPT-3 vs Object Oriented Programming Assignments: An Experience Report. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 61–67. https://doi.org/10.1145/3587102.3588814 [5] Paul Denny, Brett A. Becker, Juho Leinonen, and James Prather. 2023. Chat Overflow: Artificially Intelligent Models for Computing Education - RenAIssance or ApocAIypse?. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 3–4. https://doi.org/10.1145/3587102.3588773
2307.16364#51
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
51
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun. Toolalpaca: Generalized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301, 2023. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
2307.16789#51
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
51
evaluation on a subset of samples, and use it to compare several evaluation metrics. Finally, we present the results of large-scale automatic evaluation of instruction-following models. We conduct experiments on all three information-seeking tasks. For HotpotQA and TopiOCQA, the gold passage(s) for each query is provided as part of the dataset. For NQ, we follow Karpukhin et al. (2020) and provide each question and reference answer as a query to BM25 and take the first ranked passage as the gold passage. For all instruction-following models, we use the prompt provided in Section 3.
2307.16877#51
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
52
[6] Paul Denny, Viraj Kumar, and Nasser Giacaman. 2023. Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 1136–1142. https://doi.org/10.1145/3545945.3569823 [7] Paul Denny, Andrew Luxton-Reilly, Ewan Tempero, and Jacob Hendrickx. 2011. CodeWrite: Supporting Student-Driven Practice of Java. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (Dallas, TX, USA) (SIGCSE ’11). Association for Computing Machinery, New York, NY, USA, 471–476. https://doi.org/10.1145/1953163.1953299
2307.16364#52
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
52
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b. Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor. Chatgpt for robotics: Design principles and model abilities. Technical Report MSR-TR-2023-8, Microsoft, February 2023. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
2307.16789#52
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
52
Human Evaluation For each example, we provide annotators with a question (or the conversation history), response, and retrieved passages and task them with determining whether the response is grounded in the provided passages. We allow annotators to provide two labels – 1) to determine if the provided passage is actually a relevant passage to the user’s query, and 2) to determine if the model response is “completely,” “partially,” or “not” found in the presented passages. The model response is given a score of 1.0 if the label is “completely,” 0.5 for “partially” and 0 for “not.” We collect two annotations for each example and resolve all conflicting annotations by collecting a third annotation and taking the majority vote. We randomly sample 50 examples from Natural Questions, HotpotQA, and TopiOCQA for evaluation. We first filter out annotations for which the passage is not relevant to the query. This resulted in 39 samples for NQ, 47 for HotpotQA, and 49 for TopiOCQA. The high number of non-relevant passages for NQ is probably due to heuristic matching of gold
2307.16877#52
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
53
[8] Paul Denny, James Prather, Brett A. Becker, Zachary Albrecht, Dastyni Loksa, and Raymond Pettit. 2019. A Closer Look at Metacognitive Scaffolding: Solving Test Cases Before Programming. In Proceedings of the 19th Koli Calling International Conference on Computing Education Research (Koli, Finland) (Koli Calling ’19). Association for Computing Machinery, New York, NY, USA, Article 11, 10 pages. https://doi.org/10.1145/3364510.3366170 [9] Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio San- tos, and Sami Sarsa. 2023. Computing Education in the Era of Generative AI. arXiv:2306.02608 [cs.CY]
2307.16364#53
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
53
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023. Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. ArXiv preprint, abs/2303.04671, 2023. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions, 2023a. Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, and Jian Zhang. On the tool manipulation capability of open-source large language models. arXiv preprint arXiv:2305.16504, 2023b. Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, and Xindong Wu. Chatgpt is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling. arXiv preprint arXiv:2306.11489, 2023.
2307.16789#53
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
53
Dataset: HotpotQA Question: Which 2008 American documentary film, Pond Hockey or I.O.U.S.A., delves into fiscal issues? Knowledge #1: Pond Hockey (film) - [...] The film is an examination of the changing culture of pond hockey. Knowledge #2: I.O.U.S.A. - I.O.U.S.A. is a 2008 American documentary film directed by Patrick Creadon. The film focuses on the shape and impact of the United States national debt [...] and was known as the "Fiscal Wake-Up Tour." Ref. Answer: I.O.U.S.A. Response (Alpaca): Pond Hockey delves into fiscal issues. I.O.U.S.A. focuses on the shape and impact of the United States national debt. Scores: K-F1: 15.91, K-Precision: 82.35, K-BertS (F1): 84.64, Q2 (F1): 20.0, Q2 (NLI): 20.0, FaithCritic: 0.53, LLMCritic(GPT-3.5): 0.0, LLMCritic(GPT-4): 0.0
2307.16877#53
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
54
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. ArXiv preprint, abs/2210.03629, 2022. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023. Yining Ye, Xin Cong, Yujia Qin, Yankai Lin, Zhiyuan Liu, and Maosong Sun. Large language model as autonomous decision maker. arXiv preprint arXiv:2308.12519, 2023. Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, and Chao Zhang. Toolqa: A dataset for llm question answering with external tools. arXiv preprint arXiv:2306.13304, 2023. APPENDIX A IMPLEMENTATION DETAILS A.1 DETAILS FOR FILTERING RAPIDAPI
2307.16789#54
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
54
Dataset: NQ Question: When did they replace lead with graphite in pencils? Knowledge: Pencil - [...] many people have the misconception that the graphite in the pencil is lead, [...] even though it never contained the element lead. [...] Ref. Answer: never contained the element lead Response (Flan-T5): 1835 Scores: K-F1: 0.0, K-Precision: 0.0, K-BertS (F1): 76.22, Q2 (F1): 0.0, Q2 (NLI): 0.0, FaithCritic: 99.96, LLMCritic(GPT-3.5): 100.0, LLMCritic(GPT-4): 0.0
2307.16877#54
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
55
[11] Barbara J. Ericson, Paul Denny, James Prather, Rodrigo Duran, Arto Hellas, Juho Leinonen, Craig S. Miller, Briana B. Morrison, Janice L. Pearce, and Susan H. Rodger. 2022. Parsons Problems and Beyond: Systematic Literature Review and Empirical Study Designs. In Proceedings of the 2022 Working Group Reports on Innovation and Technology in Computer Science Education (Dublin, Ireland) (ITiCSE-WGR ’22). Association for Computing Machinery, New York, NY, USA, 191–234. https://doi.org/10.1145/3571785.3574127 James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, and James Prather. 2022. The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming. In Proceedings of the 24th Australasian Computing Education Conference (Virtual Event, Australia) (ACE ’22). Association for Computing Machinery, New York, NY, USA, 10–19. https://doi.org/10.1145/3511861.3511863 James
2307.16364#55
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
55
APPENDIX A IMPLEMENTATION DETAILS A.1 DETAILS FOR FILTERING RAPIDAPI We perform a rigorous filtering process to ensure that the ultimate tool set of ToolBench is reliable and functional. The filtering process is as follows: (1) initial testing: we begin by testing the basic functionality of each API to ascertain whether they are operational. We discard any APIs that do not meet this basic criterion; (2) example response evaluation: we make API calls to obtain an example response. Then we evaluate their effectiveness by response time and quality. APIs that consistently exhibit a long response time are omitted. Also, we filter out the APIs with low-quality responses, such as HTML source codes or other error messages. A.2 API RESPONSE COMPRESSION When examining the response returned by each API, we discover that some responses may contain redundant information and are too long to be fed into LLMs. This may lead to problems due to the limited context length of LLMs. Therefore, we perform a response compression to reduce the length of API responses while maintaining their critical information.
2307.16789#55
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
55
Dataset: TopiOCQA Question: What are dragonfly flight speeds? Knowledge: Dragonfly (Flight speed) - [...] a maximum speed of 36–54 km/h (22–34 mph) with average cruising speed of about 16 km/h (9.9 mph). Dragonflies can travel at 100 body-lengths per second in forward flight, and three lengths per second backwards Ref. Answer: It can travel at 100 body-lengths per second in forward flight, and three lengths per second backwards. Response (GPT-3.5): Dragonflies like the hawkers have a maximum speed of around 30 miles per hour with an average cruising speed of about 10-15 miles per hour. They can also travel at 100 body-lengths per second in forward flight Scores: K-F1: 54.54, K-Precision: 72.97, K-BertS (F1): 89.48, Q2 (F1): 40.0, Q2 (NLI): 40.0, FaithCritic: 99.55, LLMCritic(GPT-3.5): 100.0, LLMCritic(GPT-4): 100.0 Figure 6: Examples of non-faithful responses alongside relevant metric scores. Text in purple indicates hallucination, while text in teal is grounded in the provided knowledge.
2307.16877#55
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
56
for Computing Machinery, New York, NY, USA, 10–19. https://doi.org/10.1145/3511861.3511863 James Finnie-Ansley, Paul Denny, Andrew Luxton-Reilly, Eddie Antonio Santos, James Prather, and Brett A. Becker. 2023. My AI Wants to Know If This Will Be on the Exam: Testing OpenAI’s Codex on CS2 Programming Exercises. In Proceedings of the 25th Australasian Computing Education Conference (Melbourne, VIC, Australia) (ACE ’23). Association for Computing Machinery, New York, NY, USA, 97–104. https://doi.org/10.1145/3576123.3576134
2307.16364#56
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
56
Since each API has a fixed response format, we use ChatGPT to analyze one response example and remove unimportant keys within the response to reduce its length. The prompt for ChatGPT contains the following information for each API: (1) tool documentation, which includes the tool name, tool description, API name, API description, parameters, and an example API response, giving ChatGPT a hint of the API’s functionality; (2) three in-context learning examples, each containing an original API response and a compressed response schema written by experts. In this way, we obtain response compression strategies for all APIs. During inference, when the API response length exceeds 1024 tokens, we compress the response by removing unimportant information. If the compressed response is still longer than 1024 tokens, we retain only the first 1024 tokens. Through human evaluation, we find that this compression retains the important information contained in the API response and successfully removes the noise. A.3 DETAILS FOR TRAINING TOOLLLAMA
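A minimal sketch of the inference-time compression step described above, under stated assumptions: the per-API unimportant-key lists would come from the ChatGPT-derived schemas, and the whitespace token counter is a stand-in for the real LLM tokenizer.

import json

MAX_TOKENS = 1024

def count_tokens(text: str) -> int:
    # Stand-in for the actual LLM tokenizer used to measure response length.
    return len(text.split())

def compress_response(response: dict, unimportant_keys: set) -> str:
    """Drop keys marked unimportant by the per-API schema, then hard-truncate if needed."""
    def strip(obj):
        if isinstance(obj, dict):
            return {k: strip(v) for k, v in obj.items() if k not in unimportant_keys}
        if isinstance(obj, list):
            return [strip(v) for v in obj]
        return obj

    text = json.dumps(response)
    if count_tokens(text) <= MAX_TOKENS:
        return text                           # short responses pass through unchanged
    text = json.dumps(strip(response))        # first remove unimportant keys
    tokens = text.split()
    if len(tokens) > MAX_TOKENS:
        text = " ".join(tokens[:MAX_TOKENS])  # last resort: keep only the first 1024 tokens
    return text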
2307.16789#56
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
56
Figure 6: Examples of non-faithful responses alongside relevant metric scores. Text in purple indicates hallucination, while text in teal is grounded in the provided knowledge. passage to the question. We consider three models – GPT-3.5, Flan-T5, and Alpaca, resulting in 405 samples. We compute scores from all evaluation metrics on this subset, including LLMCritic (for both GPT-3.5 and GPT-4). These are presented in Table 9 (Appendix D).
2307.16877#56
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
57
[14] Majeed Kazemitabaar, Justin Chow, Carl Ka To Ma, Barbara J. Ericson, David Weintrop, and Tovi Grossman. 2023. Studying the Effect of AI Code Generators on Supporting Novice Learners in Introductory Programming. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 455, 23 pages. https://doi.org/10.1145/3544548.3580919 [15] Hieke Keuning, Johan Jeuring, and Bastiaan Heeren. 2018. A Systematic Literature Review of Automated Feedback Generation for Programming Exercises. ACM Transactions on Computing Education (TOCE) 19, 1 (2018), 1–43.
2307.16364#57
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
57
A.3 DETAILS FOR TRAINING TOOLLLAMA We train the model in a multi-round conversation mode. For the training data format, we keep the input and output the same as those of ChatGPT. Since it is unclear how ChatGPT organizes the function call field, we simply concatenate this information into the input as part of the prompt for ToolLLaMA. For the training hyperparameters, we use a learning rate of 5 × 10^-5, a warmup ratio of 4 × 10^-2, a total batch size of 64, a maximum sequence length of 8192, and a position interpolation ratio of 2. We train the model for two epochs, select the model checkpoint with the best performance on the development set, and then evaluate it on the test set. A.4 DETAILS FOR DFSDT
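A hedged sketch of the reported fine-tuning configuration, gathered into one place for reference; the field names are illustrative and not the authors' actual training script.

from dataclasses import dataclass

@dataclass
class ToolLLaMATrainConfig:
    learning_rate: float = 5e-5
    warmup_ratio: float = 0.04
    total_batch_size: int = 64             # global batch size across devices / gradient accumulation
    max_seq_length: int = 8192
    position_interpolation_ratio: int = 2  # extends LLaMA's context window via position interpolation
    num_epochs: int = 2
    multi_round_conversation: bool = True  # training samples are multi-round conversations

config = ToolLLaMATrainConfig()
print(config)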
2307.16789#57
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
57
In Table 4, we present correlations between different automatic groundedness metrics and human evaluation. We find that LLMCritic based on GPT-4 correlates the most with human evaluation.

Dataset    Model     K-F1 ↑   K-Precision ↑   K-BertS (F1) ↑   Q2 (F1) ↑   Q2 (NLI) ↑   FaithCritic ↓
NQ         GPT-3.5   19.66    65.78           85.34            38.17       43.07        19.37
           Flan-T5    5.84    94.04           80.9             36.54       38.27        82.42
           Alpaca    13.29    70.44           83.40            30.18       33.46        69.92
           Llama-2   20.42    70.9            84.94            –           –            32.37
HotpotQA   GPT-3.5   16.61    81.19           84.18            49.32       56.07        38.95
           Flan-T5    3.26    92.12           78.57            36.03       37.97        64.31
           Alpaca     9.55    87.03           82.68            43.51       49.05        50.32
           Llama-2   17.7     76.9            83.65            –           –            38.53
TopiOCQA   GPT-3.5   26.82
           Flan-T5
           Alpaca
           Llama-2
2307.16877#57
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
58
[16] Sam Lau and Philip J Guo. 2023. From “Ban It Till We Understand It” to “Resistance is Futile”: How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools such as ChatGPT and GitHub Copilot. ACM ICER 2023 to appear. https://pg.ucsd.edu/publications/cs-instructors-adapting-to-chatgpt-copilot-ai-tools_ICER-2023.pdf Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, and Arto Hellas. 2023. Comparing Code Explanations Created by Students and Large Language Models. arXiv:2304.03938 [cs.CY] Juho Leinonen, Arto Hellas, Sami Sarsa, Brent Reeves, Paul Denny, James Prather, and Brett A. Becker. 2023. Using Large Language Models to Enhance Programming Error Messages. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing
2307.16364#58
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
58
A.4 DETAILS FOR DFSDT In practice, it is essential to balance effectiveness with cost (the number of OpenAI API calls). Classical DFS algorithms generate multiple child nodes at each step, sort all the child nodes, and select the highest-scoring node for expansion. After greedily expanding to a terminal node, DFS backtracks to explore nearby nodes, expanding the search space. Throughout the algorithm, the most resource-intensive part is sorting the child nodes: if we use an LLM to compare two nodes at a time, it requires approximately O(n log n) OpenAI API calls, where n is the number of child nodes. In fact, we find empirically that in most cases, the highest-ranked node is the one generated first. Therefore, we skip the sorting of child nodes and choose a pre-order traversal (a variant of DFS) for the tree search; a sketch of this traversal follows below. This design has the following advantages: • If the model does not retract an action (e.g., for simple instructions), DFSDT degrades to ReACT, which makes it as efficient as ReACT. • After the algorithm finishes, the nodes explored by this method are almost the same as those found by a classical DFS search. Hence, it can also handle complex instructions that only DFS can solve.
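A minimal sketch of the pre-order-traversal search described above, under stated assumptions: expand (an LLM call proposing child actions), is_terminal, and is_solution are hypothetical placeholders for the real ToolLLM components, not the authors' implementation.

def dfsdt_preorder(node, expand, is_terminal, is_solution, max_children=3):
    """Pre-order DFS: children are visited in generation order, with no LLM-based sorting."""
    if is_terminal(node):
        return node if is_solution(node) else None
    for child in expand(node, max_children):      # children kept in the order they were generated
        result = dfsdt_preorder(child, expand, is_terminal, is_solution, max_children)
        if result is not None:                    # first successful leaf wins; behaves like ReACT
            return result
    return None                                   # all children failed: backtrack to the parent

If expand always yields a single child that never fails, the traversal reduces to a linear ReACT-style rollout, which is why the cost matches ReACT on simple instructions.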
2307.16789#58
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16789
59
Overall, this design achieves similar performance to DFS while significantly reducing costs. It should also be noted that ReACT can be viewed as a degraded version of DFSDT. Therefore, although ToolLLaMA is trained on data created by DFSDT, the model can be used either through ReACT or DFSDT during inference. A.5 DETAILS FOR TOOLEVAL We adopt two metrics for automatic tool-use capability evaluation: pass rate and win rate. Details for Pass Rate To assess whether a solution path completes the task outlined in the original instruction (i.e., whether it passes), we first need to consider the solvability of the instruction. In principle, an instruction can be classified as either (1) solvable: for example, at least one of the provided tools is potentially helpful in solving the original instruction; or (2) unsolvable: for example, all APIs are irrelevant to the instruction, or the instruction provides invalid information such as an invalid email address. To determine whether a solution path is deemed passed or not, we need to consider whether the instruction is solvable or unsolvable. In our evaluation, three types of labels can be given to each solution path, i.e., Pass, Fail, and Unsure. Specifically, we define the rules as follows: If the instruction is solvable: 1. If the model gives finish type “Finish by Giving Up”,
2307.16789#59
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
59
Table 5: Results for faithfulness w.r.t relevant knowledge. We report both token-based and model-based metrics. For all metrics except FaithCritic, higher scores indicate greater response groundedness. K-Precision, the token-overlap based metric that is invariant to the length of the knowledge snippet, comes in a close second, better than other model-based faithfulness metrics like K-BertS, FaithCritic, and Q2. This indicates that models trained to detect hallucinations in knowledge-grounded dialogues do not generalize well to information-seeking QA tasks. We present some examples of model hallucinations in Figure 6, along with associated scores of evaluation metrics. Automatic Evaluation In Table 5, we present the results for faithfulness w.r.t relevant knowledge on NQ, HotpotQA, and TopiOCQA. Traditional faithfulness metrics such as K-F1, K-BertS, and FaithCritic rank either Llama-2 or GPT-3.5 as the most faithful model for all three tasks.
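For illustration, a minimal sketch of the token-overlap faithfulness metrics discussed above (K-Precision and K-F1); whitespace tokenization and lowercasing are assumptions rather than the paper's exact normalization.

from collections import Counter

def _num_overlap(response_tokens, knowledge_tokens):
    # Count tokens shared between response and knowledge (with multiplicity).
    common = Counter(response_tokens) & Counter(knowledge_tokens)
    return sum(common.values())

def k_precision(response: str, knowledge: str) -> float:
    r, k = response.lower().split(), knowledge.lower().split()
    if not r:
        return 0.0
    return _num_overlap(r, k) / len(r)   # fraction of response tokens found in the knowledge

def k_f1(response: str, knowledge: str) -> float:
    r, k = response.lower().split(), knowledge.lower().split()
    n = _num_overlap(r, k)
    if n == 0:
        return 0.0
    precision, recall = n / len(r), n / len(k)
    return 2 * precision * recall / (precision + recall)

Under this definition, short responses receive low K-F1 because the recall term divides by the length of the full knowledge snippet, which is consistent with the length-mismatch issue the authors describe for Flan-T5.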
2307.16877#59
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
60
If the instruction is solvable: 1. If the model gives finish type “Finish by Giving Up”, (a) If, after trying all the APIs extensively, the model receives no helpful information from the APIs, the solution path is deemed a Pass. (b) If the model only calls a few APIs, or receives valid information from the APIs, the solution path is deemed a Fail. 2. If the model gives finish type “Finish with Final Answer”, (a) If the APIs provide no valid information, and the model has tried all the APIs to retrieve useful information, but the final answer still does not resolve the original instruction or conveys a refusal (such as “I’m sorry, but I can’t provide you with this, because the tools are unavailable”), the solution path is deemed a Pass. (b) If the tools provide valid information, and the final answer does not completely resolve the instruction or is a refusal, the solution path is deemed a Fail. (c) If the final answer completely resolves the original instruction, the solution path is deemed a Pass. (d) If it cannot be determined from the content of the final answer whether the instruction is resolved, the solution path is deemed an Unsure. If the instruction is unsolvable: 1. If the model gives finish type “Finish with Final Answer”,
2307.16789#60
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
60
On the other hand, K-Precision, the metric most correlated with human judgments, shows a completely different trend. GPT-3.5 is the least faithful for NQ, while Llama-2 is the least faithful for HotpotQA and TopiOCQA. K-Precision ranks Flan-T5 as the most faithful instruction-following model for all three tasks. We hypothesize that K-F1 faces a similar issue as F1 in correctness evaluation – there is a length mismatch between the model response and the provided knowledge snippet. Our preliminary examination of model responses reveals that Flan-T5 responses are generally short, which is probably why K-F1 assigns it a low score. These findings further highlight that verbose responses from instruction-following models are often not grounded in the provided passages. For example, in Figure 6, GPT-3.5 hallucinates by outputting numbers that are completely different from what was provided, whereas Alpaca fails to reason properly based on the provided passages. # 5.3 Faithfulness w.r.t Irrelevant Knowledge
2307.16877#60
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
61
[20] Andrew Luxton-Reilly, Paul Denny, Diana Kirk, Ewan Tempero, and Se-Young Yu. 2013. On the Differences between Correct Student Solutions. In Proceedings of the 18th ACM Conference on Innovation and Technology in Computer Science Education (Canterbury, England, UK) (ITiCSE ’13). Association for Computing Machinery, New York, NY, USA, 177–182. https://doi.org/10.1145/2462476.2462505 [21] Stephen MacNeil, Joanne Kim, Juho Leinonen, Paul Denny, Seth Bernstein, Brett A. Becker, Michel Wermelinger, Arto Hellas, Andrew Tran, Sami Sarsa, James Prather, and Viraj Kumar. 2023. The Implications of Large Language Models for CS Teachers and Students. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 1255. https://doi.org/10.1145/3545947.3573358
2307.16364#61
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
61
If the instruction is unsolvable: 1. If the model gives finish type “Finish with Final Answer”, (a) If the final answer resolves an instruction that was initially considered unresolvable, the solution path is deemed a Pass. (b) If the final answer is a refusal, the solution path is deemed a Pass. (c) If the final answer is hallucinated by the model itself and provides a false positive response (such as “I’ve completed the task, the final answer is *”), the solution path is deemed a Fail. 2. If the model gives finish type “Finish by Giving Up”, (a) In this case, the solution path is deemed a Pass. For every solution path, we instruct the ChatGPT evaluator to generate multiple (≥ 4) predictions and perform a majority vote to derive the final pass rate.
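A hedged sketch of the majority-vote aggregation just described; judge_once is a hypothetical stand-in for a single ChatGPT evaluator call that returns "Pass", "Fail", or "Unsure".

from collections import Counter

def vote_label(solution_path, judge_once, n_votes: int = 4) -> str:
    # Query the evaluator several times and keep the majority label.
    votes = Counter(judge_once(solution_path) for _ in range(n_votes))
    return votes.most_common(1)[0][0]

def pass_rate(solution_paths, judge_once, n_votes: int = 4) -> float:
    labels = [vote_label(p, judge_once, n_votes) for p in solution_paths]
    return sum(label == "Pass" for label in labels) / len(labels)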
2307.16789#61
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
61
# 5.3 Faithfulness w.r.t Irrelevant Knowledge In the retrieval-augmented setting, an ideal model should comprehend passage contents and avoid answering if the passage lacks relevant information. To test this, we provide the models with an irrelevant passage by selecting the 1001st-ranked passage from the list of retrieved passages. Prompt Setup Our preliminary experiments demonstrated that without an explicit instruction, Flan-T5 and Alpaca did not refrain from answering at all. Hence, we modified the prompt to make this behavior more explicit and instructed the model to output I don’t know if the passage is deemed irrelevant, as demonstrated in Figure 9 (Appendix B). We report the proportion of model responses that contain I don’t know and other observed synonymous expressions.4 Note that for these experiments, we only investigate whether a model refused to answer. We do not verify the correctness of any generated responses. Moreover, to measure the impact of this new instruction, we also experiment with providing the gold passage and report the proportion of model responses that do not contain I don’t know and other synonymous expressions.
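For illustration, a minimal sketch of the refusal-rate measurement described above; the phrase list is an assumption (footnote 4 mentions "UNANSWERABLE" and "...passages do not contain..." as observed synonymous expressions).

REFUSAL_PHRASES = ("i don't know", "unanswerable", "passages do not contain")

def is_refusal(response: str) -> bool:
    # Normalize curly apostrophes before matching phrases such as "I don't know".
    text = response.lower().replace("\u2019", "'")
    return any(phrase in text for phrase in REFUSAL_PHRASES)

def refusal_rate(responses) -> float:
    # Proportion of responses in which the model declines to answer.
    responses = list(responses)
    return sum(is_refusal(r) for r in responses) / len(responses)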
2307.16877#61
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
62
[22] Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 2023. Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 931–937. https://doi.org/10.1145/3545945.3569785 [23] Kamil Malinka, Martin Peresíni, Anton Firc, Ondrej Hujnák, and Filip Janus. 2023. On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree?. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 47–53. https://doi.org/10.1145/3587102.3588827
2307.16364#62
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
62
For every solution path, we instruct the ChatGPT evaluator to generate multiple (≥ 4) predictions and perform a majority vote to derive the final pass rate. Details for Win Rate Since pass rate only measures whether an instruction is completed, not how well it is completed, we adopt another metric: win rate. It is measured by comparing two solution paths for a given instruction. We assume that a passed candidate is better than a failed candidate, and only compare solution paths that are both annotated “Pass” or both annotated “Fail” by the ChatGPT evaluator. Note that, when compared with another solution path, a solution path is annotated with one of the following: win, lose, or tie. We build rules for the evaluator’s behavior to decide which solution path is better, and the criteria are listed as follows: 1. Information richness: whether the final answer contains all the necessary information to answer the original instruction. A significantly richer answer is better, while a similar level of richness that is sufficient to answer the question ties. 2. Factuality: whether it accurately describes what has been done, and what failed in the end. A more accurate description in the final answer is better. 3. Reasoning: whether a detailed and accurate reason for failure is provided if the query remains unresolved. A more detailed reason is better.
2307.16789#62
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
62
Results We present our results in Table 6. We find that when provided with an irrelevant passage, Llama-2 most often refuses to answer on open-domain and multi-hop QA datasets (more than 99% in NQ and HotpotQA). GPT-3.5 performs the best for TopiOCQA, refraining from answering on 88.15% of turns. However, for both of these models, the inclination to not answer also extends to when the gold passage is actually present. In comparison, Flan-T5 is well balanced on datasets it was exposed to

4 “UNANSWERABLE”, “..passages do not contain..”

Dataset    Model     Incorrect Psg. ↑   Gold Psg. ↓
NQ         GPT-3.5   98.5               48.01
           Flan-T5   91.99              24.76
           Alpaca     0.06               0.00
           Llama-2   99.34              75.84
HotpotQA   GPT-3.5   98.54              26.39
           Flan-T5   77.14               1.58
           Alpaca     0.09               0.11
           Llama-2   99.16              76.96
TopiOCQA   GPT-3.5   88.15              32.42
           Flan-T5   40.77               7.68
           Alpaca     1.27               0.80
           Llama-2   87.59              61.77
2307.16877#62
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
63
//doi.org/10.1145/3587102.3588827
[24] Steven Moore, Richard Tong, Anjali Singh, Zitao Liu, Xiangen Hu, Yu Lu, Joleen Liang, Chen Cao, Hassan Khosravi, Paul Denny, Chris Brooks, and John Stamper. 2023. Empowering Education with LLMs - The Next-Gen Interface and Content Generation. In International Conference on Artificial Intelligence in Education. Springer, 32–37. https://doi.org/10.1007/978-3-031-36336-8_4
[25] Yulia Pechorina, Keith Anderson, and Paul Denny. 2023. Metacodenition: Scaffolding the Problem-Solving Process for Novice Programmers. In Proceedings of the 25th Australasian Computing Education Conference (Melbourne, VIC, Australia) (ACE ’23). Association for Computing Machinery, New York, NY, USA, 59–68. https://doi.org/10.1145/3576123.3576130
[26] Leo Porter and Daniel Zingaro. 2023. Learn AI-Assisted Python Programming:
2307.16364#63
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
63
3. Reasoning: whether a detailed and accurate reason for failure is provided if the query remains unresolved. A more detailed reason is better.
4. Milestone: calculating the number of milestones reached during execution.
5. Exploration: whether more potentially useful APIs were attempted during the execution process. The use of a greater number of APIs is better.
6. Cost: having fewer repeated (redundant) API calls is better if the number of APIs used is the same.
For every solution path, we also generate multiple (≥ 4) predictions and then perform a majority vote to derive the final win rate. In Table 4, for ease of reading, we split the ratio of tie into two pieces and add them to win and lose, respectively. In Table 6, we report the original numbers as a reference.
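A minimal sketch of the majority-vote step described above: each solution pair receives at least four evaluator predictions, the most frequent label becomes the final label, and the reported rates are the fractions of pairs with each final label. The label strings and helper names are assumptions, not the released ToolEval code.

from collections import Counter

def majority_label(predictions):
    """predictions: e.g. ["win", "win", "tie", "lose"] for one solution pair."""
    return Counter(predictions).most_common(1)[0][0]

def label_rates(per_pair_predictions):
    """Fraction of solution pairs whose majority label is win / tie / lose."""
    finals = [majority_label(p) for p in per_pair_predictions]
    total = len(finals)
    return {label: finals.count(label) / total for label in ("win", "tie", "lose")}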
2307.16789#63
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
63
Table 6: Percentage of model responses that contain I don’t know and other synonymous expressions when provided with an incorrect passage (higher is better) or the gold passage (lower is better).
during training, however, it remains overconfident on TopiOCQA, which was not included in the training. Alpaca adheres the least to the instruction and answers even if the passage is not relevant to the information need of the user. Appendix E demonstrates some failure examples of these models in both scenarios. Further research is required to optimally design and prompt models to better identify when to answer and when not to answer.
# 6 Discussion and Limitations
Below, we highlight several key findings of this paper and discuss some of its limitations.
2307.16877#63
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
64
[26] Leo Porter and Daniel Zingaro. 2023. Learn AI-Assisted Python Programming: With Github Copilot and ChatGPT. Manning, Shelter Island, NY.
[27] James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos. 2023. “It’s Weird That it Knows What I Want”: Usability and Interactions with Copilot for Novice Programmers. arXiv:2304.02491 [cs.HC]
[28] Brent Reeves, Sami Sarsa, James Prather, Paul Denny, Brett A. Becker, Arto Hellas, Bailey Kimmel, Garrett Powell, and Juho Leinonen. 2023. Evaluating the Performance of Code Generation Models for Solving Parsons Problems With Small Prompt Variations. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 299–305. https://doi.org/10.1145/3587102.3588805
2307.16364#64
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
64
Comparing Human Evaluation and ToolEval To validate the reliability of the ChatGPT evaluator in both pass rate and win rate, we sample among four different methods (ChatGPT+ReACT, ChatGPT+DFSDT, ToolLLaMA+DFSDT and GPT4+DFSDT) to obtain solution pairs for 300 test instructions for each method. Then we engage humans to annotate the pass rate for ChatGPT+DFSDT, ToolLLaMA+DFSDT and GPT4+DFSDT, and the win rate between ChatGPT+ReACT and ChatGPT+DFSDT. Our ChatGPT evaluator demonstrates a high agreement of 87.1% in pass rate and 80.3% in win rate with human annotators. This result shows that our evaluator generates highly similar evaluation results to humans and can be viewed as a credible evaluator that simulates human evaluation on pass rate and win rate.
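The agreement figures above reduce to simple label-match percentages between the automatic evaluator and the human annotators. A minimal sketch under that reading; the variable and function names are illustrative assumptions.

def agreement(evaluator_labels, human_labels):
    """Percentage of items where the automatic evaluator and humans agree."""
    assert len(evaluator_labels) == len(human_labels)
    matches = sum(e == h for e, h in zip(evaluator_labels, human_labels))
    return 100.0 * matches / len(evaluator_labels)

# e.g. agreement on pass-rate labels ~ 87.1 and on win-rate labels ~ 80.3
# in the human study described above.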
2307.16789#64
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
64
# 6 Discussion and Limitations
Below, we highlight several key findings of this paper and discuss some of its limitations.
Which Evaluation Metrics are Best? Our analysis on correctness (§4) and faithfulness (§5) demonstrates that widely-used metrics are not suitable for evaluating the correctness (due to errors such as elaborate answers, open-ended questions, and lists of named entities) and faithfulness (due to partially grounded responses). Correlating the metrics with human judgements (Table 2 and Table 5) reveals that Recall and GPT4-Eval are the best lexical and model-based metrics for correctness, and K-Precision and LLMCritic (GPT-4) are the best lexical and model-based metrics for faithfulness, respectively. However, these model-based metrics, especially the ones based on LLMs, are usually slow to run, expensive, difficult to reproduce, and may exhibit systematic biases. While we propose that Recall and K-Precision are the most widely-accessible and human-aligned metrics for correctness and faithfulness, respectively, we emphasize that these simple lexical-based metrics are easy to hack. One model can copy all the retrieved knowledge as the output, leading to high Recall and K-Precision metrics. However, such a model will be penalized heavily when evaluated for faithfulness w.r.t. irrelevant knowledge.
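A sketch of the two token-overlap metrics discussed above, as such metrics are commonly defined: Recall is the share of gold-answer tokens that appear in the model response (correctness), and K-Precision is the share of response tokens that appear in the provided knowledge (faithfulness). Whitespace tokenization and lowercasing are simplifying assumptions rather than the exact instruct-qa implementation.

def _tokens(text):
    return text.lower().split()

def recall(response, gold_answer):
    """Share of gold-answer tokens found in the model response."""
    gold = _tokens(gold_answer)
    if not gold:
        return 0.0
    resp = set(_tokens(response))
    return sum(tok in resp for tok in gold) / len(gold)

def k_precision(response, knowledge):
    """Share of response tokens found in the provided knowledge passage."""
    resp = _tokens(response)
    if not resp:
        return 0.0
    know = set(_tokens(knowledge))
    return sum(tok in know for tok in resp) / len(resp)

# The "easy to hack" caveat above: echoing the whole passage as the response
# yields a perfect K-Precision and usually a high Recall as well.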
2307.16877#64
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
65
[29] Sami Sarsa, Paul Denny, Arto Hellas, and Juho Leinonen. 2022. Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models. In Proceedings of the 2022 ACM Conference on International Computing Education Research - Volume 1 (Lugano and Virtual Event, Switzerland) (ICER ’22). Association for Computing Machinery, New York, NY, USA, 27–43. https://doi.org/10.1145/3501385.3543957
[30] Leonard Tang, Elizabeth Ke, Nikhil Singh, Bo Feng, Derek Austin, Nakul Verma, and Iddo Drori. 2022. Solving Probability And Statistics Problems By Probabilistic Program Synthesis At Human Level And Predicting Solvability. In Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners’ and Doctoral Consortium: 23rd International Conference, AIED 2022, Durham, UK, July 27–31, 2022, Proceedings, Part II (Durham, United Kingdom). Springer-Verlag, Berlin, Heidelberg, 612–615. https://doi.org/10.1007/978-3-031-11647-6_127
2307.16364#65
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
65
It should also be noted that the evaluation for tool learning is far more intricate than traditional tasks such as dialogue. The reason is that there may exist infinite “correct” solution paths for each instruction. In our initial investigations, we surprisingly found that even human experts often disagree with each other in deciding which solution path is better, leading to a relatively low agreement. For instance, one may prefer a solution path that uses only a few APIs to derive the final answer quickly; while another may prefer a solution path that extensively tries all the APIs to cross-validate specific information. In this regard, we believe there is still a long way to go for a fair evaluation of the tool-use domain, and we believe this work has paved the way for it. We expect more future works to explore this interesting research problem. A.6 DETAILS FOR EXPERIMENTS ON APIBENCH When generalizing ToolLLaMA to APIBench, no training updates were made to ToolLLaMA; instead, each API in the prompt is treated as a function call. We define one function that represents selecting an API, providing the code for invoking it, and describing the generated output in natural language. We do not consider the zero-shot setting of APIBench, where the prompts do not contain any API descriptions, because the APIs from the three tested domains were never encountered during training.
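An illustrative sketch of the single function described above for the APIBench setting, written as a function-calling schema the model fills in. The field names and JSON layout are assumptions for illustration; the text does not spell out the schema.

# Assumed function-calling schema (names are hypothetical, not from the paper).
SELECT_API_FUNCTION = {
    "name": "select_api",
    "description": (
        "Select one API from the documented candidates, provide the code "
        "that invokes it, and describe the generated output in natural language."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "api_name": {"type": "string", "description": "Name of the selected API."},
            "invocation_code": {"type": "string", "description": "Code snippet that invokes the API."},
            "output_description": {"type": "string", "description": "Natural-language description of the generated output."},
        },
        "required": ["api_name", "invocation_code", "output_description"],
    },
}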
2307.16789#65
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
65
Instruction-Following Models According to the most human-aligned and easy-to-use metrics (i.e., Recall and K-Precision), we conclude that GPT-3.5 outperforms other models on the majority of the datasets in correctness w.r.t. information need. However, when analyzing the faithfulness w.r.t. relevant knowledge, Flan-T5 is shown to be the best model in all three datasets. Moreover, our further analysis on the models’ faithfulness w.r.t. irrelevant knowledge demonstrates that models struggle to correctly identify whether the provided knowledge is relevant or not.
Limitations It is worth mentioning that the experiments for evaluating the faithfulness of the models are conducted in a modified setting, where a relevant or irrelevant passage is provided in the prompt on purpose. This is different from the real-world scenario, where the retrieved passages can contain a mix of relevant and irrelevant knowledge. Finally, it should also be noted that beyond qualitative investigation, we did not explore a wide range of prompts for the tasks studied in this work. Recent work has shown that the performance of instruction-following models can vary greatly depending upon the provided prompt (Zhao et al., 2021; Liu et al., 2023b). We leave it to future works to investigate better prompts for instruction-following models in a retrieval-augmented setting.
# 7 Conclusion
2307.16877#65
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16364
66
[31] Matti Tedre and Henriikka Vartiainen. 2023. K-12 Computing Education for the AI Era: From Data Literacy to Data Agency. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 1–2. https://doi.org/10.1145/3587102.3593796
[32] Michel Wermelinger. 2023. Using GitHub Copilot to Solve Simple Programming Problems. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 172–178. https://doi.org/10.1145/3545945.3569830
[33] Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. 2023. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv:2302.11382 [cs.SE]
2307.16364#66
Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators
With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a `Prompt Problem', designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n=54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.
http://arxiv.org/pdf/2307.16364
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
cs.HC, cs.AI
null
null
cs.HC
20230731
20230731
[ { "id": "2306.04556" }, { "id": "2302.11382" }, { "id": "2304.02491" }, { "id": "2306.02608" }, { "id": "2304.03938" } ]
2307.16789
66
Model             Method   I1-Inst.     I1-Tool      I1-Cat.      I2-Inst.     I2-Cat.      I3-Inst.     Average
                           Win   Tie    Win   Tie    Win   Tie    Win   Tie    Win   Tie    Win   Tie    Win   Tie
ChatGPT           DFSDT    52.5  16.0   55.0  14.0   47.5  19.5   67.0  10.0   58.5  12.5   61.0  16.0   56.9  14.7
Claude-2          ReACT    27.0   8.0   24.0   7.5   29.5   8.5   32.0   6.0   28.5   6.0   43.0   9.5   30.7   7.5
Claude-2          DFSDT    34.0   8.0   41.0   6.5   39.5   7.5   32.5   9.5   33.5   0.0   65.0   0.0   40.8   5.3
Text-Davinci-003  ReACT    23.5  10.0   28.5  13.5   27.0   8.0   26.5   6.5   25.5   8.5   41.0   8.0   28.7   9.1
Text-Davinci-003  DFSDT    35.0  10.5   37.5  12.5   40.0  13.5   36.5   8.0   40.0   6.5   60.0   6.0   41.5   9.5
GPT4              ReACT
GPT4              DFSDT
2307.16789#66
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
66
# 7 Conclusion
We extensively study the capability of instruction-following models to correctly and faithfully respond to questions in three QA settings (natural, multi-hop, and conversational). First, we uncover various issues with using traditional metrics, like F1 score, to evaluate the correctness of models. Through correlation with human judgement, we find that LLM-based metrics (e.g. GPT-4) and token-level Recall are promising metrics for evaluating the correctness w.r.t. information need. Moreover, our further faithfulness analysis shows that LLM-based metrics like LLMCritic (GPT-4) and lexical-based K-Precision are more aligned with human judgements in evaluating the faithfulness of the models given the relevant knowledge. Overall, we find that GPT-3.5 is better at providing correct responses for all tasks, whereas Flan-T5 comes out on top for faithfulness. However, all models struggle to accurately respond with “I don’t know” given an irrelevant passage when explicitly instructed to do so. While Recall and K-Precision are the most human-judgement-aligned and widely-accessible alternative metrics, they are easy to hack. Therefore, we encourage the community to come up with more reliable metrics.
# References
2307.16877#66
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
67
8.5 6.5 41.0 60.0 8.0 6.0 28.7 41.5 9.1 9.5
GPT4              ReACT      52.5  15.0   53.5  10.5   56.0  15.0   59.5  12.5   52.5  15.5   76.0   4.0   58.3  12.1
GPT4              DFSDT      60.5  14.0   62.5  10.5   58.0  17.0   67.0  12.5   57.0  12.5   80.0   8.0   64.2  12.4
Vicuna   (ReACT & DFSDT)      0.0   0.0    0.0   0.0    0.0   0.0    0.0   0.0    0.0   0.0    0.0   0.0    0.0   0.0
Alpaca   (ReACT & DFSDT)      0.0   0.0    0.0   0.0    0.0   0.0    0.0   0.0    0.0   0.0    0.0   0.0    0.0   0.0
ToolLLaMA         ReACT      40.0  10.0   36.5  11.0   42.0  11.0   45.5
ToolLLaMA         DFSDT      48.5  13.0   50.5   9.5   49.5  10.0
ToolLLaMA         Retriever  58.0   8.5   54.5   9.0   51.0   8.0
2307.16789#67
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
67
# References
Vaibhav Adlakha, Shehzaad Dhuliawala, Kaheer Suleman, Harm de Vries, and Siva Reddy. 2022. TopiOCQA: Open-domain conversational question answering with topic switching. Transactions of the Association for Computational Linguistics, 10:468–483.
Raviteja Anantha, Svitlana Vakulenko, Zhucheng Tu, Shayne Longpre, Stephen Pulman, and Srinivas Chappidi. 2021. Open-domain question answering goes conversational via question rewriting. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 520–534, Online.
Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In International Conference on Learning Representations.
2307.16877#67
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16877
68
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.
Petr Baudis and Jan Sedivý. 2015. Modeling of the question answering task in the yodaqa system. In Conference and Labs of the Evaluation Forum.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics.
Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4758–4781, Online. Association for Computational Linguistics.
2307.16877#68
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
69
Table 6: Win rate results before merging the tie label. Win rate is calculated by comparing each model with ChatGPT-ReACT. A win rate higher than 50% means the model performs better than ChatGPT-ReACT. Apart from ToolLLaMA-DFSDT-Retriever, all methods use the oracle API retriever (i.e., ground truth API). A.7 PROMPTS FOR INSTRUCTION GENERATION Below we list the detailed prompt for instruction generation, which consists of four parts: task description, in-context learning examples, sampled API list, and other requirements.
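A minimal sketch of how the four prompt parts named above (task description, in-context learning examples, sampled API list, other requirements) could be assembled into one generation prompt; the joining format and helper name are illustrative assumptions, not the released ToolBench code.

def build_instruction_generation_prompt(task_description, in_context_examples,
                                         sampled_apis, other_requirements):
    """Concatenate the four prompt parts into a single ChatGPT prompt."""
    api_lines = "\n".join(
        "- {}: {}".format(api["name"], api["description"]) for api in sampled_apis
    )
    examples = "\n\n".join(in_context_examples)
    return (task_description + "\n\n"
            + "Examples:\n" + examples + "\n\n"
            + "Sampled APIs:\n" + api_lines + "\n\n"
            + other_requirements)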
2307.16789#69
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
69
Samuel R. Bowman. 2023. Eight things to know about large language models.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
Jannis Bulian, Christian Buck, Wojciech Gajewski, Benjamin Börschinger, and Tal Schuster. 2022. Tomayto, tomahto. beyond token-level answer equivalence for question answering evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 291–305, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
2307.16877#69
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
70
Task Description of Single-tool Instructions: You will be provided with a tool, its description, all of the tool’s available API functions, the descriptions of these API functions, and the parameters required for each API function. Your task involves creating 10 varied, innovative, and detailed user queries that employ multiple API functions of a tool. For instance, if the tool ‘climate news’ has three API calls - ‘get all climate change news’, ‘look up climate today’, and ‘historical climate’, your query should articulate something akin to: first, determine today’s weather, then verify how often it rains in Ohio in September, and finally, find news about climate change to help me understand whether the climate will change anytime soon. This query exemplifies how to utilize all API calls of ‘climate news’. A query that only uses one API call will not be accepted. Additionally, you must incorporate the input parameters required for each API call. To achieve this, generate random information for required parameters such as IP address, location, coordinates, etc. For instance, don’t merely say ‘an address’, provide the exact road and district names. Don’t just mention
2307.16789#70
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
70
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Association for Computational Linguistics (ACL). Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607–15631, Toronto, Canada. Association for Computational Linguistics. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.
2307.16877#70
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
71
coordinates, etc. For instance, don’t merely say ‘an address’, provide the exact road and district names. Don’t just mention ‘a product’, specify wearables, milk, a blue blanket, a pan, etc. Don’t refer to ‘my company’, invent a company name instead. The first seven of the ten queries should be very specific. Each single query should combine all API call usages in different ways and include the necessary parameters. Note that you shouldn’t ask ‘which API to use’, rather, simply state your needs that can be addressed by these APIs. You should also avoid asking for the input parameters required by the API call, but instead directly provide the parameter in your query. The final three queries should be complex and lengthy, describing a complicated scenario where all the API calls can be utilized to provide assistance within a single query. You should first think about possible related API combinations, then give your query. Related apis are apis that can be used for a given query; those related apis have to strictly come from the provided api names. For each query, there should be multiple related apis; for different queries, overlap of related apis should be as little as
2307.16789#71
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
71
Sabrina Chiesurin, Dimitris Dimakopoulos, Marco Antonio Sobrevilla Cabezudo, Arash Eshghi, Ioannis Papaioannou, Verena Rieser, and Ioannis Konstas. 2023. The dangers of trusting stochastic parrots: Faithfulness and trust in open-domain conversational question answering. In Findings of the Association for Computational Linguistics: ACL 2023, pages 947–959, Toronto, Canada. Association for Computational Linguistics. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep Reinforcement Learning from Human Preferences. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi
2307.16877#71
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
72
have to strictly come from the provided api names. For each query, there should be multiple related apis; for different queries, overlap of related apis should be as little as possible. Deliver your response in this format: [Query1: ......, ‘related apis’:[api1, api2, api3...],Query2: ......, ‘related apis’:[api4, api5, api6...],Query3: ......, ‘related apis’:[api1, api7, api9...], ...]
2307.16789#72
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
72
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 845–855, Melbourne, Australia. Association for Computational Linguistics.
2307.16877#72
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
73
Task Description of Multi-tool Instructions: You will be provided with several tools, tool descriptions, all of each tool’s available API functions, the descriptions of these API functions, and the parameters required for each API function. Your task involves creating 10 varied, innovative, and detailed user queries that employ API functions of multiple tools. For instance, given three tools ‘nba news’, ‘cat-facts’, and ‘hotels’: ‘nba news’ has API functions ‘Get individual NBA source news’ and ‘Get all NBA news’, ‘cat-facts’ has API functions ‘Get all facts about cats’ and ‘Get a random fact about cats’, ‘hotels’ has API functions ‘properties/get-details (Deprecated)’, ‘properties/list (Deprecated)’ and ‘locations/v3/search’. Your query should articulate something akin to: ‘I want to name my newborn cat after Kobe and host a
2307.16789#73
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
73
Nouha Dziri, Ehsan Kamalloo, Sivan Milton, Osmar Zaiane, Mo Yu, Edoardo M. Ponti, and Siva Reddy. 2022a. FaithDial: A Faithful Benchmark for Information-Seeking Dialogue. Transactions of the Association for Computational Linguistics, 10:1473–1490. Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. 2023. Faith and Fate: Limits of Transformers on Compositionality. ArXiv:2305.18654 [cs]. Nouha Dziri, Andrea Madotto, Osmar Zaïane, and Avishek Joey Bose. 2021. Neural Path Hunter: Reducing Hallucination in Dialogue Systems via Path Grounding. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2197–2214, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
2307.16877#73
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
74
party to celebrate its birth. Get me some cat facts and NBA news to gather inspirations for the cat name. Also, find a proper hotel around my house in Houston Downtown for the party.’ This query exemplifies how to utilize API calls of all the given tools. A query that uses API calls of only one tool will not be accepted. Additionally, you must incorporate the input parameters required for each API call. To achieve this, generate random information for required parameters such as IP address, location, coordinates, etc. For instance, don’t merely say ‘an address’, provide the exact road and district names. Don’t just mention ‘a product’, specify wearables, milk, a blue blanket, a pan, etc. Don’t refer to ‘my company’, invent a company name instead. The first seven of the ten queries should be very specific. Each single query should combine API calls of different tools in various ways and include the necessary parameters. Note that you shouldn’t ask ‘which API to use’, rather, simply state your needs that can be addressed by these APIs. You should also avoid asking for the input parameters required by the API call, but instead directly provide the
2307.16789#74
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
74
Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, and Siva Reddy. 2022b. On the origin of hallucinations in conversational models: Is it the datasets or the models? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5271–5285, Seattle, United States. Association for Computational Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In International Conference on Learning Representations. Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. Q2: Evaluating Factual Consistency in Knowledge-Grounded Dialogues via Question Generation and Question Answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7856–7870, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
2307.16877#74
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
75
rather, simply state your needs that can be addressed by these APIs. You should also avoid asking for the input parameters required by the API call, but instead directly provide the parameters in your query. The final three queries should be complex and lengthy, describing a complicated scenario where all the provided API calls can be utilized to provide assistance within a single query. You should first think about possible related API combinations, then give your query. Related APIs are APIs that can be used for a given query; those related APIs have to strictly come from the provided API names. For each query, there should be multiple related APIs; for different queries, overlap of related APIs should be as little as possible. Deliver your response in this format: [Query1: ......, ‘related apis’:[[tool name, api name], [tool name, api name], [tool name, api name]...],Query2: ......, ‘related apis’:[[tool name, api name], [tool name, api name], [tool name, api name]...],Query3: ......, ‘related apis’:[[tool name, api name], [tool name, api name], [tool name, api name]...], ...]
2307.16789#75
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
75
Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O’Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. 2023. OPT-IML: Scaling language model instruction meta learning through the lens of generalization. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Atlas: Few-shot learning with retrieval augmented language models.
2307.16877#75
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16877
76
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. Ehsan Kamalloo, Nouha Dziri, Charles Clarke, and Davood Rafiei. 2023. Evaluating open-domain question answering in the era of large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5591–5606, Toronto, Canada. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.
2307.16877#76
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
77
For example, with tool ASCII Art, the given api names are ‘figlet’, ‘list figlet styles’, ‘cowsay’, ‘list cowsay styles’, ‘matheq’. Some sample queries and related apis would be: “Query”: “Need to create an ASCII art representation of a mathematical equation. The equation is ‘y = mx + c’, where m and c are constants. Help me generate the ASCII art for this equation. Also please generate an ASCII art representation of the text ‘Newton’s Second Law of Motion’.”, “related apis”: [’figlet’, ‘list figlet styles’, ‘matheq’] “Query”: “Working on a research paper on cows and need to include ASCII art representations of various cows. Can you first retrieve available ASCII art styles for cows? Then, can you generate ASCII art for cows like the Jersey, Holstein, and Guernsey? Finally, I want the cow to say ‘Moo!’ in the ASCII art.”, “related apis”:
2307.16789#77
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
77
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466. Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy.
2307.16877#77
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
78
Holstein, and Guernsey? Finally, I want the cow to say ‘Moo!’ in the ASCII art.”, “related apis”: [’figlet’, ‘list figlet styles’, ‘cowsay’, ‘list cowsay styles’] “Query”: “I’m writing a blog post on ASCII art and need to include some examples. Can you generate ASCII art for the following strings: ‘ASCII’, ‘art’, and ‘gallery’? You can first retrieve available figlet styles and then generate ASCII art for the strings using the styles.”, “related apis”: [’figlet’, ‘list figlet styles’] “Query”: “Greetings! I’m putting together a quirky slideshow about our furry friends and need your help to sprinkle some ASCII art goodness. Could you kindly fetch me the catalog of ASCII art styles available for animals? Also, I’m particularly keen on featuring ASCII art for creatures like pandas, cows, elephants, and penguins. And if they could say something cute like
2307.16789#78
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
78
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023a. Lost in the middle: How language models use long contexts.
2307.16877#78
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16877
79
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023b. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv., 55(9). Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023c. G-Eval: NLG evaluation using GPT-4 with better human alignment. Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint. Sabrina J. Mielke, Arthur Szlam, Emily Dinan, and Y.-Lan Boureau. 2022. Reducing conversational agents’ overconfidence through linguistic calibration. ArXiv:2012.14983 [cs].
2307.16877#79
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
80
For example, with tool [’Entrepreneur Mindset Collection’, ‘Random Words’, ‘thedigitalnewsfeederapi’, ‘Chemical Elements’], the given api names are (tool ‘Entrepreneur Mindset Collection’)’Random Quote in JSON format’, (tool ‘Random Words’)’Get multiple random words’, (tool ‘Random Words’)’Get a random word’, (tool ‘thedigitalnewsfeederapi’)’getting specific cricket articles’, (tool ‘thedigitalnewsfeederapi’)’Getting Cricket Articles’, (tool ‘thedigitalnewsfeederapi’)’getting specific news articles’, (tool ‘thedigitalnewsfeederapi’)’Getting News Articles’, (tool ‘thedigitalnewsfeederapi’)’getting all news articles’, (tool ‘Chemical Elements’)’Get All Chemical Elements’. Some sample queries and related apis would be: “Query”: “For my best friend’s surprise birthday party, I require inspiration for
2307.16789#80
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
80
Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev,
2307.16877#80
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16877
82
volume 133 of Proceedings of Machine Learning Research, pages 86–111. PMLR. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3470–3487, Dublin, Ireland. Association for Computational Linguistics. OpenAI. 2023. Gpt-4 technical report. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022a. Training language models to follow instructions with human feedback.
2307.16877#82
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
83
Also, I would appreciate details about the local hotels in my area for accommodation options. Your assistance is greatly appreciated.”, “related apis”: [[’Random Words’, ‘Get multiple random words’], [’thedigitalnewsfeederapi’, ‘Getting News Articles’], [’thedigitalnewsfeederapi’, ‘Getting all news articles’]] “Query”: “In the midst of organizing a team-building event for my esteemed company, I eagerly seek your valued input for invigorating activities. Might I kindly request a collection of random quotes that encapsulate the essence of teamwork and motivation? Additionally, I am keen on exploring news articles that showcase triumphant team-building events, as they serve as a wellspring of inspiration.”, “related apis”: [[’Entrepreneur Mindset Collection’, ‘Random Quote in JSON format’], [’thedigitalnewsfeederapi’, ‘Getting News Articles’]] “Query”: “I need specific cricket articles that discuss the health benefits of sports for my research paper on exercise. I also
2307.16789#83
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
83
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022b. Training language models to follow instructions with human feedback. ArXiv:2203.02155 [cs]. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Ashwin Paranjape, Omar Khattab, Christopher Potts, Matei Zaharia, and Christopher D Manning. 2022. Hindsight: Posterior-guided training of retrievers for improved open-ended generation. In International Conference on Learning Representations. Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277.
2307.16877#83
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
84
News Articles’]] “Query”: “I need specific cricket articles that discuss the health benefits of sports for my research paper on exercise. I also want to know which chemical elements are associated with exercising, like increased iron (Fe) and its impact on bone marrow.”, “related apis”: [[’thedigitalnewsfeederapi’, ‘getting specific cricket articles’], [’Chemical Elements’, ‘Get All Chemical Elements’]] “Query”: “I’m starting a new business venture and I need to make a speech announcing the new dawn. Provide me some quotes and words for me to start with. I would like to gather news articles about successful entrepreneurs for inspiration.”, “related apis”: [[’Entrepreneur Mindset Collection’, ‘Random Quote in JSON format’], [’Random Words’, ‘Get multiple random words’], [’thedigitalnewsfeederapi’, ‘getting specific news articles’]] These are only examples to show you how to write the query. Do not use APIs listed in the above examples, but rather, use the ones listed below in the
2307.16789#84
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
84
Peng Qi, Haejun Lee, Tg Sido, and Christopher Manning. 2021. Answering open-domain questions of varying reasoning steps from text. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3599–3614, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang,
2307.16877#84
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
86
Sampled API List (An example) { "tool_description": "EntreAPI Faker is used to dynamically create mock, demo, test and sample data for your application", "name": "EntreAPI Faker", "api_list": [ { "name": "Longitute", "url": "https://entreapi-faker.p.rapidapi.com/address/longitude", "description": "Generate a random longitude.", "method": "GET", "required_parameters": [], "optional_parameters": [ { "name": "max", "type": "NUMBER", "description": "Maximum value for latitude.", "default": "" }, { "name": "min", "type": "NUMBER", "description": "Minimum value for latitude.", "default": "" }, { "name": "precision", "type": "NUMBER", "description": "Precision for latitude.", "default": "" } ], "tool_name": "EntreAPI Faker", "category_name": "Data" }, { }, { }, {
2307.16789#86
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16789
87
"name": "Boolean", "url": "https://entreapi-faker.p.rapidapi.com/datatype /boolean", "description": "Randomly generate a boolean value.", "method": "GET", "required_parameters": [], "optional_parameters": [], "tool_name": "EntreAPI Faker", "category_name": "Data" "name": "Past", "url": "https://entreapi-faker.p.rapidapi.com/date/ past", "description": "Randomly generate a date value in the past.", "method": "GET", "required_parameters": [], "optional_parameters": [ { "name": "refDate", "type": "STRING", "description": "Starting reference date", "default": "" }, { "name": "years", "type": "NUMBER", "description": "Number of years for the range of dates.", "default": "" } ], "tool_name": "EntreAPI Faker", "category_name": "Data" "name": "Image Url", "url": "https://entreapi-faker.p.rapidapi.com/image/ imageUrl", "description": "Randomly generate an image URL.", "method": "GET", "required_parameters": [],
2307.16789#87
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]
2307.16877
87
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and D. Reitter. 2021a. Measuring attribution in natural language generation models. ArXiv, abs/2112.12870.
2307.16877#87
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16877
88
Hannah Rashkin, David Reitter, Gaurav Singh Tomar, and Dipanjan Das. 2021b. Increasing Faithfulness in Knowledge-Grounded Dialogue with Controllable Features. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 704–718, Online. Association for Computational Linguistics. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266. Devendra Sachan, Mostofa Patwary, Mohammad Shoeybi, Neel Kant, Wei Ping, William L. Hamilton, and Bryan Catanzaro. 2021. End-to-end training of neural retrievers for open-domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6648–6662, Online. Association for Computational Linguistics.
2307.16877#88
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16877
89
6648–6662, Online. Association for Computational Linguistics. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations.
2307.16877#89
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
http://arxiv.org/pdf/2307.16877
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
cs.CL, cs.AI
null
null
cs.CL
20230731
20230731
[ { "id": "2201.08239" }, { "id": "2004.04906" }, { "id": "2304.03277" }, { "id": "2203.02155" }, { "id": "2012.14983" }, { "id": "2205.14334" }, { "id": "2305.18654" }, { "id": "2112.11446" } ]
2307.16789
90
"name": "useRandomize", "type": "BOOLEAN", "description": "Add a random number parameter to the returned URL.", "default": "" }, { "name": "category", "type": "STRING", "description": "The category for the image. Can be one: abstract, animal, avatar, business, cats, city, fashion, food, nature, nightlife, people, sports, technics, transport", "default": "" } ], "tool_name": "EntreAPI Faker", "category_name": "Data" "name": "Sentence", "url": "https://entreapi-faker.p.rapidapi.com/lorem/ sentence", "description": "Randomly generate a sentence of Lorem Ipsum.", "method": "GET", "required_parameters": [], "optional_parameters": [ { "name": "wordCount", "type": "NUMBER", "description": "Number of words in the sentence.", "default": "" } ], "tool_name": "EntreAPI Faker", "category_name": "Data" "name": "Gender", "url": "https://entreapi-faker.p.rapidapi.com/name/ gender", "description": "Randomly select a gender.", "method": "GET",
2307.16789#90
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
http://arxiv.org/pdf/2307.16789
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230731
20231003
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2308.12519" }, { "id": "2306.08640" }, { "id": "2305.10601" }, { "id": "2304.08244" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2305.14318" }, { "id": "2306.13304" }, { "id": "2304.08354" }, { "id": "2306.11489" }, { "id": "2306.05301" }, { "id": "1908.10084" }, { "id": "2306.06624" }, { "id": "2305.06849" }, { "id": "2305.11554" }, { "id": "2212.10560" }, { "id": "2305.15334" }, { "id": "2305.14233" }, { "id": "2303.12712" }, { "id": "2109.01652" }, { "id": "2306.15595" } ]