doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.16364 | 9 | Recently, the focus has started to shift from assessing the capabilities of LLMs to using them in teaching and learning practice [21, 24]. Sarsa et al. showed that LLMs can generate viable programming questions including test cases and explanations [29]. Complementing this reshaping of the practices of teaching and learning, the importance of details such as context [18] and prompting [6] has begun to emerge. For example, White et al. present a prompt structuring framework for constructing prompts so they can be applied across problem domains, a catalog of prompts that have been successfully applied to improve LLM responses, and a demonstration of how prompts can be constructed from patterns and how such patterns can be combined effectively [33]. There is increasing interest in understanding the types of prompts that students construct
2 | 2307.16364#9 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
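A minimal sketch of how rows with the schema above could be loaded and regrouped into whole papers with the Hugging Face `datasets` library; the repository id `user/arxiv-chunks` is a hypothetical placeholder, not the dataset's actual name.

```python
# A minimal sketch, assuming the rows above are published as a Hugging Face
# dataset; "user/arxiv-chunks" is a hypothetical repository id, not the real one.
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("user/arxiv-chunks", split="train")

# Each record mirrors the schema above: doi, chunk-id, chunk, id, title, summary,
# source, authors, categories, comment, journal_ref, primary_category,
# published, updated, references.
row = ds[0]
print(row["doi"], row["chunk-id"], row["title"])

# Regroup chunks into whole papers by DOI, restoring chunk order.
by_paper = defaultdict(list)
for r in ds:
    by_paper[r["doi"]].append((r["chunk-id"], r["chunk"]))
for doi, chunks in by_paper.items():
    chunks.sort()
```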
2307.16789 | 9 | ~
Table 1: A comparison of our ToolBench to notable instruction tuning datasets for tool learning.
that involve both single-tool and multi-tool scenarios. This ensures that our model learns not only how to interact with individual tools but also how to combine them to accomplish complex tasks; • Solution Path Annotation: each solution path may contain multiple rounds of model reasoning and real-time API calls to derive the final response. However, even the most sophisticated LLM, i.e., GPT-4, achieves a low pass rate for complex human instructions, making annotation inefficient. To this end, we develop a novel depth-first search-based decision tree (DFSDT) to bolster the planning and reasoning ability of LLMs. Compared with conventional ReACT, DFSDT enables LLMs to evaluate a multitude of reasoning paths and make deliberate decisions to either retract steps or proceed along a promising path. In experiments, DFSDT significantly improves the annotation efficiency and successfully completes those complex instructions that cannot be fulfilled using ReACT. | 2307.16789#9 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
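The DFSDT idea above amounts to a depth-first search with backtracking over candidate reasoning paths. The sketch below is a hedged illustration of that control flow; `expand` and `is_successful` are hypothetical callbacks standing in for LLM-proposed next actions and a task-completion check, not the paper's implementation.

```python
# A minimal sketch of depth-first search over candidate reasoning paths, in the
# spirit of the DFSDT idea described above; this is not the paper's code.
from typing import Callable, List, Optional, Sequence

def dfs_decision_tree(
    path: List[str],
    expand: Callable[[Sequence[str]], List[str]],    # propose next steps (LLM call)
    is_successful: Callable[[Sequence[str]], bool],  # is the instruction fulfilled?
    max_depth: int = 8,
) -> Optional[List[str]]:
    """Return the first successful path found, or None. Unlike a single
    ReACT-style rollout, a dead end triggers backtracking to try siblings."""
    if is_successful(path):
        return list(path)
    if len(path) >= max_depth:
        return None  # give up on this branch and backtrack
    for step in expand(path):
        result = dfs_decision_tree(path + [step], expand, is_successful, max_depth)
        if result is not None:
            return result
    return None
```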
2307.16877 | 9 | To summarize, our contributions are as follows:
• We evaluate four instruction-following models – Llama-2, GPT-3.5, Flan-T5, and Alpaca – in retrieval-augmented settings across three diverse QA tasks. We collect human annotations for both correctness and faithfulness.
• We analyze several metrics in relation to human judgments, finding that GPT-4-based evaluation is the most correlated for both correctness and faithfulness. Additionally, we analyze failures of traditional QA metrics and highlight that models are unfairly penalized for verbosity.
• We propose simple token-overlap based metrics for both correctness and faithfulness – recall for correctness and K-Precision for faithfulness – and demonstrate their strong correlation with human judgments.
• Our results indicate that instruction-following models can surpass the performance of fine-tuned models in terms of correctness. However, these models struggle to be faithful to provided knowledge, often demonstrating a tradeoff between the ability to remain faithful to relevant and irrelevant knowledge.
# 2 Related Work | 2307.16877#9 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
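The proposed token-overlap metrics can be illustrated with a short sketch: recall measures the fraction of reference-answer tokens recovered in the response (correctness), while K-Precision measures the fraction of response tokens grounded in the provided knowledge (faithfulness). Whitespace tokenization and lowercasing are simplifying assumptions here; the paper's exact normalization may differ.

```python
# A minimal sketch of the token-overlap metrics named above, assuming simple
# whitespace tokenization and lowercasing; the paper's normalization may differ.
from collections import Counter

def _tokens(text: str) -> Counter:
    return Counter(text.lower().split())

def recall(response: str, reference: str) -> float:
    """Correctness proxy: fraction of reference-answer tokens found in the response."""
    ref, resp = _tokens(reference), _tokens(response)
    if not ref:
        return 0.0
    overlap = sum(min(count, resp[tok]) for tok, count in ref.items())
    return overlap / sum(ref.values())

def k_precision(response: str, knowledge: str) -> float:
    """Faithfulness proxy: fraction of response tokens found in the provided knowledge."""
    resp, know = _tokens(response), _tokens(knowledge)
    if not resp:
        return 0.0
    overlap = sum(min(count, know[tok]) for tok, count in resp.items())
    return overlap / sum(resp.values())
```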
[Figure 1 graphic omitted: four input → output matrix pairs (2×2 to 5×5) showing the top-left and bottom-right non-overlapping quadrants swapped; only the caption below is preserved.]
Figure 1: An example Prompt Problem that displays the data visually so that students cannot simply copy and paste the description into an LLM. The goal is to swap the top-left and bottom-right non-overlapping quadrants of the matrix. | 2307.16364#10 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
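The quadrant-swap task in Figure 1 can be made concrete with a short reference solution. This is an illustrative sketch of the behaviour a successful prompt should elicit, assuming odd-sized matrices leave their middle row and column untouched; it is not code from the paper.

```python
# A minimal sketch of the target behaviour for the Figure 1 Prompt Problem:
# swap the non-overlapping top-left and bottom-right quadrants of a square
# matrix (assumption: the middle row/column of odd-sized matrices stays put).
def swap_quadrants(matrix):
    n = len(matrix)
    k = n // 2  # size of each non-overlapping quadrant
    result = [row[:] for row in matrix]
    for i in range(k):
        for j in range(k):
            result[i][j], result[n - k + i][n - k + j] = (
                matrix[n - k + i][n - k + j],
                matrix[i][j],
            )
    return result

# Example: a 3x3 matrix only swaps its corner elements 1 and 9.
print(swap_quadrants([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# [[9, 2, 3], [4, 5, 6], [7, 8, 1]]
```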
To assess the tool-use capabilities of LLMs, we develop an automatic evaluator, ToolEval, backed up by ChatGPT. It comprises two key metrics: (1) pass rate, which measures LLM's ability to successfully execute an instruction within limited budgets, and (2) win rate, which compares the quality and usefulness of two solution paths. We demonstrate that ToolEval achieves a high correlation with human evaluation and provides a robust, scalable, and reliable assessment for machine tool use.
By fine-tuning LLaMA on ToolBench, we obtain ToolLLaMA. After evaluation based on our ToolEval, we derive the following findings: | 2307.16789#10 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
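A hedged sketch of how the two ToolEval quantities could be aggregated once per-instruction judgments exist (produced by ChatGPT or human annotators); the judging prompts themselves are not reproduced, and counting a tie as half a win is an assumption, not necessarily ToolEval's convention.

```python
# A minimal sketch of aggregating pass rate and win rate, assuming the
# per-instruction judgments were already produced by an external evaluator.
from typing import Sequence

def pass_rate(solved: Sequence[bool]) -> float:
    """Share of instructions successfully completed within the budget."""
    return sum(solved) / len(solved) if solved else 0.0

def win_rate(preferences: Sequence[str]) -> float:
    """Share of pairwise comparisons won by candidate A; each entry is
    'A', 'B', or 'tie'. Counting a tie as half a win is an assumption."""
    if not preferences:
        return 0.0
    score = sum(1.0 if p == "A" else 0.5 if p == "tie" else 0.0 for p in preferences)
    return score / len(preferences)
```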
2307.16877 | 10 | # 2 Related Work
Instruction-Following Models Fine-tuning pre-trained models on a collection of NLP tasks formatted as natural language instructions results in instruction-following models. These models can generalize to new unseen tasks based solely on instruction and optionally a few demonstrations, often outperforming LLMs in zero-shot and few-shot settings while being only a fraction of their size (Mishra et al., 2022). Depending on the nature of the datasets used for training, these models can be broadly classified into three categories.
The majority of instruction-following models in the research community are trained on publicly available NLP datasets verbalized by human annotators (Wei et al., 2022; Mishra et al., 2022; Wang et al., 2022b; Chung et al., 2022; Iyer et al., 2023). The number of tasks ranges from a few tens (e.g. 62 in Wei et al. 2022) to several hundred (e.g. 1800+ in Iyer et al. 2023).
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
when communicating with LLMs. Babe et al. developed a benchmark dataset of 1,749 prompts aimed at 48 problems, written by 80 novice Python programming students [2], which can be used by others for LLM benchmarking as well as tool development.
A logical next step towards integrating LLMs into teaching practice is developing tools and resources to aid students in effectively working with LLMs for learning. Lao and Guo interviewed 19 introductory programming instructors from nine countries across six continents and found that some instructors are embracing the idea of integrating AI tools into current courses via mechanisms such as giving personalized help to students and aiding instructors with time-consuming tasks [16]. MacNeil et al. used LLM-generated code explanations successfully in a web software development e-book [22], and Zingaro and Porter are completing a textbook for teaching introductory programming with Copilot and ChatGPT from day one [26]. Integrating LLMs into computer science courses seems inevitable and stands to transform the way the subject is taught at all levels [5, 31]. We believe that Prompt Problems will be one important step along the journey towards integrating the use of LLMs in computer science education.
# 3 PILOT STUDY | 2307.16364#11 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 11 | By fine-tuning LLaMA on ToolBench, we obtain ToolLLaMA. After evaluation based on our ToolEval, we derive the following findings:
• ToolLLaMA demonstrates a compelling capability to handle both single-tool and complex multi-tool instructions. As depicted in Figure 2, ToolLLaMA outperforms Text-Davinci-003 and Claude-2, achieves comparable performance to the "teacher model" ChatGPT, and is only slightly inferior to GPT-4. Besides, ToolLLaMA exhibits robust generalization to previously unseen APIs, requiring only the API documentation to adapt to new APIs effectively. This flexibility allows users to incorporate novel APIs seamlessly, thus enhancing the model's practical utility. • We show that our DFSDT serves as a general decision-making strategy to enhance the reasoning capabilities of LLMs. DFSDT broadens the search space by considering multiple reasoning traces and achieves significantly better performance than ReACT.
• We train a neural API retriever, which alleviates the need for manual selection from the large API pool in practice. As shown in Figure 1, given an instruction, the API retriever recommends a set of relevant APIs, which are sent to ToolLLaMA for multi-round decision making to derive the final answer. Despite sifting through a large pool of APIs, the retriever exhibits remarkable retrieval precision, returning APIs closely aligned with the ground truth. | 2307.16789#11 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 11 | Ouyang et al. (2022a) conjecture that public NLP datasets are limited in scope and lack sufficient diversity in user inputs. To address this, they train InstructGPT on a mix of human-written prompts submitted to the OpenAI API and prompts created by expert labelers. The model is further fine-tuned with human feedback to align it more closely with human preferences (RLHF; Christiano et al. 2017). Llama-2 (Touvron et al., 2023b) is another recent model in this category, trained on a mix of public NLP datasets and high-quality expert annotations of dialogue-style instructions, followed by RLHF.
Finally, self-instruct (Wang et al., 2022a) is an alternative paradigm to reduce reliance on human-generated task instructions. Starting from a small manually-annotated task pool, an LLM is prompted to generate instructions and demonstrations of new tasks. The resultant synthetic dataset is used to train a language model to follow instructions (Taori et al., 2023; Peng et al., 2023).
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
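A hedged sketch of the self-instruct bootstrapping loop just described; `generate_new_tasks` is a hypothetical stand-in for prompting an LLM with sampled demonstrations, and the novelty filter is a placeholder rather than the exact filtering used by Wang et al. (2022a).

```python
# A minimal sketch of the self-instruct bootstrapping loop described above;
# the callback and the deduplication heuristic are illustrative placeholders.
import random
from typing import Callable, Dict, List

Task = Dict[str, str]  # e.g. {"instruction": ..., "output": ...}

def self_instruct(
    seed_tasks: List[Task],                                   # small manually-annotated pool
    generate_new_tasks: Callable[[List[Task]], List[Task]],   # one LLM call per round
    target_size: int,
    demos_per_prompt: int = 4,
    max_rounds: int = 1000,
) -> List[Task]:
    pool = list(seed_tasks)
    seen = {t["instruction"] for t in pool}
    for _ in range(max_rounds):
        if len(pool) >= target_size:
            break
        demos = random.sample(pool, min(demos_per_prompt, len(pool)))
        for task in generate_new_tasks(demos):    # LLM proposes new instruction + demo
            if task["instruction"] not in seen:   # crude novelty filter
                pool.append(task)
                seen.add(task["instruction"])
    return pool  # synthetic dataset used to fine-tune the instruction follower
```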
2307.16364 | 12 | # 3 PILOT STUDY
In order to understand how students might try to use LLM tools like ChatGPT to communicate program requirements, we asked a group of graduate students at the University of Auckland to participate in a prompt writing assignment. This assignment took place during a single class session in April 2023. We provided a visual representation of a problem (see Figure 1) and asked participants to query ChatGPT to write a program that could convert the shown input to the shown output. The problem description was provided visually to prevent participants from easily copying and pasting it and, instead, to encourage them to formulate a suitable prompt themselves. We also asked students to keep a log of their interactions with ChatGPT, organized by their prompt followed by ChatGPT's response, etc. in sequential order. We hoped that this would provide
information about the challenges that students face when using LLMs to write code, and to suggest where a tool might potentially help to address those difficulties.
Thirty-six graduate students participated in the pilot study by completing the activity described above and reflecting on it by writing an open-response review of the task. Fifteen participants chose to provide us with their ChatGPT conversations. We discuss these below in Section 3.1. Thirty-one participants shared their reflections on the task, which we discuss in Section 3.2.
# 3.1 Interactions | 2307.16364#12 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 12 | ⢠ToolLLaMA exhibits strong generalization performance on an out-of-distribution (OOD) dataset APIBench (Patil et al., 2023). Despite not training on any of the APIs or instructions on APIBench, ToolLLaMA performs on par with Gorilla, a pipeline specifically designed for APIBench.
# 2 DATASET CONSTRUCTION
We introduce the three-stage construction process of ToolBench: API collection (§ 2.1), instruction generation (§ 2.2), and solution path annotation (§ 2.3). All procedures are based on ChatGPT (gpt-3.5-turbo-16k), requiring minimal human supervision and can be easily extended to new APIs.
Figure 3: The hierarchy of RapidAPI (left) and the process of instruction generation (right).
# 2.1 API COLLECTION
We start by introducing RapidAPI and its hierarchy, followed by how we crawl and filter APIs. | 2307.16789#12 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
Datasets for instruction-tuning often contain several QA tasks. However, these tasks are either reading comprehension (i.e. answering a question about a provided passage) or closed-book QA (i.e., without using a large information source). In this work, we explore a more practical setting, where an instruction-following model is paired with a retriever, a paradigm known as retrieval-augmented generation (RAG; Lewis et al. 2020).
Retrieval-Augmented Generation RAG entails using a retriever to select relevant passages from an information source, which are subsequently passed to a generator to produce a response. This two-step retrieve-generate process has been shown to reduce hallucinations (Shuster et al., 2021), while lending interpretability and configurability to the model (Lewis et al., 2020).
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
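The retrieve-then-generate pattern described above can be sketched as follows; `retrieve` and `generate` are abstract callables standing in for any retriever and any instruction-following generator, and the prompt template is an illustrative assumption rather than one used in the paper.

```python
# A minimal sketch of the retrieve-then-generate pattern described above.
from typing import Callable, List

def retrieval_augmented_answer(
    question: str,
    retrieve: Callable[[str, int], List[str]],   # returns top-k passages
    generate: Callable[[str], str],              # instruction-following generator
    k: int = 3,
) -> str:
    passages = retrieve(question, k)
    context = "\n\n".join(f"Passage {i + 1}: {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the passages below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```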
2307.16364 | 13 | # 3.1 Interactions
Listing 1 shows an example of a student in the pilot study going back-and-forth with ChatGPT when trying to solve the problem. Notice the "conversational" beginning and the subsequent iterative refinement. Each successive prompt attempts to refine the code that is already generated, rather than modify the original prompt to be more complete. We expected computer science graduate students to have very few problems writing effective prompts; however, several interesting trends emerged from this activity. First, many students began with an incomplete prompt, starting the conversation with a greeting and no information on the code they wanted the LLM to produce. In each of these cases, what followed was a series of iterative attempts by the participant to incrementally clarify their intent, rather than rewriting the original prompt. Several participants also struggled with understanding the problem and therefore wrote poorly formed prompts with little chance of achieving the desired result. In situations like these, many student conversations with ChatGPT were full of requests for ineffectual changes.
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 13 | # 2.1 API COLLECTION
We start by introducing RapidAPI and its hierarchy, followed by how we crawl and filter APIs.
RapidAPI Hub RapidAPI is a leading API marketplace that connects developers with thousands of real-world APIs, streamlining the process of integrating diverse services into applications. Developers can test and connect with various APIs by registering only a RapidAPI key. All APIs in RapidAPI can be classified into 49 coarse-grained categories (link), such as sports, finance, and weather. The categories associate an API with the most relevant topic. Additionally, the hub also provides 500+ fine-grained categorizations called collections (link), e.g., Chinese APIs and database APIs. APIs in the same collection share a common characteristic and often have similar functionalities or goals.
Hierarchy of RapidAPI As shown in Figure 3, each tool may be composed of multiple APIs. For each tool, we crawl the following information: the name and description of the tool, the URL of the host, and all the available APIs belonging to the tool; for each API, we record its name, description, HTTP method, required parameters, optional parameters, request body, executable code snippets for API call, and an example API call response. This rich and detailed metadata serves as a valuable resource for LLMs to understand and effectively use the APIs, even in a zero-shot manner. | 2307.16789#13 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
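A hedged sketch of how the crawled tool and API metadata listed above could be represented; the field names mirror the prose description, not an official ToolBench schema.

```python
# A minimal sketch of records holding the per-tool and per-API metadata listed
# above; field names follow the prose, not the paper's actual data format.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class APIRecord:
    name: str
    description: str
    http_method: str                      # e.g. "GET" or "POST"
    required_parameters: List[Dict] = field(default_factory=list)
    optional_parameters: List[Dict] = field(default_factory=list)
    request_body: Dict = field(default_factory=dict)
    code_snippet: str = ""                # executable example of the API call
    example_response: Dict = field(default_factory=dict)

@dataclass
class ToolRecord:
    name: str
    description: str
    host_url: str
    category: str                         # one of the 49 coarse-grained categories
    apis: List[APIRecord] = field(default_factory=list)
```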
RAG is a dominant paradigm for several information-seeking QA tasks such as open-domain QA (Chen et al. 2017; Lee et al. 2019; Sachan et al. 2021, inter alia), multi-hop QA (Asai et al. 2020; Qi et al. 2021; Izacard et al. 2022; inter alia), and conversational QA (Anantha et al. 2021; Adlakha et al. 2022; inter alia). Various works differ on how to train the generator to utilize information from the retrieved passages, e.g., by extracting snippets (Chen et al., 2017; Clark and Gardner, 2018; Wang et al., 2019; Karpukhin et al., 2020) or by jointly attending encoded passages and previously generated tokens (Fusion-in-Decoder; Izacard and Grave 2021).
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
Second, participants would prompt ChatGPT for code to solve the problem, paste the code into the automated assessment tool used in the exercise to check correctness, and then paste the failed test cases back into ChatGPT verbatim without any additional prompts or context. Participant #12, after several prompts that resulted in code that was not quite complete, told ChatGPT: "This is very wrong, do the same as you had before but make sure that the top right quarter and the bottom left quarter are not edited by the function." ChatGPT responded with an apology and more almost-correct code.
Third, some students misunderstood the assignment altogether, such as Participant #13 who initially prompted ChatGPT: "give me a python 3 swap function which swap the the elements along the diagonal". After taking the code generated by ChatGPT and submitting it to the automated assessment tool used in the activity, they tried another incorrect prompt: "give me a python 3 swap function which works by first swapping the elements of each row in place, and then swapping the elements of each column in place."
These examples indicate that many students, even ones many years into their programming education, do not necessarily understand how to write effective prompts. They could benefit from explicit prompt writing practice that could teach them to understand the problem, write a single thorough prompt, and check the code generated by the LLM as having complete test case coverage.
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
API Filtering Initially, we gathered 10,853 tools (53,190 APIs) from RapidAPI. However, the quality and reliability of these APIs can vary significantly. In particular, some APIs may not be well-maintained, such as returning 404 errors or other internal errors. To this end, we perform a rigorous filtering process (details in appendix A.1) to ensure that the ultimate tool set of ToolBench is reliable and functional. Finally, we only retain 3,451 high-quality tools (16,464 APIs).
# 2.2 INSTRUCTION GENERATION | 2307.16789#14 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 14 | Recent works have also explored using off-the-shelf language models as generators in the RAG pipeline, alleviating the need to fine-tune or learn additional parameters. Lazaridou et al. (2022) demonstrated that few-shot prompting an LM conditioned on the web results outperforms a vanilla LM for several open-domain QA tasks. Shi et al. (2023) showcase that pairing LLMs like GPT-3 (Brown et al., 2020) with retrievers improves language modeling performance as well. Separate from these works, we evaluate retrieval-augmented instruction-following models based only on natural language instruction. In the absence of training instances or demonstrations, these models do not learn the distribution of reference answers of the target QA dataset, raising new challenges for evaluation. | 2307.16877#14 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 15 | # 3.2 Reflections
When reflecting on the task in our pilot study, many of the students mentioned that code-producing LLMs need supervision to produce correct code. After working with ChatGPT to produce correct code
Listing 1: Transcript of a conversation with ChatGPT.
Prompt 1: Hi, I want you to write a Python program.
ChatGPT response 1: Sure, what would you like the program to do?
Prompt 2: I have a square matrix, and I want to swap the first half of the rows with the second half of the rows.
ChatGPT response 2: Here's a Python program that should do what you're asking for:
def swap_matrix_rows(matrix):
    n = len(matrix)
    half_n = n // 2
    for i in range(half_n):
        matrix[i], matrix[half_n + i] = matrix[half_n + i], matrix[i]
    return matrix
Prompt 3: That's not exactly what I meant. I actually want to swap the first half of the elements in each row with the corresponding elements in the second half of the rows. | 2307.16364#15 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 15 | INSTRUCTION GENERATION
Different from prior works, we specifically focus on two crucial aspects for instruction generation: (1) diversity: to train LLMs to handle a wide range of API usage scenarios, thereby boosting their generalizability and robustness; and (2) multi-tool usage: to mirror real-world situations that often demand the interplay of multiple tools, improving the practical applicability and flexibility of LLMs. To this end, instead of brainstorming instructions from scratch and then searching for relevant APIs, we sample different combinations of APIs and craft various instructions that involve them. Generating Instructions for APIs. Define the total API set as S_API; at each time, we sample a few APIs, S_N^sub = {API_1, ..., API_N}, from S_API. We prompt ChatGPT to understand the functionalities of these APIs and then generate (1) possible instructions (Inst_*) that involve APIs in S_N^sub, and (2) relevant APIs (S_*^rel, a subset of S_N^sub) for each instruction, i.e., {[S_1^rel, Inst_1], ..., [S_N'^rel, Inst_N']}, where N' denotes the number of generated instances. These (instruction, relevant API) pairs will be used for
Figure 4: A comparison of our DFSDT and conventional CoT or ReACT during model reasoning (left). We show part of the solution path annotation process using ChatGPT (right). | 2307.16789#15 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 15 | Evaluation in QA. Lexical matching between a set of reference answers and the model response remains a dominant approach for evaluation across multiple NLP tasks. As QA tasks generally consist of short reference answers, previous works have primarily relied on Exact Match (EM) and F1 to evaluate and benchmark models (Rajpurkar et al., 2016; Reddy et al., 2019). For tasks that require generating longer sequences, such as summarization and translation, subsequence-based lexical matching is generally employed (Papineni et al. 2002; Banerjee and Lavie 2005; Lin 2004, inter alia). | 2307.16877#15 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 16 | Prompt 3: That's not exactly what I meant. I actually want to swap the first half of the elements in each row with the corresponding elements in the second half of the rows.
ChatGPT response 3: No problem, here's an updated version of the Python program that should swap the first half of the elements in each row with the corresponding elements in the second half of the rows:
def swap_matrix_elements(matrix):
    n = len(matrix)
    half_n = n // 2
    for i in range(half_n):
        for j in range(len(matrix[i]) // 2):
            matrix[i][j], matrix[half_n + i][j - len(matrix[i]) // 2] = matrix[half_n + i][j - len(matrix[i]) // 2], matrix[i][j]
    return matrix
Prompt 4: That's not quite what I had in mind either. What I want to do is swap the first half of the columns in the first half of the rows with the last half of the columns in the last half of the rows. | 2307.16364#16 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 16 | Figure 4: A comparison of our DFSDT and conventional CoT or ReACT during model reasoning (left). We show part of the solution path annotation process using ChatGPT (right).
training the API retriever in § 3.1. We use different sampling strategies (introduced later) to cover all APIs and most of their combinations, thus ensuring the diversity of our instructions.
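A minimal sketch of one such sampling strategy -- the intra-category, multi-tool flavor described later in this section -- assuming each API record carries hypothetical "name", "tool", and "category" fields:

import random

def sample_intra_category_apis(apis, rng=random):
    # Pick one RapidAPI category, then 2-5 tools from it, and at most
    # 3 APIs per chosen tool (mirroring the I2 setting).
    category = rng.choice(sorted({a["category"] for a in apis}))
    tools = sorted({a["tool"] for a in apis if a["category"] == category})
    chosen_tools = rng.sample(tools, min(len(tools), rng.randint(2, 5)))
    subset = []
    for tool in chosen_tools:
        tool_apis = [a for a in apis if a["tool"] == tool]
        subset.extend(rng.sample(tool_apis, min(3, len(tool_apis))))
    return subset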
The prompt for ChatGPT is composed of (1) a general description of the intended instruction generation task, (2) comprehensive documentation of each API in S_N^sub, which helps ChatGPT understand their functionality and interplay, and (3) three in-context seed examples {seed_1, seed_2, seed_3}. Each seed example is an ideal instruction generation written by human experts. These seed examples are leveraged to better regulate ChatGPT's behavior through in-context learning. In total, we wrote 12 / 36 diverse seed examples (S_seed) for the single-tool / multi-tool setting, and randomly sampled three examples at each time. Detailed prompts for instruction generation are described in appendix A.7. Overall, the generation process can be formulated as follows: | 2307.16789#16 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 16 | A major shortcoming of lexical matching is that it depends on a set of reference answers which may be incomplete. To overcome this limitation, subsequent model-based metrics compute the semantic similarity between the reference answer and the model response using contextualized embeddings (Zhang et al., 2020) or train a specialized classifier (Bulian et al., 2022) to predict equivalence. More recently, several works resort to prompting LLMs like GPT-4 (OpenAI, 2023) to act as evaluators (Chiang et al., 2023; Peng et al., 2023; Chiang and Lee, 2023; Kamalloo et al., 2023; Liu et al., 2023c). In this work, we explore evaluating both correctness and faithfulness using GPT-4. | 2307.16877#16 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 17 | ChatGPT response 4: My apologies, here's an updated version of the Python program that should swap the first half of the columns in the first half of the rows with the last half of the columns in the last half of the rows:
def swap_matrix_elements(matrix):
    n = len(matrix)
    half_n = n // 2
    half_m = len(matrix[0]) // 2
    for i in range(half_n):
        for j in range(half_m):
            matrix[i][j], matrix[half_n + i][half_m + j] = matrix[half_n + i][half_m + j], matrix[i][j]
    return matrix
Prompt 5: Oops, sorry, I wasn't clear. I actually want to swap the first half of the elements in the first half of the rows with the corresponding elements in the last half of the rows.
run against test cases, many students realized that writing code this way required a different kind of critical thinking. Instead of thinking through how to structure code to properly address the requirements, users instead need to carefully read and test the code generated for them to ensure it precisely fits their needs. Participants recognized
[Figure: visual representation of problem -> learner enters prompt -> LLM generates code from prompt -> code executed against tests] | 2307.16364#17 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16877 | 17 | Concurrent to our work, Kamalloo et al. (2023) evaluate the correctness of InstructGPT in zero-shot and few-shot settings along with several fine-tuned models for open-domain QA. They highlight the shortcomings of traditional QA metrics and propose BEM (Bulian et al., 2022) and LLM-based evaluation as viable alternatives. However, they do not consider InstructGPT in retrieval-augmented settings. In contrast to their work, we investigate both correctness and faithfulness of multiple instruction-following models across three diverse QA tasks and propose simple token-overlap based metrics that correlate highly with human judgments. | 2307.16877#17 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 18 | [Figure: visual representation of problem -> learner enters prompt -> LLM generates code from prompt -> code executed against tests]
that training was needed -- both training of novice students in how to use LLMs so that they are more of a help than a hindrance, and training of the models so that they provide responses targeted to novice learners. For instance, participants said:
P11: "[It] made me reflect on the importance of critical thinking and proper supervision in using these models."
P19: "I think it's a double-edged sword in itself. Perhaps they need more supervision and training in the education field before they can become the best study companion for students outside of the classroom."
Figure 2: The Promptly tool implements a simple variation of Prompt Problems in which the code generated by the LLM is automatically executed against a set of test cases and cannot be edited directly. In order to modify the code, the learner is directed to edit the prompt.
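A minimal sketch of such an execute-against-tests step, assuming the generated code defines a single function whose name is known for the problem and that each test case is an (inputs, expected) pair; the real Promptly backend differs in its details:

def run_generated_code(code_str, func_name, test_cases):
    # Execute the LLM-generated code in a fresh namespace (untrusted code;
    # a real deployment would sandbox this), then call the expected function
    # on each test input and compare against the expected output.
    namespace = {}
    exec(code_str, namespace)
    func = namespace[func_name]
    results = []
    for inputs, expected in test_cases:
        actual = func(*inputs)
        results.append((inputs, expected, actual, actual == expected))
    return results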
Other students realized that prompt engineering to create code is a different kind of programming compared to how they learned. Coercing an LLM to generate correct code will need to be an iterative exercise in prompt refinement, not prompt conversation. Telling the LLM to revise its previous code, built from a previous prompt, may or may not work due to the way tools like ChatGPT maintain conversational context. Examples of this from participants can be seen below: | 2307.16364#18 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 18 | Sampling Strategies for Different Scenarios. As shown in Figure 3, for the single-tool instructions (I1), we iterate over each tool and generate instructions for its APIs. However, for the multi-tool setting, since the interconnections among different tools in RapidAPI are sparse, randomly sampling tool combinations from the whole tool set often leads to a series of irrelevant tools that cannot be covered by a single instruction in a natural way. To address the sparsity issue, we leverage the RapidAPI hierarchy information. Since tools belonging to the same RapidAPI category or collection are generally related to each other in functionality and goals, we randomly select 2-5 tools from the same category / collection and sample at most 3 APIs from each tool to generate the instructions. We denote the generated instructions as intra-category multi-tool instructions (I2) and intra-collection multi-tool instructions (I3), respectively. Through rigorous human evaluation, we find that instructions generated in this way already have a high diversity that covers various practical scenarios. We also provide visualization for instructions using Atlas (link) to support our claim. | 2307.16789#18 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 18 | Faithfulness and Groundedness. Conversational models have been shown to produce factually incorrect or unsupported statements (Rashkin et al., 2021b; Dziri et al., 2022b), known as hallucinations. To alleviate those issues, various works attempt to reduce hallucinations via methods such as iterative refinement (Dziri et al., 2021), linguistic calibration (Mielke et al., 2022; Lin et al., 2022), or by editing instances of hallucinations (Dziri et al., 2022a), thus improving faithfulness of these models. Several metrics have also been developed to measure faithfulness. Honovich et al. (2021) proposed Q2, an automatic faithfulness evaluation metric that checks for factual consistency based on automatic question generation and question answering. FaithCritic (Dziri et al., 2022a) is another model-based metric that predicts the degree of hallucination in a model's response.
For information-seeking, previous works have considered groundedness -- the extent to which the generator relies on retrieved passages (Paranjape et al., 2022), quantified using Knowledge-F1 (K-F1; Shuster et al. 2021). In this work, we consider | 2307.16877#18 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 19 | text-davinci-003 which, although now officially a legacy model, is less likely to generate superfluous text and comments in the responses. We found that the gpt-3.5-turbo model requires significant additional prompting to increase the likelihood of generating only executable code, but that relying on prompting alone can be unreliable. Future work will explore additional filtering approaches in order to transition to this newer model. All relevant data, including prompts, responses and testing outcomes is stored using Firestore's NoSQL database.
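A minimal sketch of the kind of response filtering this implies, dropping any leading natural-language commentary so that only executable Python remains; this is illustrative rather than Promptly's actual filter:

def extract_python_code(response_text):
    # Keep everything from the first line that starts a definition or import;
    # anything before that is assumed to be commentary from the model.
    lines = response_text.splitlines()
    for i, line in enumerate(lines):
        if line.lstrip().startswith(("def ", "import ", "from ", "class ")):
            return "\n".join(lines[i:]).strip()
    return response_text.strip()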
P12: "I also found it interesting how difficult it was to get ChatGPT to write a somewhat simple program. I understand now that it is almost like a different kind of programming to get it to do what you want, and I can see the potential uses in education." | 2307.16364#19 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 19 | After generating the initial set of instructions, we further filter those with hallucinated relevant APIs by assessing whether they exist in S_N^sub. Finally, we collect nearly 200k qualified (instruction, relevant API) pairs, including 87413, 84815, and 25251 instances for I1, I2, and I3, respectively.
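A minimal sketch of that filtering step, assuming each generated instance is an (instruction, relevant API names) pair and that sampled_api_names holds the APIs that were actually shown to ChatGPT:

def filter_hallucinated(instances, sampled_api_names):
    # Keep a pair only if every API ChatGPT marked as relevant really
    # belongs to the sampled subset S_N^sub.
    sampled = set(sampled_api_names)
    return [
        (instruction, relevant_apis)
        for instruction, relevant_apis in instances
        if set(relevant_apis) <= sampled
    ]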
2.3 SOLUTION PATH ANNOTATION
As shown in Figure 4, given an instruction Inst_*, we prompt ChatGPT to search for a valid action sequence: {a_1, ..., a_N}. Such a multi-step decision-making process is cast as a multi-round conversation for ChatGPT. At each round t, the model generates an action a_t based on previous interactions, i.e., ChatGPT(a_t | {a_1, r_1, ..., a_{t-1}, r_{t-1}}, Inst_*), where r_* denotes the real API response. For each
a_t, ChatGPT should specify its "thought", which API to use, and the specific parameters for this API, i.e., a_t has the following format: "Thought: ..., API Name: ..., Parameters: ...". | 2307.16789#19 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 19 | a model response to be faithful if it is grounded in the passage relevant to the user's information need. Concurrent to our work, Chiesurin et al. (2023) investigated hallucination of retrieval-augmented GPT-3 for the conversational QA task (Adlakha et al., 2022). They found that GPT-3 is likely to produce responses that appear trustworthy but are unfaithful.
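A minimal sketch of a simple token-overlap view of this notion of faithfulness, scoring the F1 overlap between response tokens and the tokens of the relevant passage; this is an illustration, not the exact metric proposed in this work:

from collections import Counter

def token_overlap_f1(response, passage):
    # Whitespace tokenization for illustration; a real metric would also
    # lowercase, strip punctuation, and typically remove stopwords.
    response_tokens = response.lower().split()
    passage_tokens = passage.lower().split()
    overlap = sum((Counter(response_tokens) & Counter(passage_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(response_tokens)
    recall = overlap / len(passage_tokens)
    return 2 * precision * recall / (precision + recall)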
# 3 Experimental Setup
# 3.1 Tasks
We evaluate our approach on the validation splits of three information-seeking QA tasks. The total number of questions and passages for each dataset are provided in Table 1. We describe the datasets used for each task below.
Open-domain QA. Natural Questions (NQ; Kwiatkowski et al. 2019) includes questions sourced from Google queries, with reference answers written by human annotators. We use the open version of NQ (Lee et al., 2019) that consists of short answers based on 100-token passages from English Wikipedia (indexed in Dec. 2018).
Multi-hop QA. We use HotpotQA (Yang et al., 2018) for this task, where each question requires reasoning across two Wikipedia passages. The passages are taken from the initial paragraphs of English Wikipedia articles (indexed in October 2017). | 2307.16877#19 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 20 | P15: "The most difficult part of this exercise is to properly instruct ChatGPT so that it could fully understand the requirements. ChatGPT appears to be 'polite but stubborn', as it would generate code that could be run successfully but does not produce the correct output. When asking ChatGPT for a fix, it would politely provide a new snippet of code but the new code didn't effectively fix the problem, and sometimes there were no modifications made to the new code at all." These representative samples from the reflections by students indicated to us that learning how to successfully write prompts would need to be a skill taught explicitly in introductory programming courses, alongside other skills that are traditionally taught. We propose the idea of Prompt Problems to address this new gap in programming education.
# 4.1 Tool Design
Within the Promptly tool, sets of Prompt Problems are organized into course repositories from which students can select after logging in. Each Prompt Problem within a course repository consists of a visual representation of a problem -- that is, an image that does not include a textual description of the problem -- and a set of associated test cases that are used to verify the code that is generated by the LLM. | 2307.16364#20 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 20 | To leverage the function call feature of ChatGPT, we treat each API as a special function and feed its API documentation into ChatGPT's function field. In this way, the model understands how to call the API. For each instruction Inst_*, we feed all the sampled APIs S_N^sub to ChatGPT as available functions. To let ChatGPT finish an action sequence, we define two additional functions, i.e., "Finish with Final Answer" and "Finish by Giving Up". The former function has a parameter that corresponds to a detailed final answer to the original instruction, while the latter function is designed for cases where the provided APIs cannot complete the original instruction after multiple API call attempts.
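A minimal sketch of how the two extra finishing functions could be declared in the function-calling schema; the names and field wording here are illustrative rather than the exact ToolBench definitions:

finish_functions = [
    {
        "name": "Finish_with_Final_Answer",
        "description": "Return a detailed final answer to the original instruction.",
        "parameters": {
            "type": "object",
            "properties": {
                "final_answer": {
                    "type": "string",
                    "description": "Detailed final answer to the user's instruction.",
                }
            },
            "required": ["final_answer"],
        },
    },
    {
        "name": "Finish_by_Giving_Up",
        "description": "Give up when the provided APIs cannot complete the instruction.",
        "parameters": {"type": "object", "properties": {}},
    },
]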
Depth First Search-based Decision Tree. In our pilot studies, we find that CoT (Wei et al., 2023) or ReACT (Yao et al., 2022) has inherent limitations: (1) error propagation: a mistaken action may propagate the errors further and cause the model to be trapped in a faulty loop, such as continually calling an API in a wrong way or hallucinating APIs; (2) limited exploration: CoT or ReACT only explores one possible direction, leading to limited exploration of the whole action space. Hence even GPT-4 often fails to find a valid solution path, making annotation difficult. | 2307.16789#20 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
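A sketch of how the two terminal actions described above could be exposed alongside a sampled API through ChatGPT's function-call field is shown below. The schemas are illustrative (the API name and parameters are hypothetical) and are not the exact ToolBench definitions.

```python
# Each sampled API becomes one function schema; two extra functions terminate the action sequence.
functions = [
    {
        "name": "weather_forecast",  # hypothetical sampled RapidAPI endpoint
        "description": "Weather tool: return the forecast for a given city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
    {
        "name": "finish_with_final_answer",
        "description": "Finish the task and return a detailed final answer to the instruction.",
        "parameters": {
            "type": "object",
            "properties": {"final_answer": {"type": "string"}},
            "required": ["final_answer"],
        },
    },
    {
        "name": "finish_by_giving_up",
        "description": "Give up when the provided APIs cannot complete the instruction.",
        "parameters": {"type": "object", "properties": {}},
    },
]
```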
2307.16877 | 20 | Conversational QA We use TopiOCQA (Adlakha et al., 2022) for this task, a dataset for open-domain information-seeking dialogue. At each turn of the conversation, an agent responds to a user's questions based on knowledge from Wikipedia. Each turn has an associated 200-token gold passage from English Wikipedia (indexed in Oct. 2020).
# Instruction-following Models
To evaluate retrieval-augmented instruction-following language models, we present the models with an instruction, followed by the retrieved passages and the query. The prompt template for open-domain QA and multi-hop QA tasks is given in Figure 2, whereas conversational QA differs slightly, replacing the question with conversation history (Figure 3). We consider four instruction-following models that primarily differ based on the type of training data used. We use the same generation parameters for all instruction-following models, described in Appendix A.1.
Dataset | # Questions | # Passages
Natural Questions | 3,610 | 21,015,324
HotpotQA | 7,405 | 5,233,329
TopiOCQA | 2,514 | 25,700,593
Table 1: Statistics for datasets used in this work. We use the validation split from each dataset for our evaluation as the test sets are hidden. | 2307.16877#20 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering |
2307.16364 | 21 | Each set of Prompt Problems for a given course is presented in order, and a student can navigate through these using "Back" and "Next" buttons (see Figure 3). Once a Prompt Problem is selected, the student is shown the visual representation of the problem, and a partial prompt to complete. For problems where the solution is a Python program, this partial prompt begins: "Write a Python program that...", which provides guidance to the student. If the problem requires students to write a single function, then the partial prompt is: "Write a Python function called...". As soon as any text for extending the prompt is entered by the student, the "Click here to ask ChatGPT!" button is enabled. Clicking this button constructs a prompt that is sent to the LLM. This prompt consists of the verbatim text entered by the student, as well as some additional prompting to guide the model to produce only code and no additional explanatory text.
# 4 A TOOL FOR DELIVERING PROMPT PROBLEMS AT SCALE: PROMPTLY
We have developed a web-based tool called Promptly to support one particular variation of Prompt Problems, in which the code generated by the LLM is not editable by the learner (see Figure 2). Other variations of Prompt Problems are possible and we discuss these in Section 6.1. | 2307.16364#21 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators |
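The sketch below illustrates how such a prompt might be assembled from the fixed prefix and the student's text. The exact wording of the additional code-only guidance is not quoted in the paper, so the phrasing here is an assumption.

```python
def build_llm_prompt(student_text: str, requires_function: bool = False) -> str:
    """Combine the partial-prompt prefix with the student's text and code-only guidance."""
    prefix = "Write a Python function called" if requires_function else "Write a Python program that"
    guidance = "Respond with Python code only, without any explanatory text."  # assumed wording
    return f"{prefix} {student_text.strip()}\n{guidance}"

print(build_llm_prompt("asks the user for their name and greets them"))
```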
2307.16789 | 21 | To this end, we propose to construct a decision tree to expand the search space and increase the possibility of finding a valid path. As depicted in Figure 4, our DFSDT allows the model to assess different reasoning paths and choose to either (1) proceed along a promising path or (2) abandon an existing node by calling the "Finish by Giving Up" function and expand a new node. During node expansion, to diversify the child nodes and expand the search space, we prompt ChatGPT with the information of the previously generated nodes and explicitly encourage the model to generate a distinct node. For the searching process, we prefer depth-first search (DFS) instead of breadth-first search (BFS) because the annotation can be finished as long as one valid path is found. Using BFS would cost excessive OpenAI API calls. More details are described in appendix A.8. We perform DFSDT for all the generated instructions and only retain the solution paths that pass. Ultimately, we generate 126,486 (instruction, solution path) pairs, which are used to train ToolLLaMA in § 3.2.
# 3 EXPERIMENTS | 2307.16789#21 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs |
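A schematic sketch of the depth-first search over a decision tree is given below. The model-facing pieces (`propose_next_steps`, `is_solved`) are hypothetical stubs standing in for ChatGPT calls; the real DFSDT additionally shows previously generated sibling nodes to the model so that new children are diverse.

```python
def propose_next_steps(path, n_candidates=2):
    """Stub: ask the model for candidate next API calls given the current path."""
    return []  # e.g. [{"action": "call_some_api", "gave_up": False}, ...]

def is_solved(path):
    """Stub: True once the path ends with a successful 'Finish with Final Answer'."""
    return False

def dfsdt(path=None, max_depth=10):
    path = path or []
    if is_solved(path):
        return path                      # one valid solution path is enough for annotation
    if len(path) >= max_depth:
        return None
    for step in propose_next_steps(path):
        if step.get("gave_up"):          # node chose 'Finish by Giving Up': abandon it
            continue
        found = dfsdt(path + [step], max_depth)
        if found is not None:
            return found                 # depth-first: return as soon as a path succeeds
    return None

print(dfsdt())  # None with the stubs above
```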
2307.16877 | 21 | Please answer the following question given the following passages:
- title: {Passage title}
{Passage text}
- title: {Passage title}
{Passage text}
...
Question: {Question}
Answer:
Figure 2: The prompt template used for open-domain QA and multi-hop QA tasks.
Please answer the following question given the following passages and the conversation history:
- title: {Passage title}
{Passage text}
- title: {Passage title}
{Passage text}
...
User: {Question 1}
Agent: {Answer 1}
...
User: {Question k}
Agent:
Figure 3: Prompt template for conversational QA task. | 2307.16877#21 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering |
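The templates in Figures 2 and 3 can be filled programmatically; a small sketch is below (whitespace details may differ from the authors' exact formatting).

```python
def open_domain_prompt(passages, question):
    """Fill the Figure 2 template with retrieved passages and the question."""
    ctx = "\n".join(f"- title: {p['title']}\n{p['text']}" for p in passages)
    return ("Please answer the following question given the following passages:\n"
            f"{ctx}\nQuestion: {question}\nAnswer:")

def conversational_prompt(passages, history):
    """Fill the Figure 3 template; `history` is a list of (speaker, utterance) turns."""
    ctx = "\n".join(f"- title: {p['title']}\n{p['text']}" for p in passages)
    turns = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in history)
    return ("Please answer the following question given the following passages "
            f"and the conversation history:\n{ctx}\n{turns}\nAgent:")

example = open_domain_prompt([{"title": "Example", "text": "Example passage."}],
                             "What is shown here?")
```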
2307.16364 | 22 | Our concrete implementation of the tool uses React and NodeJS as its key frameworks, and Material design for the styling of UI components. The client-side React implementation is accessible via Firebase Hosting, and the Express (NodeJS) backend is powered by Firebase Functions, operating within a serverless framework. The backend communicates with OpenAI's API and transmits responses to a JobeInABox¹ sandbox which is hosted on an EC2 AWS instance. We explored the use of several specific OpenAI models, including text-davinci-003 and gpt-3.5-turbo. Our current implementation uses
Once the code response is received from the LLM, it is then sent to a sandbox for execution against the set of test cases. We use the publicly available sandbox associated with the CodeRunner tool [19]. If the generated code passes all of the test cases for the prompt problem, then the student receives a success message and is directed to progress to the next problem. If any of the test cases fail, then the first failing test case is shown to the student. At this
¹ github.com/trampgeek/jobeinabox
| 2307.16364#22 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators |
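A sketch of the grading path, from generated code to the first failing test case, is shown below. The sandbox endpoint and payload shape are assumptions based on Jobe's public REST interface rather than Promptly's actual configuration, and the test data would come from the Prompt Problem's definition.

```python
import requests

JOBE_URL = "http://sandbox.example.com/jobe/index.php/restapi/runs"  # assumed Jobe endpoint

def run_in_sandbox(code: str, stdin_text: str) -> str:
    """Execute generated code in a JobeInABox sandbox and return its standard output."""
    run_spec = {"language_id": "python3", "sourcecode": code, "input": stdin_text}
    response = requests.post(JOBE_URL, json={"run_spec": run_spec}, timeout=30)
    return response.json().get("stdout", "")

def first_failing_test(code: str, tests):
    """Return None if all tests pass, otherwise the first failing (input, expected) pair."""
    for stdin_text, expected in tests:
        if expected not in run_in_sandbox(code, stdin_text):
            return stdin_text, expected
    return None
```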
2307.16877 | 22 |
Flan-T5 We use the 11B parameter version of T5 (Raffel et al., 2020), which has been trained by Chung et al. (2022) using the instruction fine-tuning methods proposed by Wei et al. (2022). Flan-T5 is trained on multiple publicly-available instruction-following datasets (Sanh et al., 2022; Wang et al., 2022b; Wei et al., 2022). Together, these datasets encompass more than 1800 tasks, of which over 200 are QA tasks. Out of the three datasets on which we evaluate, the training splits of NQ and HotpotQA are included in Flan-T5's training regime.
GPT-3.5 We use the turbo version of GPT-3.5¹, which is described² as a sibling to the InstructGPT model (Ouyang et al., 2022a). The model's training incorporates user data submitted to the OpenAI API as well as expert annotations; however, the exact distribution of training tasks and datasets is not publicly available.
¹ openai.com/blog/introducing-chatgpt-and-whisper-apis
² openai.com/blog/chatgpt
| 2307.16877#22 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering |
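As an illustration of how an open-source model such as Flan-T5 is queried, the sketch below runs it on a filled prompt with Hugging Face Transformers. The passage is a toy example and the generation settings are placeholders; the paper's exact generation parameters are listed in its appendix.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-xxl"  # the 11B Flan-T5 variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto")

prompt = (
    "Please answer the following question given the following passages:\n"
    "- title: Montreal\nMontreal is the largest city in the province of Quebec.\n"  # toy passage
    "Question: Which province is Montreal in?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True).to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50)  # placeholder generation settings
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```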
2307.16364 | 23 |
[Screenshot of the Promptly interface, annotated with: the visual representation of the problem (an animation showing "Enter your name: Bob" and the output "Hello Bob"), the prompt entry area, the ChatGPT code response, and the execution output reporting that all tests pass.]
Figure 3: Interface layout for a Prompt Problem within the web-based Promptly tool (with figure annotations added in blue).
point, they are able to edit the prompt and resubmit in order to generate a new code response. | 2307.16364#23 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators |
2307.16789 | 23 | # 3.1 PRELIMINARY EXPERIMENTS
ToolEval Considering the API's temporal variability on RapidAPI and the infinite potential solution paths for an instruction, it is infeasible to annotate a fixed ground-truth solution path for each test instruction. Moreover, when comparing different models, it is crucial to ensure they employ the same version of APIs during evaluation. Considering that human evaluation can be time-consuming, we follow AlpacaEval (Li et al., 2023b) to develop an efficient evaluator ToolEval based on ChatGPT, which incorporates two evaluation metrics (details in appendix A.5): (1) Pass Rate: it calculates the proportion of successfully completing an instruction within limited budgets. The metric measures the executability of instructions for an LLM and can be seen as a basic requirement for ideal tool use; and (2) Win Rate: we provide an instruction and two solution paths to the ChatGPT evaluator and obtain its preference (i.e., which one is better). We pre-define a set of criteria for both metrics and these criteria are organized as prompts for our ChatGPT evaluator. We evaluate multiple times based on ChatGPT to improve the reliability. Then we calculate the average results from the evaluator. | 2307.16789#23 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs |
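The metric arithmetic itself is simple; the sketch below aggregates repeated evaluator judgments into pass rate and win rate (the evaluation prompts and criteria fed to the ChatGPT evaluator are not reproduced here).

```python
def pass_rate(outcomes):
    """Fraction of test instructions completed within the budget; `outcomes` are booleans."""
    return sum(outcomes) / len(outcomes)

def win_rate(judgments):
    """Average preference over repeated evaluator runs: 1 if the candidate solution path
    was preferred over the reference path, 0 otherwise."""
    return sum(judgments) / len(judgments)

print(pass_rate([True, False, True, True]))   # 0.75
print(win_rate([1, 1, 0]))                    # ~0.67, averaged to reduce evaluator noise
```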
2307.16877 | 23 |
Alpaca We use the 7B variant of Alpaca (Taori et al., 2023), a fine-tuned version of LLaMA (Touvron et al., 2023a) trained on demonstrations generated using GPT-3 (Brown et al., 2020). The demonstrations were collected using the self-instruct framework (Wang et al., 2022a).
Llama-2 We use the 7B chat version of Llama-2 (Touvron et al., 2023b). The model is initially bootstrapped on similar instruction-following datasets as Flan-T5, followed by fine-tuning for dialogue-style instructions. | 2307.16877#23 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering |
2307.16364 | 24 |
Figure 3 shows a screenshot of the tool interface once the learner has logged in and selected their course. The following instructional message is shown but not included in the screenshot: "Your task is to view the visual representation of the problem and then type a prompt which describes the task sufficiently well for the language model to generate a correct solution in Python. If the code that is generated is not correct, you will see test output below the coding area and you can try again by modifying the prompt!". In the screenshot in Figure 3, the first problem in a sequence of three problems for the course is shown, and has just been solved by the learner.
[Figure 4 example: entering 3 produces "Child"; the categorization is Below 8 → Child, 8-12 → Tween, 13-19 → Teenager, 20 or above → Adult.]
# 4.2 Classroom Evaluation
Prompt Problems are a novel type of activity for learners in an introductory programming course, and so we are interested in understanding their perceptions of the Promptly tool, and on their interactions with it when solving problems. We organise our investigation of the way students use Promptly around the following two research questions:
RQ1: How do students interact with the Promptly tool in terms of overall success rates and on the lengths of the prompts they construct? | 2307.16364#24 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators |
2307.16789 | 24 | Through rigorous testing (details in appendix A.5), we find that ToolEval demonstrates a high agreement of 87.1% in pass rate and 80.3% in win rate with human annotators. This shows that ToolEval can reflect and represent human evaluation to a large extent.
Efficacy of API Retriever The API retriever aims to retrieve relevant APIs to an instruction. We employ Sentence-BERT (Reimers & Gurevych, 2019) to train a dense retriever based on BERT-BASE (Devlin et al., 2019). The API retriever encodes the instruction and API document into two embeddings, and calculates their relevance with embedding similarity. For training, we regard the relevant APIs of each instruction generated in § 2.2 as positive examples and sample a few other
| 2307.16789#24 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs |
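A compact sketch of this kind of dense API retriever using the sentence-transformers library is shown below. The instruction and API document are hypothetical, and in-batch negatives stand in for the explicitly sampled negative APIs described above.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

model = SentenceTransformer("bert-base-uncased")  # BERT-base backbone with default pooling

# One (instruction, relevant API document) pair; other in-batch pairs act as negatives.
train_examples = [
    InputExample(texts=[
        "Find tomorrow's weather forecast for Paris",                       # hypothetical instruction
        "Weather tool, forecast endpoint: returns the forecast for a city"  # hypothetical API document
    ]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
model.fit(train_objectives=[(loader, losses.MultipleNegativesRankingLoss(model))], epochs=1)

# At retrieval time, rank API documents by embedding similarity to the instruction.
score = util.cos_sim(model.encode("Find tomorrow's weather forecast for Paris"),
                     model.encode("Weather tool, forecast endpoint: returns the forecast for a city"))
```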
2307.16877 | 24 | Fine-tuned Generators To compare against instruction-following models, we select FiD (Izacard and Grave, 2021) as our fine-tuned baseline for all three tasks. This encoder-decoder model separately encodes each retrieved passage with the query, resulting in a set of vectors. The decoder then autoregressively generates the answer by attending to the input passages and the previously generated tokens. For NQ and TopiOCQA, we use the publicly available FiD checkpoints, while for HotpotQA, we fine-tune our own variant using the default hyperparameters.
# 4 Correctness w.r.t Information Need
In this section, we investigate if retrieval-augmented instruction-following models can produce responses that satisfy user information needs. We first describe our experimental setup by providing details of the retriever used in each task (§4.1) and the metrics used for evaluating model responses (§4.2). Next, we describe our human evaluation setup and present the results from our analysis (§4.3). Finally, equipped with a better understanding of evaluation metrics, we conduct large-scale evaluation of instruction-following models and present the results (§4.4).
# 4.1 Retrieval | 2307.16877#24 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering |
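For reference, the sketch below builds FiD-style encoder inputs, pairing the question with each retrieved passage before the decoder fuses them. The field labels follow the common FiD convention and may not match the exact checkpoints used here.

```python
def fid_encoder_inputs(question, passages):
    """One encoder input per retrieved passage (encoded independently, fused in the decoder)."""
    return [
        f"question: {question} title: {p['title']} context: {p['text']}"
        for p in passages
    ]

inputs = fid_encoder_inputs("who wrote the opera carmen",
                            [{"title": "Carmen", "text": "Carmen is an opera by Georges Bizet."}])
```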
2307.16364 | 25 |
RQ2: What are students' perceptions of the Promptly tool and on learning programming through constructing prompts for LLMs?
Figure 4: Producing a categorization based on age.
To explore these questions, we deployed Promptly as an ungraded (i.e. optional) laboratory task in a large introductory Python programming course taught at the University of Auckland. Students in this course typically have no prior programming experience. The lab was conducted in the second week of the course, at which point students were writing single-file scripts, without the use of functions, and had learned about standard input and output, arithmetic, and conditional statements.
Three problems were available on Promptly for students to attempt. The first of these was the problem previously illustrated
[Figure 5 example: input "Enter five decimal numbers (separated by spaces): 2.0 3.0 3.0 3.0 4.0"; worked examples such as (8.0+7.5+9.0)/3 ==> 8.17 and (6.5+7.0+6.0)/3 ==> 6.5 show the average of the three middle scores after dropping the highest and lowest.]
Figure 5: Calculating the average of the "middle" values out of a set of five values (using the metaphor of judges scoring an athletic competition, where the highest and lowest values are excluded). | 2307.16364#25 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators |
2307.16789 | 25 |
Method | I1 NDCG@1 | I1 NDCG@5 | I2 NDCG@1 | I2 NDCG@5 | I3 NDCG@1 | I3 NDCG@5 | Average NDCG@1 | Average NDCG@5
BM25 | 18.4 | 19.7 | 12.0 | 11.0 | 25.2 | 20.4 | 18.5 | 17.0
Ada | 57.5 | 58.8 | 36.8 | 30.7 | 54.6 | 46.8 | 49.6 | 45.4
Ours | 84.2 | 89.7 | 68.2 | 77.9 | 81.7 | 87.1 | 78.0 | 84.9

Table 2: Our API retriever v.s. two baselines for three types of instructions (I1, I2, I3). We report NDCG@1 and NDCG@5.

Method | I1 | I2 | I3 | Average
ReACT | 37.8 | 40.6 | 27.6 | 35.3
ReACT@N | 49.4 | 49.4 | 34.6 | 44.5
DFSDT | 58.0 | 70.6 | 62.8 | 63.8

Table 3: Pass rate of different reasoning strategies for three types of instructions (I1, I2, I3) based on ChatGPT. | 2307.16789#25 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs |
2307.16877 | 25 |
For each task, we use a task-specific variant of DPR (Dense Passage Retrieval; Karpukhin et al. 2020) as the retriever. The general architecture of DPR consists of a question and a passage encoder. The dot product between the dense vector representations of the passage and the query is used as a ranking function.
For NQ, we adopt a pre-trained checkpoint from Karpukhin et al. (2020). This checkpoint was trained on four QA datasets – NQ, TriviaQA (Joshi et al., 2017), WebQuestions (Berant et al., 2013), and CuratedTREC (Baudis and Sedivý, 2015). For HotpotQA, we utilize a multi-hop variant of DPR proposed by Xiong et al. (2021). This version retrieves reasoning chains iteratively, selecting subsequent passages based on the query and previously retrieved passages. For TopiOCQA, we utilize the checkpoint provided by Adlakha et al. (2022). This variant of DPR is uniquely suited for conversational QA tasks as it encodes the conversation history in the question encoder. | 2307.16877#25 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering |
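The dot-product ranking described above can be reproduced with the Hugging Face DPR classes. The sketch below uses the generic single-dataset NQ checkpoints for illustration, whereas the paper relies on the task-specific (multi-dataset, multi-hop, and conversational) variants; the example question and passage are toy inputs.

```python
import torch
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizer,
                          DPRQuestionEncoder, DPRQuestionEncoderTokenizer)

q_name = "facebook/dpr-question_encoder-single-nq-base"
c_name = "facebook/dpr-ctx_encoder-single-nq-base"
q_tok, q_enc = DPRQuestionEncoderTokenizer.from_pretrained(q_name), DPRQuestionEncoder.from_pretrained(q_name)
c_tok, c_enc = DPRContextEncoderTokenizer.from_pretrained(c_name), DPRContextEncoder.from_pretrained(c_name)

question = "who wrote the opera carmen"
passage = "Carmen is an opera in four acts by the French composer Georges Bizet."

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
    p_emb = c_enc(**c_tok(passage, return_tensors="pt")).pooler_output
score = (q_emb @ p_emb.T).item()  # dot product used as the ranking function
```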
2307.16364 | 26 | in Figure 3, where the goal was to write a program that would ask the user to enter their name, accept input from the command line, and then display "Hello " followed by the name as standard output. The other two problems are illustrated in Figures 4 and 5. The second problem (Figure 4) required a program that accepts an integer input from the user representing an age, and then prints a textual categorization of that age. The third problem (Figure 5) required a program that accepted five floating point inputs and then calculated the average of the three middle values (i.e. after removing the maximum and minimum values).
For all three problems, the visual representation of the problem included a short animated image (~10 second duration), shown as a command-prompt style window. The command-prompt animation illustrated entry of user input, one key at a time, and then the subsequent display of output from the program. For the second and third problems, the visual representation also included a corresponding image that highlighted several pairs of inputs with their corresponding output. | 2307.16364#26 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators |
2307.16789 | 26 |
APIs as negative examples for contrastive learning. For baselines, we choose BM25 (Robertson et al., 2009) and OpenAI's text-embedding-ada-002. We evaluate the retrieval performance using NDCG (Järvelin & Kekäläinen, 2002). We train and evaluate our model on single-tool instructions (I1), intra-category multi-tool instructions (I2), and intra-collection multi-tool instructions (I3).
As shown in Table 2, our API retriever consistently outperforms baselines across all settings, indicating its feasibility in real-world scenarios with massive APIs. Also, the NDCG score of I1 is generally higher than that of I2 and I3, which means single-tool instruction retrieval is simpler than the multi-tool setting. | 2307.16789#26 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
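The NDCG metric used in the 2307.16789 chunk above to score the API retriever can be sketched as follows; the cutoff k and the binary relevance labels are illustrative assumptions, not the ToolBench evaluation code.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain for a ranked list of relevance scores."""
    return sum(rel / math.log2(rank + 2)            # ranks are 0-based
               for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k):
    """NDCG@k: DCG of the predicted ranking divided by DCG of the ideal ranking."""
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: relevance of the top-5 retrieved APIs for one instruction (1 = relevant).
print(ndcg_at_k([1, 0, 1, 0, 0], k=5))
```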
2307.16877 | 26 | In all of the tasks, the retriever selects passages from the associated Wikipedia dump, as detailed in Section 3.1. The number of retrieved passages provided to instruction-following models and fine-tuned models for each task is provided in Appendix A.2.
# 4.2 Evaluation Metrics
Evaluation in QA usually involves comparing model responses to human-annotated gold answers. The metrics used for this comparison can be divided into two categories:
Lexical Match These metrics score a model response based on its token overlap with the gold standard answer. While some metrics perform bag-of-words matching (e.g., Exact Match (EM), F1), others consider the order of the tokens by n-gram matching such as METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004). | 2307.16877#26 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
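A minimal sketch of the bag-of-words lexical metrics (EM and F1) named in the 2307.16877 chunk above; the SQuAD-style normalization (lowercasing, stripping punctuation and articles) is an assumption about the exact preprocessing.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Harmonic mean of token-level precision and recall against the gold answer."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("John F Kennedy", "John Kennedy"))  # 0.0
print(f1_score("John F Kennedy", "John Kennedy"))     # 0.8
```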
2307.16364 | 27 | In terms of interactions with the tool (RQ1), we calculate, for each of the three problems, the average number of prompt submissions that were required to solve it, the number of students who were successful, and the average number of words used in the submitted prompts. To investigate student perceptions of the activity and tool (RQ2), students were invited to provide feedback on their experience using Promptly. This feedback was not graded, and was given in response to the following prompt: "We would appreciate hearing about your experiences completing the exercises and in particular, how you think the experience of writing prompts may help you to learn programming".
# 5 RESULTS
Our study was conducted in July 2023, and participation by students was optional. A total of 54 students attempted at least one problem on Promptly, which represents approximately 12% of the enrolled students in the course.
# 5.1 Student interactions with Promptly | 2307.16364#27 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
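The per-problem statistics described in the 2307.16364 chunk above (average submissions, students who solved, average prompt length) reduce to a simple aggregation over the submission log; the log format below is hypothetical.

```python
from collections import defaultdict

# Hypothetical log format: (student_id, problem_id, prompt_text, passed)
submissions = [
    ("s1", 1, "Ask for a name and greet the user", True),
    ("s2", 1, "Read a name from input and print Hello plus the name", True),
    ("s1", 3, "Average the three middle values of five numbers", False),
]

def summarize(log):
    """Per problem: average submissions per student, students who solved it, average prompt words."""
    per_problem = defaultdict(lambda: {"attempts": 0, "words": 0, "solved": set(), "students": set()})
    for student, problem, prompt, passed in log:
        stats = per_problem[problem]
        stats["attempts"] += 1
        stats["words"] += len(prompt.split())
        stats["students"].add(student)
        if passed:
            stats["solved"].add(student)
    for problem, stats in sorted(per_problem.items()):
        print(problem,
              round(stats["attempts"] / len(stats["students"]), 2),   # average submissions per student
              len(stats["solved"]),                                   # students who solved
              round(stats["words"] / stats["attempts"], 1))           # average words per prompt
    return per_problem

summarize(submissions)
```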
2307.16789 | 27 | Superiority of DFSDT over ReACT. Before solution path annotation, we validate the efficacy of DFSDT. Based on ChatGPT, we compare DFSDT and ReACT using the pass rate metric. Since DFSDT consumes more OpenAI API calls than ReACT, for a fairer comparison, we also establish a "ReACT@N" baseline, which runs ReACT multiple times until the total cost reaches the same level as DFSDT. Once a valid solution is found by ReACT@N, we deem it a pass.
From Table 3, it can be observed that DFSDT significantly outperforms the two baselines in all scenarios. Since we only retain the passed annotations as training data, given the same budget, DFSDT can annotate more instructions, which makes it a more efficient approach that reduces the total annotation cost. We also find that the performance improvement of DFSDT is more evident for harder instructions (i.e., I2 and I3) than for simpler instructions (I1). This means that by expanding the search space, DFSDT can better solve the difficult, complex instructions that are unanswerable by vanilla ReACT no matter how many times it is run. Involving such "hard examples" in our dataset helps fully elicit tool-use capabilities for these complex scenarios.
3.2 MAIN EXPERIMENTS | 2307.16789#27 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
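The ReACT@N baseline described in the 2307.16789 chunk above can be sketched as a budget-matched retry loop; `react_attempt` and its costs are hypothetical stand-ins rather than the paper's implementation.

```python
import random

def react_attempt(instruction):
    """Hypothetical single ReACT run: returns (passed, api_calls_used)."""
    return random.random() < 0.4, random.randint(3, 8)

def react_at_n(instruction, budget_calls):
    """Re-run ReACT until the call budget (e.g., DFSDT's cost) is exhausted.
    The instruction counts as passed if any individual run succeeds."""
    used = 0
    while used < budget_calls:
        passed, cost = react_attempt(instruction)
        used += cost
        if passed:
            return True
    return False

# Example: compare against a DFSDT run that consumed 20 API calls on this instruction.
print(react_at_n("Find a pizza restaurant and check tomorrow's weather", budget_calls=20))
```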
2307.16877 | 27 | In this work, we also consider Recall: the proportion of tokens in the reference answer that are present in the model response. Recall does not penalize verbose model responses, as long as the response contains the reference answer tokens. Recent works that have evaluated the verbose responses generated by instruction-following models (Liu et al., 2023a; Mallen et al., 2022) have used a similar metric, accuracy, whereby a model's response is considered correct if any reference answer appears as a substring within the model's response. This is a stricter version of recall that cannot handle small variations between reference answer and model response, such as if the reference answer is John Kennedy and the model response is John F Kennedy. To avoid any confusion, we refer to this metric as Recall (S), indicating it as a stricter version of token-level recall. | 2307.16877#27 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
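The token-level Recall and the stricter substring-based Recall (S) defined in the 2307.16877 chunk above can be sketched as follows; the simple whitespace tokenization is an assumption.

```python
from collections import Counter

def _tokens(text):
    return text.lower().split()

def recall(prediction, reference):
    """Fraction of reference-answer tokens that appear in the model response."""
    pred_counts = Counter(_tokens(prediction))
    ref_tokens = _tokens(reference)
    hits = sum(1 for tok in ref_tokens if pred_counts[tok] > 0)
    return hits / len(ref_tokens) if ref_tokens else 0.0

def recall_strict(prediction, references):
    """Recall (S): correct iff any reference answer appears verbatim as a substring."""
    response = prediction.lower()
    return float(any(ref.lower() in response for ref in references))

print(recall("John F Kennedy was the 35th president", "John Kennedy"))           # 1.0
print(recall_strict("John F Kennedy was the 35th president", ["John Kennedy"]))  # 0.0
```

The example reproduces the failure mode mentioned in the chunk: token-level recall credits John F Kennedy against the reference John Kennedy, while the substring-based variant does not.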
2307.16364 | 28 | # 5.1 Student interactions with Promptly
We measured several performance indicators around student use of Promptly, such as prompt lengths and number of submissions. As summarized in Table 1, on average participants submitted 2.70 attempts for problem 1, 2.16 submissions for problem 2, and 6.4 submissions for problem 3. On this basis, problem 3 appeared to be the most difficult for students, and this is further supported by student reflections (which are reported in more detail in Section 5.2), with one student commenting: "The instruction for the third exercise is not clear I don't get it." Listing 2 shows three prompts that were submitted by different students attempting problem 3. Some students found it difficult to infer the goal from the problem representation. For example, the first prompt shown in Listing 2 is an example where the student has incorrectly inferred that values included in the average calculation should be sufficiently close to their predecessors. Trying to generate code for the wrong problem can be frustrating, which may explain the following part of the prompt written by the student: "If the user has not provided numbers that sufficiently meet this criteria, call them an idiot". | 2307.16364#28 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 28 | 3.2 MAIN EXPERIMENTS
ToolLLaMA We fine-tune the LLaMA-2 7B model (Touvron et al., 2023b) using the instruction-solution pairs. The original LLaMA-2 model has a sequence length of 4096, which is not enough under our setting since the API response can be very long. To this end, we use positional interpolation (Chen et al., 2023) to extend the context length to 8192 (training details in Appendix A.3).
Settings Ideally, by scaling the number and diversity of instructions and unique tools in the training data, ToolLLaMA is expected to generalize to new instructions and APIs unseen during training. This is meaningful since users can define customized APIs and expect ToolLLaMA to adapt according to the documentation. To this end, we strive to evaluate the generalization ability of ToolLLaMA at three levels: (1) Inst.: unseen instructions for the same set of tools in the training data, (2) Tool: unseen tools that belong to the same (seen) category of the tools in the training data, and (3) Cat.: unseen tools that belong to a different (unseen) category of tools in the training data. | 2307.16789#28 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
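Positional interpolation, used in the 2307.16789 chunk above to stretch the LLaMA-2 context from 4096 to 8192, rescales the positions fed to the rotary embedding instead of extrapolating them; the sketch below illustrates the idea only and is not the ToolLLaMA training code.

```python
def rope_angles(position, dim, base=10000.0, scale=1.0):
    """Rotary-embedding angles for one position; `scale` < 1 implements
    positional interpolation (e.g., 4096/8192 = 0.5 to double the context)."""
    pos = position * scale
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With scale=0.5, position 8000 is mapped back into the original 0..4096 range.
original = rope_angles(4000, dim=8)
interpolated = rope_angles(8000, dim=8, scale=0.5)
print([round(a, 4) for a in original])
print([round(a, 4) for a in interpolated])   # identical to the angles at position 4000
```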
2307.16877 | 28 | Semantic Similarity Unlike the previous class of metrics that face strictness issues (Kamalloo et al., 2023), semantic similarity-based metrics typically leverage a trained model to predict if the model response is semantically equivalent to the gold answer. BERTScore (Zhang et al., 2020), which we refer to as BertS, is a commonly used metric for text generation that computes precision, recall, and F1 based on token similarity between model response and reference gold answer using contextual BERT embeddings. Furthermore, BEM (BERT matching, Bulian et al. 2022) employs a trained BERT model to evaluate question-answering models by predicting the semantic equivalence based on the question, reference gold answer, and model response. We extend BEM to the conversational QA task by providing the question from the last turn of the conversation as input. Moreover, we also consider an evaluation metric based on prompting LLMs (referred to as GPT3.5-Eval and GPT4-Eval) to act as evaluation agents. In principle, the setup is similar to the one proposed by Kamalloo et al. (2023), however, with a different prompt, as described in Appendix | 2307.16877#28 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
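A sketch of how the semantic-similarity metrics from the 2307.16877 chunk above might be invoked; the `bert_score` call follows that library's public interface as we understand it, and the judge prompt for GPT3.5-Eval/GPT4-Eval is an illustrative assumption rather than the prompt given in the paper's appendix.

```python
# Semantic-similarity scoring sketch. The bert_score call follows the public
# library interface; the judge prompt below is illustrative, not the paper's.
from bert_score import score as bert_score

candidates = ["Yes, he is a member of the band."]
references = ["yes"]

precision, recall, f1 = bert_score(candidates, references, lang="en")
print(f"BertS F1: {f1[0].item():.3f}")

def judge_prompt(question, gold, response):
    """Assumed template for an LLM acting as a correctness judge (GPT3.5/GPT4-Eval)."""
    return (
        "Question: " + question + "\n"
        "Gold answer: " + gold + "\n"
        "Model response: " + response + "\n"
        "Does the response answer the question correctly? Reply 'yes' or 'no'."
    )
```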
2307.16364 | 29 | In the second example in Listing 2, the student has not attempted to provide a prompt that demonstrates they have understood what the problem is asking, but instead they have created a prompt that simply parrots back to the tool the three example test cases shown in the problem description. The student then asks the model: "Can you please replicate this program?". The student submitted this prompt four times in a row, but all attempts were unsuccessful. Finally, the third example in Listing 2 is the shortest successful prompt that was submitted for this problem.
Overall, the average number of words in prompts for each of the problems was 13, 38, and 36, respectively. The number of students that solved the problems was 43, 32, and 19, respectively.
Figures 6, 7 and 8 illustrate, for each of the three problems, trends regarding how the average word count of prompts, and the number of students writing them, change across subsequent submissions. For example, the left most column in Figure 6 shows that 54 students made an initial submission to this task and that on average, the word length of all of these submitted prompts was 15. As students either solve or abandon the problem, fewer students make subsequent submissions. Comparing the three problems, prompt lengths tend to decrease over subsequent submissions for problems 1 and 2, but tend to slightly increase for problem 3. | 2307.16364#29 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 29 | We perform experiments on three scenarios: single-tool instructions (I1), intra-category multi-tool instructions (I2), and intra-collection multi-tool instructions (I3). For I1, we conduct the evaluation for the aforementioned three levels (I1-Inst., I1-Tool, and I1-Cat.); for I2, since the training instructions already involve different tools of the same category, we only perform level 1 and level 3 for the generalization evaluation (I2-Inst. and I2-Cat.); similarly, we only perform level 1 generalization for I3 (I3-Inst.) since it already covers instructions that involve various combinations of tools from different categories (the tools in a RapidAPI collection may come from different RapidAPI categories). For each test instruction, we feed the ground-truth (oracle) APIs S^{sub}_N to each model. This simulates the scenario where the user specifies the API set they prefer.
Baselines We choose two LLaMA variants that have been fine-tuned for general-purpose dialogue, i.e., Vicuna (Chiang et al., 2023) and Alpaca (Taori et al., 2023). We also choose the "teacher model"
| 2307.16789#29 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16364 | 30 | Table 1: Summary of usage of Promptly. For each question, the average number of submissions required to solve the problem is shown, along with the number of students who successfully solved the problem and the average number of words in prompts submitted.
Problem id    Average submissions    Students solved    Average words
1             2.7                    43                 13
2             2.2                    32                 38
3             6.4                    19                 36
[Figure: Q1 - Average Word Count and Number of Students who Submitted; legend: Average Word Count, Number of Students Who Submitted]
Figure 6: The average number of words in each subsequent submission for problem 1 compared to the number of participants that submitted.
[Figure: Q2 - Average Word Count and Number of Students who Submitted; legend: Average Word Count, Number of Students who Submitted]
Figure 7: The average number of words in each subsequent submission for problem 2 compared to the number of participants that submitted.
[Figure: Q3 - Average Word Count and Number of Students who Submitted; legend: Average Word Count, Number of Students who Submitted]
Figure 8: The average number of words in each subsequent submission for problem 3 compared to the number of participants that submitted.
| 2307.16364#30 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
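The quantity plotted in Figures 6-8 of the 2307.16364 chunk above (average prompt length at each submission attempt, together with how many students reached that attempt) can be computed along these lines; the log structure is hypothetical.

```python
from collections import defaultdict

# Hypothetical log: ordered prompts per (student, problem).
attempts = {
    ("s1", 1): ["ask the user for their name and greet them"],
    ("s2", 1): ["print hello", "ask for a name then print Hello followed by it"],
}

def per_attempt_stats(attempts, problem_id):
    """For each attempt index: how many students reached it and their average prompt length."""
    buckets = defaultdict(list)                     # attempt index -> prompt word counts
    for (student, problem), prompts in attempts.items():
        if problem != problem_id:
            continue
        for index, prompt in enumerate(prompts, start=1):
            buckets[index].append(len(prompt.split()))
    return {index: (len(counts), sum(counts) / len(counts))   # (students, avg words)
            for index, counts in sorted(buckets.items())}

print(per_attempt_stats(attempts, problem_id=1))
```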
2307.16789 | 30 | Model ChatGPT Claude-2 Text-Davinci-003 GPT4 Method ReACT DFSDT ReACT DFSDT ReACT DFSDT ReACT DFSDT I1-Inst. Pass Win 41.5 - 60.5 54.5 31.0 5.5 38.0 20.5 28.5 12.0 40.3 43.5 60.0 53.5 60.0 67.5 I1-Tool Pass Win 44.0 - 62.0 65.0 27.8 3.5 44.3 31.0 35.3 20.0 43.8 44.0 58.8 50.0 67.8 71.5 I1-Cat. Pass Win 44.5 - 57.3 60.5 33.8 5.5 43.3 18.5 31.0 20.0 46.8 46.0 63.5 53.5 66.5 67.0 I2-Inst. Pass Win 42.5 - 72.0 75.0 35.0 6.0 36.8 17.0 29.8 8.5 40.5 37.0 65.8 67.0 79.5 73.3 I2-Cat. Pass Win 46.5 - 71.5 64.8 31.5 6.0 33.5 20.5 29.8 14.5 43.3 42.0 60.3 72.0 | 2307.16789#30 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 30 | # 4.3 Human Evaluation
We conduct a human evaluation on a subset of responses generated by three instruction-following models (GPT-3.5, Flan-T5, and Alpaca) to establish a basis for comparing evaluation metrics. Specifically, we focus on cases where retrieved passages provided to the model include the gold passage. Therefore, any inaccuracies in the response can be attributed to the model's failures, rather than inaccurate retrieval. For every task, we collect annotations for 100 samples.
In our evaluation setup, the annotator is presented with the question or conversation history, the reference answer, and the anonymized model response. The annotator's task is to assess if the model response is correct, i.e. it satisfies the information need underlying the question. For each of the 100 samples, we collect annotations for three instruction-following models, resulting in 900 labeling tasks. Each task is completed by two different annotators (authors of the paper). The inter-annotator agreement achieved was 92.7%. In instances where the annotators disagreed, a third annotation is collected and a majority vote is taken.
Figure 4: Failure cases of F1 metric. More Elaborate Answers is the most common failure sub-category, followed by Open-ended Questions. | 2307.16877#30 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
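The inter-annotator agreement and majority-vote resolution described in the 2307.16877 chunk above amount to simple computations over paired labels; the toy annotations below are hypothetical.

```python
from collections import Counter

# Hypothetical labels: for each sample, the correctness judgments of two annotators.
pairs = [(True, True), (True, False), (False, False), (True, True)]

# Fraction of samples where both annotators agree.
agreement = sum(a == b for a, b in pairs) / len(pairs)
print(f"inter-annotator agreement: {agreement:.1%}")        # 75.0% for this toy data

def resolve(label_a, label_b, label_c=None):
    """Majority vote: a third judgment breaks ties when the first two disagree."""
    votes = [label_a, label_b] + ([label_c] if label_c is not None else [])
    return Counter(votes).most_common(1)[0][0]

print(resolve(True, False, label_c=True))   # -> True
```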
2307.16364 | 31 | Figure 8: The average number of words in each subsequent submission for problem 3 compared to the number of participants that submitted.
Listing 2 Three student-submitted prompts for Problem 3 Misinterpreting the problem: Write me a Python program that does the following: 1. Prompts the user to enter five decimal numbers (1dp) between 1.0 and 10.0 separated by spaces. 2. Chooses three of these numbers using the following rules: a number chosen be different from the previously chosen numbers and each subsequently chosen value must be within 0.5 of its predecessor. If the user has not provided numbers that sufficiently meet this criteria, call them an idiot and prompt them for another five values. 3. Find the average of these numbers and round the result to 2dp. Precede this result with the numbers chosen. | 2307.16364#31 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 31 | 46.5 - 71.5 64.8 31.5 6.0 33.5 20.5 29.8 14.5 43.3 42.0 60.3 72.0 63.3 77.5 I3-Inst. Pass Win 22.0 - 69.0 62.0 47.5 14.0 65.0 28.0 45.0 24.0 63.0 46.0 78.0 47.0 84.0 71.0 Vicuna Alpaca ToolLLaMA ReACT & DFSDT ReACT & DFSDT ReACT DFSDT DFSDT-Retriever 0.0 0.0 25.0 57.0 64.0 0.0 0.0 45.0 55.0 62.3 0.0 0.0 29.0 61.0 64.0 0.0 0.0 42.0 55.3 59.0 0.0 0.0 33.0 62.0 60.5 0.0 0.0 47.5 54.5 55.0 0.0 0.0 30.5 77.0 81.5 0.0 0.0 50.8 68.5 68.5 0.0 0.0 31.5 77.0 68.5 0.0 0.0 41.8 58.0 60.8 0.0 0.0 25.0 66.0 65.0 | 2307.16789#31 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 31 | Figure 4: Failure cases of F1 metric. More Elaborate Answers is the most common failure sub-category, followed by Open-ended Questions.
notation is collected and a majority vote is taken.
The results of this human evaluation are presented in Table 8 (Appendix D), along with scores of automated metrics on this subset. Traditional QA evaluation metrics like EM and F1 tend to score model responses much lower than human assessments, highlighting the well-known problem of strictness in lexical matching (Min et al., 2021; Kamalloo et al., 2023).
Qualitative Analysis of Failure Cases For a more granular understanding of the shortcomings of traditional QA metrics, we analyze the models' responses that have less than or equal to 0.3 F1 score, but were deemed correct according to the human evaluations. This resulted in 296 samples out of 900. Our classification of errors is adapted from Kamalloo et al. (2023) (which itself was based on Min et al. 2021), modified to focus on instruction-following models. Specifically, we exclude some error classes relevant to fine-tuned models and include some classes for instruction-following models. The resultant categories are: | 2307.16877#31 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
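The failure-case selection in the 2307.16877 chunk above (responses judged correct by humans but scoring F1 <= 0.3) is a filter over paired scores; the record layout below is hypothetical.

```python
# Hypothetical records: (question, model_response, f1_score, human_says_correct)
records = [
    ("Is he in the band?", "Yes, he is a member of the band.", 0.2, True),
    ("Who wrote it?", "Shakespeare", 1.0, True),
    ("When was it built?", "In the summer", 0.1, False),
]

# Keep responses the metric penalized even though humans judged them correct.
failure_cases = [r for r in records if r[2] <= 0.3 and r[3]]
print(len(failure_cases), "human-correct responses with F1 <= 0.3")
for question, response, f1, _ in failure_cases:
    print(question, "->", response, f"(F1={f1})")
```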
2307.16364 | 32 | Parroting the tests: A Python program requests the user "enter five decimal numbers (separated by spaces)". In the first example the user inputs the five numbers 2.0 3.0 3.0 3.0 4.0 to which the program outputs 3.0. In the second example the user inputs the five numbers 8.0 9.5 7.5 6.0 9.0 to which the program outputs 8.17 . In the third example the user inputs the five numbers 4.0 6.5 8.0 7.0 6.0 to which the program outputs 6.5. Can you please replicate this program?
Successful: Write me a Python program that takes five decimal number separated by spaces, and outputs the average of the 3 median numbers as a decimal rounded to 2dp.
# 5.2 Student reflections on Promptly
We analyzed feedback from 58 students who provided a response to the reflection question. This is slightly greater than the number of students who used Promptly, but as we report below, some students indicated a resistance to using LLMs for generating code and thus responded to the reflection statement without using the tool. We report the main themes that emerged from our analysis below. | 2307.16364#32 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16877 | 32 | • Semantic Equivalence: Here, the model response is semantically similar to the reference answer. Sub-categories include Multinominal entities, e.g., John Kennedy and John F Kennedy, Synonymous Answers, e.g., from India and Indian nationality, and More Elaborate Answers, e.g., yes and yes, he is member of the band.
• Symbolic Equivalence: This primarily refers to different possible representations of numeric quantities, e.g. four seasons and 4 seasons, or 3000 BC and Early Dynastic Period.
• Intrinsic Ambiguity in Questions: This refers to queries with multiple valid interpretations, leading to a range of correct answers, e.g. Who is command sergeant major of the army? could be seeking the person's identity or a description of the position itself. This category also includes cases where the correct answer is dependent on the specific point in time being referenced, e.g. Who won NFL football coach of the year? | 2307.16877#32 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 33 | 5.2.1 Exposure to new coding constructs. Given that our evaluation was conducted early in the course, the code that was generated would sometimes contain features that were unfamiliar to students. For the most part, students commented positively on this aspect, and a theme emerged around the way the tool introduced students to new programming constructs and techniques. As one student commented: "These exercises introduced me to new functions... so this method of writing code could help increase my programming vocabulary". Another response aligning with this theme was: "Honestly that was pretty cool, I like the way it works and how we can see the code afterwards. Even if we don't know how to code some of the features, knowing the steps and then seeing how it's meant to be done is very helpful".
One student commented on the value of seeing both the structure and syntax of the code generated by the LLM: "The main benefit I gained from using ChatGPT in this environment was observing the logical structure of the programs that it created to fulfill the established requirements. In all three cases it used functions that I was previously unaware of, allowing me to gain an understanding of how they could be used and the correct syntax for implementing them."
5.2.2 Enhancement of computational thinking. We also found students valued the way in which the tool challenged them to think | 2307.16364#33 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 33 | Table 4: Main experiments of ToolBench. Win rate is calculated by comparing each model with ChatGPT-ReACT. A win rate higher than 50% means the model performs better than ChatGPT-ReACT. Apart from ToolLLaMA-DFSDT-Retriever, all methods use the oracle API retriever (i.e., ground truth API).
ChatGPT, Text-Davinci-003, GPT-4, and Claude-2 as baselines, and apply both DFSDT and ReACT to them. When calculating the win rate, each model is compared with ChatGPT-ReACT.
The results are placed in Table 4, from which we derive that:
1. Although we conduct prompt engineering extensively, both Vicuna and Alpaca fail to pass any instruction (pass rate & win rate = 0), which means their instruction-following abilities do not cover the tool-use domain. This underscores the deficiency of current instruction tuning attempts, which largely focus on language skills;
2. For all LLMs, using DFSDT significantly outperforms ReACT in both pass rate and win rate. Notably, ChatGPT +DFSDT surpasses GPT-4+ReACT in pass rate and performs comparably in win rate. This underscores the superiority of DFSDT over ReACT in decision-making; | 2307.16789#33 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 33 | • Granularity Discrepancies: The level of specificity in the model's response may not align with that in the reference answer. This discrepancy in granularity can be Temporal, e.g., August 25, 1939 and 1939, or Spatial, e.g., for the question Where's the tv show The Crossing filmed?, Vancouver and British Columbia, Canada are both correct answers.
• Incomplete Reference Answers: These cases occur when the reference answers, despite their number, fail to cover the entire spectrum of correct responses. We break this category into two types: List of named entities, which includes questions like the cast of a movie or members of the band, and Open-ended questions, which includes questions that can be answered in multiple different ways, not all of which are captured by the reference answers, e.g., What was the Watergate scandal?
• Enumeration of Reference Answers: This error happens especially in NQ samples, where the question asks for a list of entities (e.g., all states of a country), but each reference answer includes only one entity (e.g., a single state). The instruction-following models often generate all the entities in their response, which results in low overlap with each individual reference answer. | 2307.16877#33 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 34 | 5.2.2 Enhancement of computational thinking. We also found students valued the way in which the tool challenged them to think
carefully about how to solve a problem and communicate precise specifications: "You would have to understand what the question is asking to be able to give a working prompt to the AI, so it seems very good for making you think properly about the question". Writing clear prompts can involve communicating problem solving steps, and thus draws on computational thinking skills. This is illustrated well by the quote: "Writing prompts helps learn programming by making you have to think about what the task at hand and to split it up into segments of which you need to describe to the AI... I would say that this would help students with the act of breaking down a big project into little tasks.". Another similar response highlighted how the tool helped in visualizing the problem solving process: "Writing the prompts can help you with visualizing the steps required in the programming". | 2307.16364#34 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 34 | 3. When using DFSDT, ToolLLaMA performs much better than Text-Davinci-003 and Claude-2, and achieves a result almost on par with ChatGPT (the teacher model). In general, despite generalizing to unseen instructions and tools, ToolLLaMA+DFSDT demonstrates competitive generalization performance in all scenarios, achieving a pass rate second to GPT-4+DFSDT.
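As a rough illustration of how the two Table 4 metrics aggregate per-instruction outcomes (our own sketch, not ToolEval's implementation; the treatment of ties as half a win is an assumption):

```python
# Illustration only: aggregate per-instruction evaluation results.
def pass_rate(completed):
    """completed: booleans, True if an instruction was fulfilled within the budget."""
    return 100.0 * sum(completed) / len(completed)

def win_rate(preferences):
    """preferences: per-instruction judgments versus ChatGPT-ReACT, each one of
    'win', 'tie', or 'loss'; above 50% means the model is preferred on average."""
    credit = {"win": 1.0, "tie": 0.5, "loss": 0.0}  # tie handling is an assumption
    return 100.0 * sum(credit[p] for p in preferences) / len(preferences)
```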
Overall, these results demonstrate that ToolBench can sufficiently elicit the tool-use capabilities within LLMs and empower them to skillfully master even unseen APIs for various instructions. | 2307.16789#34 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 34 | • Satisfactory Subset Responses: This category pertains to instances where the model's response, although containing less information than the reference answer, still provides an acceptable response to the user's query. For instance, for the question "name some of her
Error Type: More Elaborate Answers Dataset: HotpotQA
Question: At what theater is the composer and lyricist for the musical Big Fish a residential artist? Ref. Answer: Ars Nova Theater Response (Alpaca): The composer and lyricist for the musical Big Fish, Andrew Lippa, is a residential artist at the Ars Nova Theater in New York City.
Scores: F1: 26.0, Recall: 100.0, METEOR: 54.5, Rouge-L: 22.2, BertS (F1): 85.4, BEM: 98.9, GPT4-Eval: 100.0
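The example above can be made concrete with a rough token-overlap scorer (a simplified sketch, not the exact normalization used in the paper): for a short reference such as "Ars Nova Theater" embedded in a long but correct response, recall stays at 1.0 while F1 collapses because of the extra, harmless tokens.

```python
# Simplified token-overlap F1 and recall (illustrative sketch only).
import re
from collections import Counter

def _tokens(text):
    return re.findall(r"\w+", text.lower())

def f1_and_recall(reference, response):
    ref, res = Counter(_tokens(reference)), Counter(_tokens(response))
    overlap = sum((ref & res).values())
    if overlap == 0:
        return 0.0, 0.0
    precision, recall = overlap / sum(res.values()), overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall), recall
```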
Error Type: Open-ended Questions Dataset: TopiOCQA
Question: what was the watergate scandal? Ref. Answer: It was an array of clandestine and often illegal activities undertaken by members of the Nixon administration. Response (Flan-T5): The Watergate scandal was a political scandal in the United States involving the administration of U.S. President Richard Nixon from 1972 to 1974 that led to Nixon's resignation. | 2307.16877#34 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 35 | 5.2.3 General Positive Feedback. Around one-third of the participants expressed generally positive sentiments about the Promptly tool, and this was the most common theme overall. Representative comments include: "I think that is was a good for practicing asking AI" and "Asking AI to write promps help a lot in programming!!!". One student who viewed the tool positively also expressed some concern about the power of the underlying code-generating models: "It is absolutely a great tool, however in this regard it's kind of terrifying. It was able to process my instructions fluently."
Some students also commented more generally about the importance of learning how to use generative AI responsibly in computing courses, and the value of having explicit approaches for teaching this. For example, "I think it is very smart ... to work on a way to integrate and teach a responsible usage of machine learning!" and "I think it would be very useful to have a section of a course that teaches how to use prompts properly and provide a better understanding of AI tools." | 2307.16364#35 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 35 | Overall, these results demonstrate that ToolBench can sufficiently elicit the tool-use capabilities within LLMs and empower them to skillfully master even unseen APIs for various instructions.
Integrating API Retriever with ToolLLaMA. In real-world scenarios, asking users to manually recommend APIs from a large pool may not be practical. To emulate this practical setting and test the efficiency of our API retriever, we feed the top 5 APIs (instead of the ground truth APIs S_N^sub) recommended by our API retriever to ToolLLaMA. As shown in Table 4, using retrieved APIs even improves the performance (both pass rate and win rate) compared to the ground truth API set. This is because many APIs in the ground truth API set can be replaced by other similar APIs with better functionalities, which our API retriever can successfully identify. In other words, our retriever expands the search space of relevant APIs and finds more appropriate ones for the current instruction. It provides robust evidence of the excellent ability of our API retriever to retrieve relevant APIs, especially considering the vast pool (16,000+) of APIs from which our API retriever selects.
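For intuition, the sketch below shows the kind of dense retrieval step described above: scoring API documentation against an instruction with an off-the-shelf Sentence-BERT encoder and keeping the top five. The checkpoint name and function are placeholders; this is not the retriever trained in the paper.

```python
# Illustrative dense API retrieval with a stand-in encoder (not the paper's trained retriever).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint

def recommend_apis(instruction, api_docs, k=5):
    """api_docs: list of strings, each documenting one candidate API."""
    query = encoder.encode(instruction, convert_to_tensor=True)
    docs = encoder.encode(api_docs, convert_to_tensor=True)
    scores = util.cos_sim(query, docs)[0]
    top = scores.topk(min(k, len(api_docs)))
    return [(api_docs[int(i)], float(s)) for s, i in zip(top.values, top.indices)]
```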
3.3 OUT-OF-DISTRIBUTION (OOD) GENERALIZATION TO APIBENCH (PATIL ET AL., 2023) | 2307.16789#35 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 35 | Scores: F1: 21.8, Recall: 26.7, METEOR: 15.6, Rouge-L: 15.4, BertS (F1): 86.1, BEM: 73.9, GPT4-Eval: 100.0
Error Type: Enumeration of reference answers Dataset: NQ
Question: all of the states in the northeast region Ref. Answers: 1. New Hampshire, 2. Maine, 3. Rhode Island, 4. Pennsylvania, 5. Vermont, 6. New York, 7. Connecticut, 8. New Jersey, 9. Massachusetts Response (GPT-3.5): The states in the northeast region include Maine, New York, New Jersey, Vermont, Massachusetts, Rhode Island, Connecticut, New Hampshire, and Pennsylvania.
Scores: F1: 20.0, Recall: 100.0, METEOR: 39.0, Rouge-L: 17.4, BertS (F1): 82.7, BEM: 98.9, GPT4-Eval: 100.0
Figure 5: Qualitative examples of cases where F1 fails, along with scores from other evaluation metrics.
songs", the reference answer might list 5-6 song names, while the model response includes only 1-2. This situation is predominantly ob- served in the TopiOCQA dataset. | 2307.16877#35 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 36 | 5.2.4 Resistance and negative feedback. Although less common, we found that some students appeared resistant to using the tool, citing fears about potential impacts on their creativity. One student expressed: "I don't have much intention of using ChatGPT at the moment as I major in design and I have a strong belief in personal creativity". Another was more blunt: "I refuse to use chatGPT for programming". Over-reliance on AI generated outputs is a commonly cited concern within the education community, and several students commented on this aspect, including: "it is critical for students to learn the ability to write code independently rather than relying only on AI-generated answers" and "I feel like it is too tempting of a tool to use through the labs and not learn and develop these skills yourself". Further exploring these concerns would be an interesting avenue for future work.
Overall, while most students reported finding Promptly beneficial, particularly for exposure to new programming constructs and for strengthening computational thinking skills when communicating a problem, a minority of students were hesitant about the use of generative AI tools for learning programming.
# 6 DISCUSSION
Beginning typically with very small problems in CS1 and proceeding on to semester- or year-long applied problems in software engineering, CS curricula expose students to both top-down and
| 2307.16364#36 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 36 | 3.3 OUT-OF-DISTRIBUTION (OOD) GENERALIZATION TO APIBENCH (PATIL ET AL., 2023)
Settings. We further extend ToolLLaMA to an OOD dataset APIBench to validate its generalization ability. To assess the generalization ability of ToolLLaMA in these new domains, we equip ToolLLaMA with two retrievers: our trained API retriever and the oracle retriever. We evaluate three domains of APIBench, i.e., TorchHub, TensorHub, and HuggingFace. We compare ToolLLaMA with Gorilla, a LLaMA-7B model fine-tuned using the training data of APIBench. Following the original paper, we adopt two official settings for Gorilla: the zero-shot setting (ZS) and the retrieval-aware setting (RS). The latter (RS) means the retrieved APIs are sent to the model as part of the prompts, while the former (ZS) does not incorporate the APIs in the prompts when training the model. We adopt the official evaluation metric and report the AST accuracy along with the hallucination rates.
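As a loose illustration of the AST-style check mentioned above (a much-simplified stand-in for APIBench's official sub-tree matching; the helper name is our own), one can parse the generated snippet and test whether it actually calls the expected API:

```python
# Simplified AST check (illustration only, not APIBench's official scorer).
import ast

def calls_expected_api(generated_code, expected_api_name):
    """True if some call in the generated snippet resolves to expected_api_name."""
    try:
        tree = ast.parse(generated_code)
    except SyntaxError:
        return False
    return any(
        isinstance(node, ast.Call) and expected_api_name in ast.unparse(node.func)
        for node in ast.walk(tree)
    )
```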
| 2307.16789#36 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs),
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 36 | songs", the reference answer might list 5-6 song names, while the model response includes only 1-2. This situation is predominantly observed in the TopiOCQA dataset.
Figure 4 displays the distribution of error cases based on our classification. A significant portion of the errors (55.63%) falls under the More Elaborate Answers category. This suggests that traditional QA metrics often penalize models unjustly due to the verbose nature of their responses. The next most common sub-category, Open-ended Questions (13.99%), suggests that models are occasionally penalized for providing correct answers that were not included in the reference responses. The percentage share and exact count of all categories
Metric              Spearman ρ   Kendall τ
EM                  26.466       26.168
F1                  49.323       41.096
Precision           44.846       37.772
Recall              62.72        57.142
Recall (S)          54.072       53.464
METEOR              51.57        41.886
Rouge-L             46.993       39.001
BertS (F1)          36.862       29.691
BertS (Precision)   24.379       19.519
BertS (Recall)      42.886       34.58
BEM                 53.649       43.727
GPT3.5-Eval         63.514       62.801
GPT4-Eval           70.152       69.363
| 2307.16877#36 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 37 | Beginning typically with very small problems in CS1 and proceeding on to semester- or year-long applied problems in software engineering, CS curricula expose students to both top-down and
bottom-up approaches. The problems presented in Promptly can be considered to be "bottom-up", since students start with input-output pairs and have to infer a problem description. And yet, the prompts that the students write can be considered "top-down" since the system requires students to abstract the meaning of the input-output pairs into English sentences and not code. Students need to understand the problem before they can correctly generate prompts that cause the LLM to produce correct code. | 2307.16364#37 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 37 |
Method                      HuggingFace            TorchHub               TensorHub
                            Hallu. (↓)   AST (↑)   Hallu. (↓)   AST (↑)   Hallu. (↓)   AST (↑)
ToolLLaMA + Our Retriever   10.60        16.77     15.70        51.16     6.48         40.59
Gorilla-ZS + BM25           46.90        10.51     17.20        44.62     20.58        34.31
Gorilla-RS + BM25           6.42         15.71     5.91         50.00     2.77         41.90
ToolLLaMA + Oracle          8.66         88.80     14.12        85.88     7.44         88.62
Gorilla-ZS + Oracle         52.88        44.36     39.25        59.14     12.99        83.21
Gorilla-RS + Oracle         6.97         89.27     6.99         93.01     2.04         94.16
Table 5: OOD generalization experiments on APIBench. For the Gorilla entries, ZS / RS means that Gorilla was trained in a zero-shot / retrieval-aware setting on APIBench. We report hallucination rate and AST accuracy. | 2307.16789#37 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 37 | Table 2: Correlation of several lexical matching and semantic similarity evaluation metrics with human judgments for correctness w.r.t. information need. GPT4-Eval achieves the highest correlation overall. Recall is the highest correlated among all lexical overlap metrics.
are reported in Table 7 (Appendix C).
In Figure 5, we provide qualitative examples of common failure modes, along with their associated evaluation metric scores. Recall appears to be an effective fix for sub-categories such as More Elaborate Answers and Enumeration of Reference Answers. However, both lexical match based and semantic similarity based metrics struggle with Open-ended Questions. Although GPT4-Eval appears to be relatively robust based on examples in Figure 5, this metric has some failures, with the most common failure sub-category being Open-ended Questions. The complete distribution of failure cases according to sub-categories is reported in Figure 10, along with qualitative examples in Figure 11 (Appendix C).
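The correlations in Table 2 above can be computed, in spirit, by rank-correlating each metric's per-response scores with the human correctness judgments; a minimal sketch using SciPy (our own illustration) is:

```python
# Minimal sketch of metric-vs-human agreement (Spearman rho and Kendall tau).
from scipy.stats import spearmanr, kendalltau

def agreement(metric_scores, human_judgments):
    """metric_scores: automatic scores per response; human_judgments: 1/0 labels."""
    rho, _ = spearmanr(metric_scores, human_judgments)
    tau, _ = kendalltau(metric_scores, human_judgments)
    return 100 * rho, 100 * tau
```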
Overall, the results of our human evaluation and analysis indicate that traditional metrics such as EM and F1, typically used in the literature for fine-tuned QA models, are not well-aligned with the verbose nature of instruction-following models. To determine more suitable metrics for these models, we analyze the correlation of each metric with human assessments. | 2307.16877#37 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 38 | In contrast to other tools students use, such as compilers, learning to use LLMs presents unique challenges. Although the literature continues to document the difficulty students have with compiler error messages, one thing we have never worried about teaching students is that compilers might sometimes just get it wrong. In contrast, at this point in time, LLMs sometimes generate answers that are syntactically and semantically incorrect. Deliberate exposure to the inconsistencies of outputs generated by LLMs can serve to highlight the importance of a "critical eye" in evaluating generated code and may help to moderate the potential for over-reliance on these tools. The use of LLMs to generate code from prompts places the responsibility for ensuring correctness on the user, so adequate testing becomes more important. Future tools that focus on prompt generation would benefit from the integration of user-generated tests to ensure students are explicit about the program requirements when they create prompts. Tasking students with generating test cases (before writing code) has previously been studied as an approach to help improve problem understanding [8, 25]. | 2307.16364#38 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 38 | Results The results are shown in Table 5. In general, ToolLLaMA achieves remarkable OOD generalization performance on all three datasets, despite being trained on a completely different API domain and instruction domain. Specifically, ToolLLaMA+our API retriever outperforms Gorilla+BM25 from both training settings (ZS / RS) in terms of AST accuracy on HuggingFace and TorchHub. With the same oracle retriever, ToolLLaMA is consistently superior when compared to Gorilla-ZS. It should be noted that the Gorilla model cannot be generalized to our ToolBench dataset due to our more complex settings, such as multi-tool use and multi-step reasoning.
# 4 RELATED WORK | 2307.16789#38 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
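The ToolLLM chunk above compares its neural API retriever against BM25. As a hedged, illustrative sketch (not the released ToolBench retriever), dense retrieval over API descriptions can be approximated with an off-the-shelf Sentence-Transformers encoder; the model name and API descriptions below are assumptions made only for this example.

```python
# Illustrative dense API retrieval: rank API descriptions by cosine similarity
# to an instruction. Generic sketch, not the paper's trained retriever.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed general-purpose encoder

api_descriptions = [                                # placeholder API descriptions
    "Get the current weather for a given city",
    "Search for flights between two airports on a date",
    "Translate a text string between two languages",
]
instruction = "Find me a flight from Boston to Tokyo next Friday"

api_embeddings = encoder.encode(api_descriptions, convert_to_tensor=True)
query_embedding = encoder.encode(instruction, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, api_embeddings)[0]

best_idx = int(scores.argmax())
print(f"Top API: {api_descriptions[best_idx]} (score={float(scores[best_idx]):.2f})")
```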
2307.16364 | 39 | It is worth noting that our tool does not provide instruction for students about how to create prompts effectively. It merely requires them to complete a task that involves prompt creation. This is aligned with most other Automated Assessment Tools that provide assessment tasks to support learning, but little explicit instruction [15]. Neither the students in our pilot study nor those in our classroom evaluation of the tool were taught prompt creation, so currently we have focused on students' intuitions around prompt generation. Future work will explore how to more directly teach students to generate prompts in structured ways to determine if such instruction positively impacts their performance using tools that assess prompt generation.
Although the current system evaluates prompt effectiveness in producing correct programs, it does not evaluate the efficiency of the prompts. Our unit tests consider only whether the given inputs are translated to the expected outputs. A prompt could include irrelevant words and generate irrelevant code constructs, and as long as it still translates the given inputs to the expected outputs, our system will treat the task as completed successfully. Future work must address how to go beyond effective prompts to efficient (and effective) prompts.
# 6.1 Variations | 2307.16364#39 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
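The Promptly chunk above explains that the tool's unit tests only check whether the generated code maps the given inputs to the expected outputs. A minimal sketch of such a check is shown below; the code string and test cases are invented stand-ins, not Promptly's actual implementation.

```python
# Sketch: judge a prompt solely by whether the code it produced passes I/O tests.
# `llm_generated_code` stands in for whatever the model returned for a prompt.
llm_generated_code = """
def double_values(values):
    return [2 * v for v in values]
"""

test_cases = [(([1, 2, 3],), [2, 4, 6]), (([],), []), (([0, -5],), [0, -10])]

namespace = {}
exec(llm_generated_code, namespace)        # define the function in a scratch namespace
func = namespace["double_values"]

passed = all(func(*args) == expected for args, expected in test_cases)
print("Prompt accepted" if passed else "Prompt rejected: some tests failed")
```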
2307.16789 | 39 | # 4 RELATED WORK
Tool Learning Recent studies have shed light on the burgeoning capabilities of LLMs in mastering tools and making decisions within complex environments (Vemprala et al., 2023; Nakano et al., 2021; Qin et al., 2023a; Shen et al., 2023; Wu et al., 2023; Schick et al., 2023; Hao et al., 2023; Qian et al., 2023; Song et al., 2023; Zhuang et al., 2023; Gao et al., 2023). Gaining access to external tools endows LLMs with real-time factual knowledge (Yang et al., 2023), multimodal functionalities (Gupta & Kembhavi, 2023), and specialized skills in vertical domains (Jin et al., 2023). However, open-source LLMs still lag far behind SOTA LLMs in tool use, and how tool-use ability is acquired by SOTA LLMs remains unclear. In this paper, we aim to bridge this gap and fathom the underlying mechanism. | 2307.16789#39 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 39 | Notably, GPT4-eval has the highest agreement with human judgments, with 70.15 Spearman cor
[Table 3 data, reconstructed from the flattened extraction; columns: EM, F1, Recall, METEOR, Rouge-L, BertS (F1), BEM]
NQ        FiD      46.57  53.93  54.45  42.94  54.33  92.57  58.81
NQ        GPT-3.5   1.27  15.12  58.56  25.68  14.57  83.08  69.45
NQ        Flan-T5  41.16  50.62  54.03  40.80  51.02  91.77  58.74
NQ        Alpaca    8.78  20.30  46.23  23.17  20.61  84.67  55.97
NQ        Llama-2   0.61  11.85  52.37  21.16  11.38  82.58  62.30
HotpotQA  FiD      48.43  60.16  60.55  46.03  60.18  93.02  67.94
HotpotQA  GPT-3.5   5.63  22.16  66.77  31.56  21.67  84.16
HotpotQA  Flan-T5  58.12  71.14  71.28  53.44  71.16  94.37
HotpotQA  Alpaca   16.25  33.54  56.76  33.23  33.50  86.88
HotpotQA  Llama-2   1.39  15.91  67.55  27.48  15.23  83.08
[remaining BEM values and the TopiOCQA rows are cut off at the chunk boundary] | 2307.16877#39 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
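The row above carries the flattened Table 3 scores (EM, F1, Recall, and others). As a hedged sketch of how the token-overlap variants differ on a verbose answer, here is a simplified implementation; standard QA evaluation additionally normalizes case, punctuation, and articles, which is omitted here.

```python
# Simplified token-overlap EM / precision / recall / F1 (no answer normalization).
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip().lower() == reference.strip().lower())

def overlap_scores(prediction: str, reference: str):
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if common == 0:
        return 0.0, 0.0, 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)        # verbosity does not hurt recall
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# A verbose but correct response scores 0 on EM and low on F1, yet 1.0 on recall.
print(exact_match("The capital of France is Paris", "Paris"))     # 0.0
print(overlap_scores("The capital of France is Paris", "Paris"))  # (~0.17, 1.0, ~0.29)
```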
2307.16364 | 40 | # 6.1 Variations
Prompt Problems are a class of problems where learners generate prompts that are given to LLMs to produce code. There are various ways that such problems can be implemented, and several considerations for designing them. Our tool currently makes certain implementation trade-offs. It does not allow dialogue, it does not allow students to edit the code that is generated by the LLM, and it evaluates only a single solution at a time.
6.1.1 No dialogue. ChatGPT interfaces include support for "dialogue". This interaction style is natural and easy on the beginner. The reason we did not support this interaction and forced the student to "start from scratch" each time is that we were deliberately focusing the student on creating a complete, top-down, problem description. Although it is more cognitively demanding to require the student to provide all relevant information in a single prompt, we were interested in teaching exactly that process. | 2307.16364#40 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 40 | Instruction Tuning Instruction tuning enhances LLMs in understanding human instructions and generating proper responses (Wei et al., 2021; Bach et al., 2022; Mishra et al., 2022). Since manually annotating instruction tuning data is time-consuming, self-instruct (Wang et al., 2022) proposes to generate high-quality data from SOTA LLMs, which facilitates a recent trend of data curation for multi-turn dialogue (Taori et al., 2023; Chiang et al., 2023; Xu et al., 2023a; Penedo et al., 2023; Ding et al., 2023). However, compared with the dialogue, tool learning is inherently more challenging given the vast diversity of APIs and the complexity of multi-tool instructions. As a result, even GPT-4 often fails to find a valid solution path. However, existing tool-learning dataset (Li et al., 2023a; Patil et al., 2023; Tang et al., 2023; Xu et al., 2023b) and their construction methods cannot effectively address real human needs as mentioned in § 1. Instead, our ToolBench is designed for practical scenarios and improves the previous pipeline for tool-learning data construction. | 2307.16789#40 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16364 | 41 | 6.1.2 No access to the code. Although students who are more advanced may find it easier to simply write the code than construct a prompt, our interest is in providing students with experience in generating prompts. For this reason, we did not allow students to edit the code that was generated. We did show the code and students were able to study the generated code along with the unit test results to modify their prompts for another submission. Our system is not intended to be a realistic IDE for code development, but future tools that support Prompt Problems could allow code editing to provide refactoring practice and a more authentic experience. | 2307.16364#41 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 41 | Prompting LLMs for Decision Making Prompting facilitates LLMs to decompose high-level tasks into sub-tasks and generate grounded plans (Ahn et al., 2022; Huang et al., 2022a;b; Ye et al., 2023). ReACT (Yao et al., 2022) integrates reasoning with acting by allowing LLMs to give a proper reason for an action and incorporating environmental feedback for reasoning. However, these studies do not incorporate a mechanism for decision retraction, which becomes problematic as an initial error can lead to a cascade of subsequent errors. Recently, Reflexion (Shinn et al., 2023) mitigates this issue by asking LLMs to reflect on previous failures. Our DFSDT extends Reflexion to a more general method by allowing LLMs to assess different reasoning paths and select the most promising one. It should be noted that DFSDT shares a similar idea with a concurrent work: tree-of-thought (ToT) reasoning (Yao et al., 2023). However, our DFSDT targets general decision-making problems where the decision space is infinite, compared to ToT's relatively simple tasks that can be addressed by brute-force search, such as Game of 24 and Crosswords. The distinct target between DFSDT and ToT determines the significant difference in the implementation details.
# 5 CONCLUSION | 2307.16789#41 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
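The chunk above positions DFSDT as a depth-first search over reasoning paths that can back off from failing branches. The sketch below is a schematic, hedged illustration of that control flow only; `propose_actions` and `is_finished` are placeholder callables standing in for model-driven steps, and this is not the authors' implementation.

```python
# Schematic depth-first search over candidate actions with backtracking.
# `propose_actions` and `is_finished` are placeholders for LLM-driven steps.
from typing import Callable, List, Optional

def dfs_decision_tree(state: str,
                      propose_actions: Callable[[str], List[str]],
                      is_finished: Callable[[str], bool],
                      depth: int = 0,
                      max_depth: int = 6) -> Optional[List[str]]:
    if is_finished(state):
        return []                                   # a valid solution path ends here
    if depth >= max_depth:
        return None                                 # give up on this branch
    for action in propose_actions(state):           # assumed ordered best-first
        result = dfs_decision_tree(f"{state} -> {action}", propose_actions,
                                   is_finished, depth + 1, max_depth)
        if result is not None:
            return [action] + result                # keep the first branch that works
    return None                                     # all expansions failed: backtrack
```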
2307.16877 | 41 | Table 3: Performance of retrieval-augmented instruction-following models on three diverse information-seeking QA tasks. Among the metrics reported, Recall is most correlated with human judgements. Based on recall, instruction-following models outperform fine-tuned FiD on all three tasks.
relation and 69.36 Kendall correlation, closely followed by GPT3.5-Eval. We speculate that the language comprehension capabilities and inherent world knowledge embedded in LLMs like GPT-3.5 and GPT-4 help them overcome many of the challenges associated with evaluating responses of instruction-following models that we identified in our human evaluation study.
After GPT4-eval and GPT3.5-Eval, Recall achieves the highest correlation with human judgment. This simple token-overlap metric correlates better than other lexical matching-based metrics or more complex semantic similarity metrics like BERTScore and BEM, likely because it does not penalize verbosity in model responses. | 2307.16877#41 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 42 | 6.1.3 Single solution generated. LLMs generate different variations of output for the same input prompt. Our tool does not currently address the possible variation in generated content. Prompts can be brittle, as sometimes the same prompt may work and other times it may not. Non-deterministic behaviour of the models may be frustrating for students, as simply resubmitting a previously unsuccessful prompt may actually work. Nevertheless, this may be a useful learning experience for students, as it helps to highlight this inherent LLM behaviour. A different variation of the current tool design could generate multiple code implementations every time a single prompt is submitted, allowing students to compare them and see which ones satisfy the problem. Viewing multiple correct, but different, implementations of the same algorithm is useful for helping students understand that there are frequently many correct solutions to a problem [20]. Future work could explore how to present this aspect of LLMs to students who are learning to write prompts.
# 6.2 Problem Design | 2307.16364#42 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
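The chunk above floats a design where each submitted prompt yields several candidate implementations that students can compare. A hedged sketch of filtering such candidates against the problem's tests follows; `sample_candidates` returns canned strings purely for illustration and stands in for repeated LLM calls, and the function name `double` is an assumption for the example.

```python
# Sketch: sample several programs for one prompt and keep those passing the tests.
def sample_candidates(prompt: str) -> list[str]:
    # Placeholder for repeated LLM calls; canned candidates for illustration only.
    return [
        "def double(x):\n    return 2 * x",   # correct
        "def double(x):\n    return x + 2",   # deliberately wrong
    ]

def surviving_candidates(prompt: str, tests) -> list[str]:
    keep = []
    for code in sample_candidates(prompt):
        scratch: dict = {}
        try:
            exec(code, scratch)
            func = scratch["double"]           # assumed function name for the example
            if all(func(*args) == expected for args, expected in tests):
                keep.append(code)
        except Exception:
            continue                           # skip candidates that crash
    return keep

print(surviving_candidates("Write a function double(x)...", [((3,), 6), ((0,), 0)]))
```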
2307.16789 | 42 | # 5 CONCLUSION
In this work, we introduce how to elicit the tool-use capabilities within LLMs. We first present an instruction tuning dataset, ToolBench, which covers 16k+ real-world APIs and various practical use-case scenarios including both single-tool and multi-tool tasks. The construction of ToolBench purely uses ChatGPT and requires minimal human supervision. Moreover, we propose DFSDT to reinforce the planning and reasoning ability of LLMs, enabling them to navigate through reasoning paths strategically. For efficient evaluation of tool learning, we devise an automatic evaluator ToolEval. By fine-tuning LLaMA on ToolBench, the obtained model ToolLLaMA matches the performance of ChatGPT and exhibits remarkable generalization ability to unseen APIs. Besides, we develop a neural API retriever to recommend relevant APIs for each instruction. The retriever can be integrated with ToolLLaMA as a more automated tool-use pipeline. In the experiments, we demonstrate the generalization ability of our pipeline to out-of-distribution domains. In general, this work paves the way for future research in the intersection of instruction tuning and tool use for LLMs.
# REFERENCES | 2307.16789#42 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 42 | eval and GPT3.5-eval, have the highest agreement with human judgements on the selected subset of responses, they also have certain limitations. Accessing these proprietary models incurs substantial API costs, which renders them impractical for automatic evaluation on large-scale datasets. Moreover, the reliability of LLMs as evaluators is still unclear, as recent studies have shown that they may exhibit systematic bias (Wang et al., 2023) and can be sensitive to input instructions (Bowman, 2023). Given these considerations, we rely on Recall to compare model performance. | 2307.16877#42 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16364 | 43 | # 6.2 Problem Design
Our restriction on "visual" problem representation is motivated by a desire to prevent students from simply copying-and-pasting into the LLM prompt. The most important feature of the problem representation is that it does not provide the text that can be directly used as a prompt. For problems where the desired code output is a function, listing a set of test cases (input and output pairs) is a convenient representation. In our current implementation of Promptly, this can be achieved with plain text, or with an image to discourage copy-and-paste behaviour, as illustrated in Figure 9. There are several considerations for how to design a Prompt Problem which we discuss here.
6.2.1 Avoiding textual clues. One of the biggest limitations in terms of preparing Prompt Problems is that as an instructor, you have to think about tasks that can be presented visually to learners. Even simple tasks such as "Order the elements of the list in ascending alphabetical order", which can be explained in a few words, are quite challenging to convey visually without the use of text. Without | 2307.16364#43 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
2307.16789 | 43 | # REFERENCES
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. ArXiv preprint, abs/2204.01691, 2022.
Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Févry, et al. Promptsource: An integrated development environment and repository for natural language prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 93–104, 2022.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. | 2307.16789#43 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 43 | Surprisingly, BERTScore fares worse than token-overlap F1, even when only considering the recall component of the metric. We hypothesize that the underlying issue is the poor quality of BERT token embeddings in short strings (Bommasani et al., 2020), a common characteristic of reference answers in QA datasets. For example, for the reference answer "yes, that is correct", the model response "yes" receives a BERTScore of 0.806 and "no" receives a slightly higher score of 0.815. Although BEM performs better than F1, it still falls short of token-overlap recall. Given that BEM's training data includes model responses of QA systems trained on SQuAD (Rajpurkar et al., 2016), it probably doesn't generalize well to more verbose responses of instruction-following models.
# 4.4 Automatic Correctness Evaluation | 2307.16877#43 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
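The chunk above attributes BERTScore's weakness to poor token embeddings for very short reference answers. Below is a hedged sketch of checking that behaviour with the bert-score package; exact numbers depend on the underlying model, so the 0.806 versus 0.815 values quoted in the chunk are only mentioned, not asserted.

```python
# Sketch: BERTScore on very short candidate/reference pairs.
# Scores vary with the underlying model; the chunk above reports that "no" can
# edge out "yes" against the reference "yes, that is correct".
from bert_score import score

references = ["yes, that is correct", "yes, that is correct"]
candidates = ["yes", "no"]

precision, recall, f1 = score(candidates, references, lang="en", verbose=False)
for cand, value in zip(candidates, f1.tolist()):
    print(f"{cand!r}: BERTScore F1 = {value:.3f}")
```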
2307.16364 | 44 | [Figure 9 screenshot] counter([0, 2, 3, 4, 5, 6, 0]) => 2 ; counter([10, 20, 30]) => 0 ; counter([0, 0, 0, 0, 999]) => 4. Prompt: "Write me a Python function called counter that returns the number of zero values in an input list." [CLICK HERE TO ASK CHATGPT!] ChatGPT response:
Figure 9: A problem in Promptly represented as a set of input-output pairs, where the solution requires generating a function (called "counter") that returns a count of the number of times zero occurs in a list. One possible prompt, and the resulting code that correctly solves the problem, is shown.
a textual explanation, students are expected to use inductive reasoning to determine what problem is being solved from visual examples that typically include specific cases. As we found, this can be quite challenging in some cases. For example, many students found problem 3 challenging in our classroom evaluation, with one commenting on their reflection: "The last question seemed unnecessarily unclear, I can't imagine there will be many instances where the task I'm meant to do will be as vague as what was given in question 3". In this evaluation, we used short animations to illustrate data being entered at the command prompt. Although such animations can convey additional information, they are more difficult to create. | 2307.16364#44 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | With their remarkable ability to generate code, large language models (LLMs)
are a transformative technology for computing education practice. They have
created an urgent need for educators to rethink pedagogical approaches and
teaching strategies for newly emerging skill sets. Traditional approaches to
learning programming have focused on frequent and repeated practice at writing
code. The ease with which code can now be generated has resulted in a shift in
focus towards reading, understanding and evaluating LLM-generated code. In
parallel with this shift, a new essential skill is emerging -- the ability to
construct good prompts for code-generating models. This paper introduces a
novel pedagogical concept known as a `Prompt Problem', designed to help
students learn how to craft effective prompts for LLMs. A Prompt Problem
challenges a student to create a natural language prompt that leads an LLM to
produce the correct code for a specific problem. To support the delivery of
Prompt Problems at scale, in this paper we also present a novel tool called
Promptly which hosts a repository of Prompt Problems and automates the
evaluation of prompt-generated code. We report empirical findings from a field
study in which Promptly was deployed in a first-year Python programming course
(n=54). We explore student interactions with the tool and their perceptions of
the Prompt Problem concept. We found that Promptly was largely well-received by
students for its ability to engage their computational thinking skills and
expose them to new programming constructs. We also discuss avenues for future
work, including variations on the design of Prompt Problems and the need to
study their integration into the curriculum and teaching practice. | http://arxiv.org/pdf/2307.16364 | Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves | cs.HC, cs.AI | null | null | cs.HC | 20230731 | 20230731 | [
{
"id": "2306.04556"
},
{
"id": "2302.11382"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2304.03938"
}
] |
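Figure 9 in the row above shows a Prompt Problem whose target function counts zero values in a list. One straightforward implementation that satisfies the displayed input/output pairs is sketched below; it is only one of many solutions an LLM could return for a suitable prompt.

```python
def counter(values):
    """Return how many elements of `values` equal zero."""
    return sum(1 for v in values if v == 0)

# The input/output pairs shown in Figure 9.
assert counter([0, 2, 3, 4, 5, 6, 0]) == 2
assert counter([10, 20, 30]) == 0
assert counter([0, 0, 0, 0, 999]) == 4
print("All Figure 9 examples satisfied")
```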