Dataset columns:
- id: string (12-15 chars)
- title: string (8-162 chars)
- content: string (1-17.6k chars)
- prechunk_id: string (0-15 chars)
- postchunk_id: string (0-15 chars)
- arxiv_id: string (10 chars)
- references: list (length 1)
2307.16789#70
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
For instance, don't merely say 'an address', provide the exact road and district names. Don't just mention 'a product', specify wearables, milk, a blue blanket, a pan, etc. Don't refer to 'my company', invent a company name instead. The first seven of the ten queries should be very specific. Each single query should combine API calls of different tools in various ways and include the necessary parameters. Note that you shouldn't ask 'which API to use', rather, simply state your needs that can be addressed by these APIs. You should also avoid asking for the input parameters required by the API call, but instead directly provide the parameters in your query. The final three queries should be complex and lengthy, describing a complicated scenario where all the provided API calls can be utilized to provide assistance within a single query. You should first think about possible related API combinations, then give your query. Related APIs are APIs that can be used for a given query; those related APIs have to strictly come from the provided API names. For each query, there should be multiple related APIs; for different queries, overlap of related APIs should be as little as possible. Deliver your response in this format: [Query1: ......, "related apis":[[tool name, api name], [tool name, api name], [tool name, api name]...], Query2: ......, "related apis":[[tool name, api name], [tool name, api name], [tool name, api name]...], Query3: ......, "related apis":[[tool name, api name], [tool name, api name], [tool name, api name]...], ...] In-context Seed Examples. In the following, we show one single-tool instruction seed example and one multi-tool instruction seed example. For example, with tool ASCII Art, the given api names are "figlet", "list figlet styles", "cowsay", "list cowsay styles", "matheq". Some sample queries and related apis would be: "Query": "
2307.16789#69
2307.16789#71
2307.16789
[ "2302.13971" ]
2307.16789#71
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Need to create an ASCII art representation of a mathematical equation. The equation is 'y = mx + c', where m and c are constants. Help me generate the ASCII art for this equation. Also please generate an ASCII art representation of the text 'Newton's Second Law of Motion'.", "related apis": ["figlet", "list figlet styles", "matheq"] "Query": "Working on a research paper on cows and need to include ASCII art representations of various cows. Can you first retrieve available ASCII art styles for cows? Then, can you generate ASCII art for cows like the Jersey, Holstein, and Guernsey? Finally, I want the cow to say 'Moo!' in the ASCII art.", "related apis": ["figlet", "list figlet styles", "cowsay", "list cowsay styles"] "Query": "I'
2307.16789#70
2307.16789#72
2307.16789
[ "2302.13971" ]
2307.16789#72
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
m writing a blog post on ASCII art and need to include some examples. Can you generate ASCII art for the following strings: 'ASCII', 'art', and 'gallery'? You can first retrieve available figlet styles and then generate ASCII art for the strings using the styles.", "related apis": ["figlet", "list figlet styles"] "Query": "Greetings! I'm putting together a quirky slideshow about our furry friends and need your help to sprinkle some ASCII art goodness. Could you kindly fetch me the catalog of ASCII art styles available for animals?
2307.16789#71
2307.16789#73
2307.16789
[ "2302.13971" ]
2307.16789#73
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Also, I'm particularly keen on featuring ASCII art for creatures like pandas, cows, elephants, and penguins. And if they could say something cute like 'Hello!' or 'Hugs!' in the ASCII art, that would be purr-fect!", "related apis": ["figlet", "list figlet styles", "cowsay", "list cowsay styles"] For example, with tool ["Entrepreneur Mindset Collection", "Random Words", "thedigitalnewsfeederapi", "Chemical Elements"], the given api names are (tool "
2307.16789#72
2307.16789#74
2307.16789
[ "2302.13971" ]
2307.16789#74
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Entrepreneur Mindset Collection")"Random Quote in JSON format", (tool "Random Words")"Get multiple random words", (tool "Random Words")"Get a random word", (tool "thedigitalnewsfeederapi")"getting specific cricket articles", (tool "thedigitalnewsfeederapi")"Getting Cricket Articles", (tool "thedigitalnewsfeederapi")"getting specific news articles", (tool "thedigitalnewsfeederapi")"Getting News Articles", (tool "thedigitalnewsfeederapi")"getting all news articles", (tool "Chemical Elements")"Get All Chemical Elements". Some sample queries and related apis would be: "Query": "For my best friend'
2307.16789#73
2307.16789#75
2307.16789
[ "2302.13971" ]
2307.16789#75
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
s surprise birthday party, I require inspiration for party games and decorations. Kindly suggest some random words that can serve as themes for the party. Furthermore, I'm interested in gathering news articles about the latest party trends to ensure a modern celebration. Also, I would appreciate details about the local hotels in my area for accommodation options. Your assistance is greatly appreciated.", "related apis": [["Random Words", "Get multiple random words"], ["thedigitalnewsfeederapi", "Getting News Articles"], ["thedigitalnewsfeederapi", "Getting all news articles"]] "Query": "
2307.16789#74
2307.16789#76
2307.16789
[ "2302.13971" ]
2307.16789#76
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
In the midst of organizing a team-building event for my esteemed company, I eagerly seek your valued input for invigorating activities. Might I kindly request a collection of random quotes that encapsulate the essence of teamwork and motivation? Additionally, I am keen on exploring news articles that showcase triumphant team-building events, as they serve as a wellspring of inspiration.", "related apis": [["Entrepreneur Mindset Collection", "Random Quote in JSON format"], ["thedigitalnewsfeederapi", "Getting News Articles"]] "Query": "
2307.16789#75
2307.16789#77
2307.16789
[ "2302.13971" ]
2307.16789#77
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
I need specific cricket articles that discuss the health benefits of sports for my research paper on exercise. I also want to know which chemical elements are associated with exercising, like increased iron (Fe) and its impact on bone marrow.", "related apis": [["thedigitalnewsfeederapi", "getting specific cricket articles"], ["Chemical Elements", "Get All Chemical Elements"]] "Query": "I'm starting a new business venture and I need to make a speech announcing the new dawn. Provide me some quotes and words for me to start with. I would like to gather news articles about successful entrepreneurs for inspiration."
2307.16789#76
2307.16789#78
2307.16789
[ "2302.13971" ]
2307.16789#78
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
, "related apis": [["Entrepreneur Mindset Collection", "Random Quote in JSON format"], ["Random Words", "Get multiple random words"], ["thedigitalnewsfeederapi", "getting specific news articles"]] These are only examples to show you how to write the query. Do not use APIs listed in the above examples, but rather, use the ones listed below in the INPUT. Sampled API List (An example)
{
  "tool_description": "EntreAPI Faker is used to dynamically create mock, demo, test and sample data for your application",
  "name": "EntreAPI Faker",
  "api_list": [
    {
      "name": "Longitute",
      "url": "https://entreapi-faker.p.rapidapi.com/address/longitude",
      "description": "Generate a random longitude.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [
        { "name": "max", "type": "NUMBER", "description": "Maximum value for latitude.", "default": "" },
        { "name": "min", "type": "NUMBER", "description": "Minimum value for latitude.", "default": "" },
        { "name": "precision", "type": "NUMBER", "description": "Precision for latitude.", "default": "" }
      ],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
2307.16789#77
2307.16789#79
2307.16789
[ "2302.13971" ]
2307.16789#79
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
    },
    {
      "name": "Boolean",
      "url": "https://entreapi-faker.p.rapidapi.com/datatype/boolean",
      "description": "Randomly generate a boolean value.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
    },
    {
      "name": "Past",
      "url": "https://entreapi-faker.p.rapidapi.com/date/past",
      "description": "Randomly generate a date value in the past.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [
        { "name": "refDate", "type": "STRING", "description": "Starting reference date", "default": "" },
        { "name": "years", "type": "NUMBER", "description": "Number of years for the range of dates.", "default": "" }
      ],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
    },
    {
      "name": "Image Url",
      "url": "https://entreapi-faker.p.rapidapi.com/image/imageUrl",
      "description": "Randomly generate an image URL.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [
        { "name": "width", "type": "NUMBER", "description": "Width of the image. Default is 640.", "default": "" },
        { "name": "height", "type": "NUMBER", "description": "Height of the image.
2307.16789#78
2307.16789#80
2307.16789
[ "2302.13971" ]
2307.16789#80
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Default is 480.", "default": "" },
        { "name": "useRandomize", "type": "BOOLEAN", "description": "Add a random number parameter to the returned URL.", "default": "" },
        { "name": "category", "type": "STRING", "description": "The category for the image. Can be one: abstract, animal, avatar, business, cats, city, fashion, food, nature, nightlife, people, sports, technics, transport", "default": "" }
      ],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
    },
    {
      "name": "Sentence",
      "url": "https://entreapi-faker.p.rapidapi.com/lorem/sentence",
      "description": "Randomly generate a sentence of Lorem Ipsum.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [
        { "name": "wordCount", "type": "NUMBER", "description": "Number of words in the sentence.", "default": "" }
      ],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
    },
    {
      "name": "Gender",
      "url": "https://entreapi-faker.p.rapidapi.com/name/gender",
      "description": "Randomly select a gender.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [
        { "name": "useBinary", "type": "BOOLEAN", "description": "Use binary genders only.", "default": "" }
      ],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
2307.16789#79
2307.16789#81
2307.16789
[ "2302.13971" ]
2307.16789#81
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
    },
    {
      "name": "Prefix",
      "url": "https://entreapi-faker.p.rapidapi.com/name/prefix",
      "description": "Randomly generate a prefix (e.g., Mr., Mrs., etc.)",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [
        { "name": "gender", "type": "STRING", "description": "Optional gender.", "default": "" }
      ],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
    },
    {
      "name": "Array Element",
      "url": "https://entreapi-faker.p.rapidapi.com/random/arrayElement",
      "description": "Randomly select an array element.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [
        { "name": "array", "type": "ARRAY", "description": "The list of elements to choose from. Default is [\"a\", \"b\", \"c\"].", "default": "" }
      ],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
    },
    {
      "name": "Number Value",
      "url": "https://entreapi-faker.p.rapidapi.com/random/number",
      "description": "Randomly generate a number value.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [
        { "name": "min", "type": "NUMBER", "description": "Minimum value.", "default": "" },
        { "name": "max", "type": "NUMBER", "description": "Maximum value.", "default": "" },
        { "name": "precision", "type": "NUMBER", "description": "Precision of the number.", "default": "" }
      ],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
    },
    {
      "name": "URL",
      "url": "https://entreapi-faker.p.rapidapi.com/internet/url",
      "description": "Randomly generate a URL.",
      "method": "GET",
      "required_parameters": [],
      "optional_parameters": [],
      "tool_name": "EntreAPI Faker",
      "category_name": "Data"
    }
  ]
}
Other Requirements: Please produce ten queries in line with the given requirements and inputs. These ten queries should display a diverse range of sentence structures: some queries should be in the form of imperative sentences, others declarative, and yet others interrogative.
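The Sampled API List above is the raw tool specification that the query-generation prompt consumes, and the generated "related apis" must be drawn from it. As a concrete illustration, the sketch below calls one of the listed endpoints over HTTP; the URL and parameter names come from the listing, while the RapidAPI key (and the assumption that the endpoint returns JSON) are placeholders you would supply and verify yourself, not something stated in the paper.

```python
import requests

# Minimal sketch of invoking the "Longitute" endpoint from the sampled API list.
# RAPIDAPI_KEY is a placeholder; the host header value follows the usual
# RapidAPI convention for the URL shown above.
RAPIDAPI_KEY = "<your-rapidapi-key>"

def get_random_longitude(min_val=None, max_val=None, precision=None):
    """Call the EntreAPI Faker longitude endpoint with its optional parameters."""
    params = {}
    if min_val is not None:
        params["min"] = min_val
    if max_val is not None:
        params["max"] = max_val
    if precision is not None:
        params["precision"] = precision
    response = requests.get(
        "https://entreapi-faker.p.rapidapi.com/address/longitude",
        headers={
            "X-RapidAPI-Key": RAPIDAPI_KEY,
            "X-RapidAPI-Host": "entreapi-faker.p.rapidapi.com",
        },
        params=params,
        timeout=10,
    )
    response.raise_for_status()
    # Assumes the endpoint returns a JSON body.
    return response.json()

if __name__ == "__main__":
    print(get_random_longitude(min_val=-10, max_val=10, precision=4))
```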
2307.16789#80
2307.16789#82
2307.16789
[ "2302.13971" ]
2307.16789#82
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
Equally, they should encompass a variety of tones, with some being polite, others straightforward. Ensure they vary in length and contain a wide range of subjects: myself, my friends, family, and company. Aim to include a number of engaging queries as long as they relate to API calls. Keep in mind that for each query, invoking just one API won't suffice; each query should call upon two to five APIs. However, try to avoid explicitly specifying which API to employ in the query. Each query should consist of a minimum of thirty words.
A.8 PROMPTS FOR SOLUTION PATH ANNOTATION
We use the following prompt when searching for the solution path. When expanding the child nodes, we use the diversity user prompt, showing the information of previous child nodes.
------------------------------------------------------------------
system_prompt: You are Tool-GPT, capable of utilizing numerous tools and functions to complete the given task.
1. First, I will provide you with the task description, and your task will commence.
2. At each step, you need to analyze the current status and determine the next course of action by executing a function call.
3. Following the call, you will receive the result, transitioning you to a new state. Subsequently, you will analyze your current status, make decisions about the next steps, and repeat this process.
4. After several iterations of thought and function calls, you will ultimately complete the task and provide your final answer.
Remember:
1. The state changes are irreversible, and you cannot return to a previous state.
2307.16789#81
2307.16789#83
2307.16789
[ "2302.13971" ]
2307.16789#83
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
2. Keep your thoughts concise, limiting them to a maximum of five sentences.
3. You can make multiple attempts. If you plan to try different conditions continuously, perform one condition per try.
4. If you believe you have gathered enough information, call the function "Finish: give_answer" to provide your answer for the task.
5. If you feel unable to handle the task from this step, call the function "Finish: give_up_and_restart".
Let's Begin! Task description: {task_description}
---------------------------------------------------------
diversity_user_prompt: This is not the first time you try this task, all previous trails failed. Before you generate your thought for this state, I will first show you your previous actions for this state, and then you must generate actions that is different from all of them. Here are some previous actions candidates: {previous_candidate} Remember you are now in the intermediate state of a trail, you will first analyze the now state and previous action candidates, then make actions that is different from all the previous.
---------------------------------------------------------
Finish_function_description:
{
  "name": "Finish",
  "description": "If you believe that you have obtained a result that can answer the task, please call this function to provide the final answer. Alternatively, if you recognize that you are unable to proceed with the task in the current state, call this function to restart. Remember: you must ALWAYS call this function at the end of your attempt, and the only part that will be shown to the user is the final answer, so it should contain sufficient information.",
  "parameters": {
    "type": "object",
    "properties": {
      "return_type": {
        "type": "string",
        "enum": ["give_answer", "give_up_and_restart"]
      },
      "final_answer": {
        "type": "string",
        "description": "The final answer you want to give the user. You should have this field if \"return_type\" == \"give_answer\"."
      }
    },
    "required": ["return_type"]
  }
}
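To make the annotation flow concrete, here is a minimal, self-contained sketch of the thought-and-function-call loop that the system prompt describes, ending with a call that matches the Finish schema above. The call_model and execute_api helpers are hypothetical stand-ins (stubbed so the sketch runs), not the paper's released implementation.

```python
import json

SYSTEM_PROMPT = "You are Tool-GPT, capable of utilizing numerous tools and functions to complete the given task."

def call_model(history, functions):
    # Placeholder for the LLM backend; it immediately gives up so the sketch
    # runs without any external service.
    return {"name": "Finish",
            "arguments": json.dumps({"return_type": "give_up_and_restart"})}

def execute_api(action):
    # Placeholder for the real API executor used during annotation.
    return {"error": "", "response": "stub observation"}

def annotate_solution_path(task_description, functions, max_steps=10):
    """Sketch of the loop: analyze the state, call a function, observe the
    result, repeat, and terminate with a Finish call."""
    history = [{"role": "system", "content": SYSTEM_PROMPT},
               {"role": "user", "content": f"Task description: {task_description}"}]
    for _ in range(max_steps):
        action = call_model(history, functions)
        if action["name"] == "Finish":
            # A terminal call must match the Finish schema above, e.g.
            # {"return_type": "give_answer", "final_answer": "..."}.
            return json.loads(action["arguments"])
        observation = execute_api(action)
        history.append({"role": "function", "name": action["name"],
                        "content": json.dumps(observation)})
    return {"return_type": "give_up_and_restart"}

print(annotate_solution_path("Find the capital of France.", functions=[]))
```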
2307.16789#82
2307.16789#84
2307.16789
[ "2302.13971" ]
2307.16789#84
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
24
2307.16789#83
2307.16789
[ "2302.13971" ]
2307.16877#0
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
arXiv:2307.16877v1 [cs.CL] 31 Jul 2023
# Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
# Vaibhav Adlakha1,2 Parishad BehnamGhader1,2,* Xing Han Lu1,2,* Nicholas Meade1,2,* Siva Reddy1,2,3 1Mila - Quebec AI Institute 2McGill University 3Facebook CIFAR AI Chair {firstname.lastname}@mila.quebec
# Abstract
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance. In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at https://github.com/McGill-NLP/instruct-qa
# Introduction
One of the goals of natural language processing (NLP) is to enable systems to perform tasks based on natural language instructions as this would empower users to interact in an intuitive and flexi-
2307.16877#1
2307.16877
[ "2201.08239" ]
2307.16877#1
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
[Figure 1, example] Question: Where are One Direction from? Knowledge: One Direction, often shortened to 1D, are an English-Irish pop boy band formed in London, England in 2010. The group are composed of Niall Horan, Liam Payne, Harry Styles and Louis Tomlinson; former member Zayn Malik departed from the group in 2015. The group signed with Simon Cowell's record label Syco Records after forming and finishing third in the seventh series of the British televised singing competition "The X Factor" in 2010.
2307.16877#0
2307.16877#2
2307.16877
[ "2201.08239" ]
2307.16877#2
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Propelled to... [knowledge truncated] Response: One Direction are from London, England and Mullingar, Ireland. Reference Answer: London, England. Correctness: Human → Yes, Recall → 1.00, F1 → 0.36. Faithfulness: Human → Partially, K-Precision → 0.77, K-F1 → 0.09. Figure 1: Sample response generated by GPT-3.5. The model response is correct w.r.t information need but only partially faithful w.r.t knowledge as only one of the two locations mentioned in the response can be found in the knowledge (truncated for readability). Recall (§4.2) and K-Precision (§5.1) are automatic metrics that approximate human judgment.
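The F1 value reported for this example can be reproduced with a few lines of token-overlap arithmetic. This is a minimal sketch assuming lowercase, punctuation-stripping whitespace tokenization; the official evaluation scripts may normalize slightly differently (e.g., also dropping articles).

```python
from collections import Counter
import string

def f1(reference, response):
    """Token-level F1 between a reference answer and a model response."""
    def toks(text):
        return "".join(c for c in text.lower() if c not in string.punctuation).split()
    ref, resp = Counter(toks(reference)), Counter(toks(response))
    overlap = sum((ref & resp).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(resp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "London, England"
# First part of the response only: F1 = 0.5
print(round(f1(reference, "One Direction are from London, England"), 2))
# Entire response from Figure 1: F1 = 0.36
print(round(f1(reference, "One Direction are from London, England and Mullingar, Ireland."), 2))
```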
2307.16877#1
2307.16877#3
2307.16877
[ "2201.08239" ]
2307.16877#3
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
ble manner. Instruction-following models are a type of language models that aim to achieve this goal. Training these models usually involves ex- posing large language models (LLMs; Brown et al. 2020; Zhang et al. 2022; Thoppilan et al. 2022; Rae et al. 2022; Touvron et al. 2023a) to thousands of tasks formulated as natural language instructions through supervised examples (Sanh et al., 2022; Wei et al., 2022; Chung et al., 2022; Ouyang et al., 2022a; Iyer et al., 2023; Touvron et al., 2023b) or other forms of supervision (Ouyang et al., 2022b; Wang et al., 2022a; Taori et al., 2023; Peng et al., 2023). These are known to generalize to many tasks with little exposure to examples of those tasks (Mishra et al., 2022). In this paper, we evaluate instruction-following models for their ability to perform question-answering (QA) on a given set of text passages.
2307.16877#2
2307.16877#4
2307.16877
[ "2201.08239" ]
2307.16877#4
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
*Core contributor. Instruction-following models can perform QA when provided with a prompt describing the task, the question, and relevant text passages to reason upon, retrieved by a retriever (Chung et al., 2022). These model-generated answers are known to be natural, informative, and verbose, a useful trait that helps to build users' trust and engagement, but these models also generate hallucinated information that can mislead users (Dziri et al., 2022b; Chiesurin et al., 2023). Moreover, many QA datasets have short reference answers that render traditional evaluation metrics like exact match (EM) and F1 word overlap unreliable when evaluating these verbose answers (Kamalloo et al., 2023). Consider, for instance, the scenario in Figure 1, where the user question is "
2307.16877#3
2307.16877#5
2307.16877
[ "2201.08239" ]
2307.16877#5
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Where are One Direction from?". A comparison between the reference response "London, England" and the first part of the model's response "One Direction are from London, England" yields an EM score of 0 and an F1 score of only 0.5, despite both answers being effectively equivalent (the entire response gets a 0.36 F1 score). Moreover, the second part of the response asserts that One Direction is from Mullingar, Ireland, a fact which, despite being correct, is not entailed by the provided knowledge. As EM and F1 only compare against reference answers, they are unsuitable to estimate the alignment of the model's response with the provided knowledge. We argue that the performance of instruction-following models for retrieval-augmented QA should be evaluated along two dimensions: 1) correctness w.r.t information need, which measures a model's efficacy in satisfying a user's information needs, and 2) faithfulness w.r.t provided knowledge, which measures a model's capability to ground responses in provided knowledge. A model demonstrating robust performance across both these dimensions can potentially be considered useful and safe for the user in information-seeking scenarios. Along these dimensions, we evaluate several recent instruction-following models such as Llama-2 (Touvron et al., 2023b), GPT-3.5 (sibling model of Ouyang et al. 2022a), Flan-T5 (Chung et al., 2022), and Alpaca (Taori et al., 2023) on three popular QA datasets that correspond to three diverse QA tasks: Natural Questions (NQ; Kwiatkowski et al. 2019) for open-domain QA, HotpotQA (Yang et al., 2018) for multi-hop QA, and TopiOCQA (Adlakha et al., 2022) for conversational QA. We conduct a human analysis of 900 model responses and correlate them with several automatic metrics for correctness and faithfulness. Our findings suggest that, for correctness, recall (the proportion of tokens in the reference answer also present in the model response) exhibits a higher correlation with human judgment than lexical overlap metrics like EM or F1. For faithfulness, K-Precision (
2307.16877#4
2307.16877#6
2307.16877
[ "2201.08239" ]
2307.16877#6
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
the proportion of model response tokens that appear in the knowledge snippet) correlates better with human judgments than any other token-overlap metric. Among model-based metrics, i.e., using a model to determine the correctness/faithfulness of an answer w.r.t. the reference answer/knowledge, GPT-4 correlates the most, but it is expensive and prone to systematic biases (Wang et al., 2023). However, we find that lexical overlap metrics are close to model-based metrics, allowing us to evaluate several instruction-following models at a large scale. A faithful model should not only answer a question when relevant knowledge is provided, but it should also abstain from answering when irrelevant knowledge is provided. Hence, we also measure the model's ability to abstain from answering as an evaluation for faithfulness. To summarize, our contributions are as follows:
• We evaluate four instruction-following models (Llama-2, GPT-3.5, Flan-T5, and Alpaca) in retrieval-augmented settings across three diverse QA tasks. We collect human annotations for both correctness and faithfulness.
• We analyze several metrics in relation to human judgments, finding that GPT-4-based evaluation is the most correlated for both correctness and faithfulness. Additionally, we analyze failures of traditional QA metrics and highlight that models are unfairly penalized for verbosity.
2307.16877#5
2307.16877#7
2307.16877
[ "2201.08239" ]
2307.16877#7
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
• We propose simple token-overlap based metrics for both correctness and faithfulness (recall for correctness and K-Precision for faithfulness) and demonstrate their strong correlation with human judgments.
• Our results indicate that instruction-following models can surpass the performance of fine-tuned models in terms of correctness. However, these models struggle to be faithful to provided knowledge, often demonstrating a tradeoff between the ability to remain faithful to relevant and irrelevant knowledge.
# 2 Related Work
Instruction-Following Models Fine-tuning pre-trained models on a collection of NLP tasks formatted as natural language instructions results in instruction-following models. These models can generalize to new unseen tasks based solely on an instruction and optionally a few demonstrations, often outperforming LLMs in zero-shot and few-shot settings while being only a fraction of their size (Mishra et al., 2022). Depending on the nature of the datasets used for training, these models can be broadly classified into three categories. The majority of instruction-following models in the research community are trained on publicly available NLP datasets verbalized by human annotators (Wei et al., 2022; Mishra et al., 2022; Wang et al., 2022b; Chung et al., 2022; Iyer et al., 2023). The number of tasks ranges from a few tens (e.g. 62 in Wei et al. 2022) to several hundred (e.g. 1800+ in Iyer et al. 2023). Ouyang et al. (2022a) conjecture that public NLP datasets are limited in scope and lack sufficient diversity in user inputs. To address this, they train InstructGPT on a mix of human-written prompts submitted to the OpenAI API and prompts created by expert labelers. The model is further fine-tuned with human feedback to align it more closely with human preferences (RLHF; Christiano et al. 2017). Llama-2 (Touvron et al., 2023b) is another recent model in this category, trained on a mix of public NLP datasets and high-quality expert annotations of dialogue-style instructions, followed by RLHF. Finally, self-instruct (Wang et al., 2022a) is an alternative paradigm to reduce reliance on human-generated task instructions.
2307.16877#6
2307.16877#8
2307.16877
[ "2201.08239" ]
2307.16877#8
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Starting from a small manually-annotated task pool, an LLM is prompted to generate instructions and demonstrations of new tasks. The resultant synthetic dataset is used to train a language model to follow instructions (Taori et al., 2023; Peng et al., 2023). Datasets for instruction-tuning often contain sev- eral QA tasks. However, these tasks are either reading comprehension (i.e. answering a question about a provided passage) or closed-book QA (i.e., without using a large information source). In this work, we explore a more practical setting, where an instruction-following model is paired with a re- triever, a paradigm known as retrieval-augmented generation (RAG; Lewis et al. 2020). Retrieval-Augmented Generation RAG entails using a retriever to select relevant passages from an information source, which are subsequently passed to a generator to produce a response. This two- step retrieve-generate process has been shown to reduce hallucinations (Shuster et al., 2021), while lending interpretability and configurability to the model (Lewis et al., 2020). RAG is a dominant paradigm for several information-seeking QA tasks such as open- domain QA (Chen et al. 2017; Lee et al. 2019; Sachan et al. 2021, inter alia), multi-hop QA (Asai et al. 2020; Qi et al. 2021; Izacard et al. 2022; inter alia), and conversational QA (Anantha et al. 2021; Adlakha et al. 2022; inter alia). Various works differ on how to train the generator to utilize in- formation from the retrieved passages, for e.g, by extracting snippets (Chen et al., 2017; Clark and Gardner, 2018; Wang et al., 2019; Karpukhin et al., 2020) or by jointly attending encoded passages and previously generated tokens (Fusion-in-Decoder; Izacard and Grave 2021). Recent works have also explored using off-the- shelf language models as generators in the RAG pipeline, alleviating the need to fine-tune or learn additional parameters.
2307.16877#7
2307.16877#9
2307.16877
[ "2201.08239" ]
2307.16877#9
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Lazaridou et al. (2022) demonstrated that few-shot prompting an LM con- ditioned on the web results outperforms a vanilla LM for several open-domain QA tasks. Shi et al. (2023) showcase that pairing LLMs like GPT-3 (Brown et al., 2020) with retrievers improves lan- guage modeling performance as well. Separate from these works, we evaluate retrieval-augmented instruction-following models based only on natural language instruction. In the absence of training instances or demonstrations, these models do not learn the distribution of reference answers of the target QA dataset, raising new challenges for eval- uation. Evaluation in QA Lexical matching between a set of reference answers and model response re- mains a dominant approach for evaluation across multiple NLP tasks. As QA tasks generally consist of short reference answers, previous works have pri- marily relied on Exact Match (EM) and F1 to evalu- ate and benchmark models (Rajpurkar et al., 2016; Reddy et al., 2019). For tasks that require generat- ing longer sequences, such as summarization and translation, subsequence-based lexical matching is generally employed (Papineni et al. 2002; Banerjee and Lavie 2005; Lin 2004, inter alia). A major shortcoming of lexical matching is that it depends on a set of reference answers which may be incomplete. To overcome this limitation, subsequent model-based metrics compute the se- mantic similarity between the reference answer and the model response using contextualized embed- dings (Zhang et al., 2020) or train a specialized clas- sifier (Bulian et al., 2022) to predict equivalence. More recently, several works resort to prompting LLMs like GPT-4 (OpenAI, 2023) to act as evalua- tors (Chiang et al., 2023; Peng et al., 2023; Chiang and Lee, 2023; Kamalloo et al., 2023; Liu et al., 2023c). In this work, we explore evaluating both correctness and faithfulness using GPT-4.
2307.16877#8
2307.16877#10
2307.16877
[ "2201.08239" ]
2307.16877#10
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Concurrent to our work, Kamalloo et al. (2023) evaluate the correctness of InstructGPT in zero-shot and few-shot settings along with several fine-tuned models for open-domain QA. They highlight the shortcomings of traditional QA metrics and propose BEM (Bulian et al., 2022) and LLM-based evaluation as viable alternatives. However, they do not consider InstructGPT in retrieval-augmented settings. In contrast to their work, we investigate both correctness and faithfulness of multiple instruction-following models across three diverse QA tasks and propose simple token-overlap based metrics that correlate highly with human judgments. Faithfulness and Groundedness Conversational models have been shown to produce factually incorrect or unsupported statements (Rashkin et al., 2021b; Dziri et al., 2022b), known as hallucinations. To alleviate those issues, various works attempt to reduce hallucinations via methods such as iterative refinement (Dziri et al., 2021), linguistic calibration (Mielke et al., 2022; Lin et al., 2022), or by editing instances of hallucinations (Dziri et al., 2022a), thus improving faithfulness of these models. Several metrics have also been developed to measure faithfulness. Honovich et al. (2021) proposed Q2, an automatic faithfulness evaluation metric that checks for factual consistency based on automatic question generation and question answering. FaithCritic (Dziri et al., 2022a) is another model-based metric that predicts the degree of hallucination in a model's response. For information-seeking, previous works have considered groundedness, the extent to which the generator relies on retrieved passages (Paranjape et al., 2022), quantified using Knowledge-F1 (K-F1; Shuster et al. 2021). In this work, we consider a model response to be faithful if it is grounded in the passage relevant to the user's information need. Concurrent to our work, Chiesurin et al. (2023) investigated hallucination of retrieval-augmented GPT-3 for the conversational QA (Adlakha et al., 2022) task.
2307.16877#9
2307.16877#11
2307.16877
[ "2201.08239" ]
2307.16877#11
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
They found that GPT-3 is likely to produce responses that appear trustworthy but are unfaithful.
# 3 Experimental Setup
# 3.1 Tasks
We evaluate our approach on the validation splits of three information-seeking QA tasks. The total number of questions and passages for each dataset are provided in Table 1. We describe the datasets used for each task below. Open-domain QA Natural Questions (NQ; Kwiatkowski et al. 2019) includes questions sourced from Google queries, with reference answers written by human annotators. We use the open version of NQ (Lee et al., 2019) that consists of short answers based on 100-token passages from English Wikipedia (indexed in Dec. 2018). Multi-hop QA We use HotpotQA (Yang et al., 2018) for this task, where each question requires reasoning across two Wikipedia passages. The passages are taken from the initial paragraphs from English Wikipedia articles (indexed in October 2017). Conversational QA We use TopiOCQA (Adlakha et al., 2022) for this task, a dataset for open-domain information-seeking dialogue. At each turn of the conversation, an agent responds to a user's questions based on knowledge from Wikipedia. Each turn has an associated 200-token gold passage from English Wikipedia (indexed in Oct. 2020).
2307.16877#10
2307.16877#12
2307.16877
[ "2201.08239" ]
2307.16877#12
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
# Instruction-following Models To evaluate retrieval-augmented instruction- following language models, we present the models with an instruction, followed by the retrieved passages and the query. The prompt template for open-domain QA and multi-hop QA tasks is given in Figure 2, whereas conversational QA differs slightly, replacing the question with conversation history (Figure 3). We consider four instruction- following models that primarily differ based on the type of training data used. We use the same generation parameters for all instruction-following models, described in Appendix A.1.
2307.16877#11
2307.16877#13
2307.16877
[ "2201.08239" ]
2307.16877#13
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Dataset | # Questions | # Passages
Natural Questions | 3,610 | 21,015,324
HotpotQA | 7,405 | 5,233,329
TopiOCQA | 2,514 | 25,700,593
Table 1: Statistics for datasets used in this work. We use the validation split from each dataset for our evaluation as the test sets are hidden.
Please answer the following question given the following passages:
- title: {Passage title} {Passage text}
- title: {Passage title} {Passage text}
...
Question: {Question}
Answer:
Figure 2: The prompt template used for open-domain QA and multi-hop QA tasks.
Please answer the following question given the following passages and the conversation history:
- title: {Passage title} {Passage text}
- title: {Passage title} {Passage text}
...
User: {Question 1}
Agent: {Answer 1}
...
User: {Question k}
Agent:
Figure 3: Prompt template for conversational QA task.
Flan-T5 We use the 11B parameter version of T5 (Raffel et al., 2020), which has been trained by Chung et al. (2022) using the instruction fine-tuning methods proposed by Wei et al. (2022). Flan-T5 is trained on multiple publicly-available instruction-following datasets (Sanh et al., 2022; Wang et al., 2022b; Wei et al., 2022). Together, these datasets encompass more than 1800 tasks, of which over 200 are QA tasks. Out of the three datasets on which we evaluate, the training splits of NQ and HotpotQA are included in Flan-T5's training regime. GPT-3.5 We use the turbo version of GPT-3.5 [1], which is described [2] as a sibling to the InstructGPT model (Ouyang et al., 2022a). The model's training incorporates user data submitted to the OpenAI API as well as expert annotations; however, the exact distribution of training tasks and datasets is not publicly available. [1] openai.com/blog/introducing-chatgpt-and-whisper-apis [2] openai.com/blog/chatgpt
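The Figure 2 template is straightforward to assemble programmatically. The sketch below assumes retrieved passages arrive as dictionaries with "title" and "text" fields; that structure is an illustrative assumption, not the paper's actual data format.

```python
def build_qa_prompt(question, passages):
    """Assemble the open-domain / multi-hop QA prompt from Figure 2."""
    lines = ["Please answer the following question given the following passages:"]
    for passage in passages:
        lines.append(f"- title: {passage['title']} {passage['text']}")
    lines.append(f"Question: {question}")
    lines.append("Answer:")
    return "\n".join(lines)

print(build_qa_prompt(
    "Where are One Direction from?",
    [{"title": "One Direction",
      "text": "One Direction are an English-Irish pop boy band formed in London."}],
))
```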
2307.16877#12
2307.16877#14
2307.16877
[ "2201.08239" ]
2307.16877#14
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Alpaca We use the 7B variant of Alpaca (Taori et al., 2023), a fine-tuned version of LLaMA (Tou- vron et al., 2023a) trained on demonstrations gener- ated using GPT-3 (Brown et al., 2020). The demon- strations were collected using the self-instruct framework (Wang et al., 2022a). Llama-2 We use the 7B chat version of Llama-2 (Touvron et al., 2023b). The model is initially boot- strapped on similar instruction-following dataset as Flan-T5, followed by fine-tuning for dialogue-style instructions. Fine-tuned Generators To compare against instruction-following models, we select FiD (Izac- ard and Grave, 2021) as our fine-tuned baseline for all three tasks. This encoder-decoder model separately encodes each retrieved passage with the query, resulting in a set of vectors. The decoder then autoregressively generates the answer by at- tending to the input passages and the previously generated tokens. For NQ and TopiOCQA, we use the publicly available FiD checkpoints, while for HotpotQA, we fine-tune our own variant using the default hyperparameters. # 4 Correctness w.r.t Information Need In this section, we investigate if retrieval- augmented instruction-following models can pro- duce responses that satisfy user information needs. We first describe our experimental setup by pro- viding details of the retriever used in each task (§4.1) and the metrics used for evaluating model re- sponses (§4.2). Next, we describe our human eval- uation setup and present the results from our anal- ysis (§4.3). Finally, equipped with a better under- standing of evaluation metrics, we conduct large- scale evaluation of instruction-following models and present the results (§4.4). # 4.1 Retrieval For each task, we use a task-specific variant of DPR (Dense Passage Retrieval; Karpukhin et al. 2020) as the retriever. The general architecture of DPR consists of a question and a passage encoder. The dot product between the dense vector represen- tations of the passage and the query is used as a ranking function.
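The DPR ranking function described above reduces to a dot product between dense vectors. The sketch below shows only that scoring step; the random "encoder" is a placeholder for the trained BERT-based question and passage encoders, so the scores themselves are meaningless here.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 768  # dimensionality of DPR embeddings

def encode(texts):
    """Placeholder encoder returning one dense vector per text."""
    return rng.normal(size=(len(texts), dim))

passages = [
    "One Direction are an English-Irish pop boy band formed in London.",
    "The X Factor is a British television music competition.",
]
question_vec = encode(["Where are One Direction from?"])[0]
passage_vecs = encode(passages)

scores = passage_vecs @ question_vec   # dot-product ranking function
ranking = np.argsort(-scores)          # highest score first
for rank, idx in enumerate(ranking, start=1):
    print(rank, round(float(scores[idx]), 3), passages[idx])
```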
2307.16877#13
2307.16877#15
2307.16877
[ "2201.08239" ]
2307.16877#15
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
For NQ, we adopt a pre-trained checkpoint from Karpukhin et al. (2020). This checkpoint was trained on four QA datasets: NQ, TriviaQA (Joshi et al., 2017), WebQuestions (Berant et al., 2013), and CuratedTREC (Baudis and Sedivý, 2015). For HotpotQA, we utilize a multi-hop variant of DPR proposed by Xiong et al. (2021). This version retrieves reasoning chains iteratively, selecting subsequent passages based on the query and previously retrieved passages. For TopiOCQA, we utilize the checkpoint provided by Adlakha et al. (2022). This variant of DPR is uniquely suited for conversational QA tasks as it encodes the conversation history in the question encoder. In all of the tasks, the retriever selects passages from the associated Wikipedia dump, as detailed in Section 3.1. The number of retrieved passages provided to instruction-following models and fine-tuned models for each task is provided in Appendix A.2.
# 4.2 Evaluation Metrics
Evaluation in QA usually involves comparing model responses to human-annotated gold answers. The metrics used for this comparison can be divided into two categories: Lexical Match These metrics score a model response based on its token overlap with the gold standard answer. While some metrics perform bag-of-words matching (e.g., Exact Match (EM), F1), others consider the order of the tokens by n-gram matching, such as METEOR (Banerjee and Lavie, 2005) and ROUGE (Lin, 2004). In this work, we also consider Recall, the proportion of tokens in the reference answer that are present in the model response. Recall does not penalize verbose model responses, as long as the response contains the reference answer tokens. Recent works that have evaluated the verbose responses generated by instruction-following models (Liu et al., 2023a; Mallen et al., 2022) have used a similar metric, accuracy, whereby a model's response is considered correct if any reference answer appears as a substring within the model's response.
2307.16877#14
2307.16877#16
2307.16877
[ "2201.08239" ]
2307.16877#16
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
This is a stricter version of recall that cannot handle small variations between the reference answer and the model response, such as if the reference answer is John Kennedy and the model response is John F Kennedy. To avoid any confusion, we refer to this metric as Recall (S), indicating it as a stricter version of token-level recall. Semantic Similarity Unlike the previous class of metrics that face strictness issues (Kamalloo et al., 2023), semantic similarity-based metrics typically leverage a trained model to predict if the model response is semantically equivalent to the gold answer. BERTScore (Zhang et al., 2020), which we refer to as BertS, is a commonly used metric for text generation that computes precision, recall, and F1 based on token similarity between the model response and the reference gold answer using contextual BERT embeddings. Furthermore, BEM (BERT matching; Bulian et al. 2022) employs a trained BERT model to evaluate question-answering models by predicting the semantic equivalence based on the question, reference gold answer, and model response. We extend BEM to the conversational QA task by providing the question from the last turn of the conversation as input. Moreover, we also consider an evaluation metric based on prompting LLMs (referred to as GPT3.5-Eval and GPT4-Eval) to act as evaluation agents. In principle, the setup is similar to the one proposed by Kamalloo et al. (2023), however, with a different prompt, as described in Appendix B (Figure 7). Specifically, we prompt these models to act as evaluators by providing a natural language instruction along with the question (or conversation history), reference gold answer, and model response.
# 4.3 Human Evaluation
We conduct a human evaluation on a subset of responses generated by three instruction-following models (GPT-3.5, Flan-T5, and Alpaca) to establish a basis for comparing evaluation metrics. Specifically, we focus on cases where the retrieved passages provided to the model include the gold passage. Therefore, any inaccuracies in the response can be attributed to the model's failures, rather than inaccurate retrieval. For every task, we collect annotations for 100 samples.
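The John Kennedy example above is exactly where token-level Recall and the stricter Recall (S) diverge, and K-Precision from the introduction is the same kind of ratio computed against the knowledge. A minimal sketch, assuming lowercase whitespace tokenization with punctuation stripped (the released evaluation code may normalize differently):

```python
import string

def _tokens(text):
    """Lowercase, drop punctuation, split on whitespace (assumed normalization)."""
    return text.lower().translate(str.maketrans("", "", string.punctuation)).split()

def recall(reference, response):
    """Token-level Recall: fraction of reference tokens found in the response."""
    ref, resp = _tokens(reference), set(_tokens(response))
    return sum(tok in resp for tok in ref) / max(len(ref), 1)

def recall_strict(references, response):
    """Recall (S): 1.0 only if some full reference answer is a substring of the response."""
    return float(any(ref.lower() in response.lower() for ref in references))

def k_precision(knowledge, response):
    """K-Precision (faithfulness): fraction of response tokens found in the knowledge."""
    resp, know = _tokens(response), set(_tokens(knowledge))
    return sum(tok in know for tok in resp) / max(len(resp), 1)

response = "The president at the time was John F Kennedy"
print(recall("John Kennedy", response))            # 1.0: both reference tokens appear
print(recall_strict(["John Kennedy"], response))   # 0.0: the exact string does not appear
print(k_precision("John F Kennedy was the 35th president of the United States", response))
```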
2307.16877#15
2307.16877#17
2307.16877
[ "2201.08239" ]
2307.16877#17
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
In our evaluation setup, the annotator is presented with the question or conversation history, the reference answer, and the anonymized model response. The annotator's task is to assess if the model response is correct, i.e. it satisfies the information need underlying the question. For each of the 100 samples, we collect annotations for three instruction-following models, resulting in 900 labeling tasks. Each task is completed by two different annotators (authors of the paper). The inter-annotator agreement achieved was 92.7%. In instances where the annotators disagreed, a third annotation is collected and a majority vote is taken.
[Figure 4: Failure cases of F1 metric. More Elaborate Answers is the most common failure sub-category, followed by Open-ended Questions.]
The results of this human evaluation are presented in Table 8 (Appendix D), along with scores of automated metrics on this subset. Traditional QA evaluation metrics like EM and F1 tend to score model responses much lower than human assessments, highlighting the well-known problem of strictness in lexical matching (Min et al., 2021; Kamalloo et al., 2023). Qualitative Analysis of Failure Cases For a more granular understanding of the shortcomings of traditional QA metrics, we analyze the models' responses that have less than or equal to 0.3 F1 score, but were deemed correct according to the human evaluations. This resulted in 296 samples out of 900. Our classification of errors is adapted from Kamalloo et al. (2023) (which itself was based on Min et al. 2021), modified to focus on instruction-following models. Specifically, we exclude some error classes relevant to fine-tuned models and include some classes for instruction-following models. The resultant categories are:
2307.16877#16
2307.16877#18
2307.16877
[ "2201.08239" ]
2307.16877#18
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
• Semantic Equivalence: Here, the model response is semantically similar to the reference answer. Sub-categories include Multinominal entities, e.g., John Kennedy and John F Kennedy, Synonymous Answers, e.g., from India and Indian nationality, and More Elaborate Answers, e.g., yes and yes, he is member of the band.
• Symbolic Equivalence: This primarily refers to different possible representations of numeric quantities, e.g. four seasons and 4 seasons, or 3000 BC and Early Dynastic Period.
• Intrinsic Ambiguity in Questions: This refers to queries with multiple valid interpretations, leading to a range of correct answers, e.g.
2307.16877#17
2307.16877#19
2307.16877
[ "2201.08239" ]
2307.16877#19
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Who is command sergeant major of the army? could be seeking the person's identity or a description of the position itself. This category also includes cases where the correct answer is dependent on the specific point in time being referenced, e.g. Who won NFL football coach of the year?.
• Granularity Discrepancies: The level of specificity in the model's response may not align with that in the reference answer. This discrepancy in granularity can be Temporal, e.g., August 25, 1939 and 1939, or Spatial, e.g., for the question Where's the tv show The Crossing filmed?, Vancouver and British Columbia, Canada are both correct answers.
2307.16877#18
2307.16877#20
2307.16877
[ "2201.08239" ]
2307.16877#20
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
â ¢ Incomplete Reference Answers: These cases occur when the reference answers, despite their number, fail to cover the entire spectrum of cor- rect responses. We break this category into two types â List of named entities which includes questions like the cast of a movie or members of the band, and Open-ended questions which included questions that can be answered in mul- tiple different ways, all of which are not cap- tured by reference answers., e.g., What was the Watergate scandal?. â ¢ Enumeration of Reference Answers: This error happens especially in NQ samples, where the question asks for a list of entities (e.g., all states of a country), but each reference answer includes only one entity (e.g., a single state). The instruction-following models often gen- erate all the entities in their response, which results in low overlap with each individual ref- erence answer.
2307.16877#19
2307.16877#21
2307.16877
[ "2201.08239" ]
2307.16877#21
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
â ¢ Satisfactory Subset Responses: This cate- gory pertains to instances where the modelâ s response, although containing less information than the reference answer, still provides an ac- ceptable response to the userâ s query. For in- stance, for the question "name some of her Error Type: More Elaborate Answers Dataset: HotpotQA Question: At what theater is the composer and lyricist for the musical Big Fish a residential artist? Ref. Answer: Ars Nova Theater Response (Alpaca): The composer and lyricist for the musical Big Fish, Andrew Lippa, is a residential artist at the Ars Nova Theater in New York City. Scores: F1: 26.0, Recall: 100.0, METEOR: 54.5, Rouge-L: 22.2, BertS (F1): 85.4, BEM: 98.9, GPT4-Eval: 100.0 Error Type: Open-ended Questions Dataset: TopiOCQA
2307.16877#20
2307.16877#22
2307.16877
[ "2201.08239" ]
2307.16877#22
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Question: what was the watergate scandal? Ref. Answer: It was an array of clandestine and often illegal activities undertaken by members of the Nixon administration. Response (Flan-T5): The Watergate scandal was a political scandal in the United States involving the administration of U.S. President Richard Nixon from 1972 to 1974 that led to Nixonâ s resignation. Scores: F1: 21.8, Recall: 26.7, METEOR: 15.6, Rouge-L: 15.4, BertS (F1): 86.1, BEM: 73.9, GPT4-Eval: 100.0 Error Type: Enumeration of reference answers Dataset: NQ Question: all of the states in the northeast region Ref. Answers: 1. New Hampshire, 2. Maine, 3. Rhode Island, 4. Pennsylvania, 5. Vermont, 6. New York, 7. Connecticut, 8. New Jersey, 9. Massachusetts Response (GPT-3.5): The states in the northeast region in- clude Maine, New York, New Jersey, Vermont, Massachusetts, Rhode Island, Connecticut, New Hampshire, and Pennsylva- nia. Scores: F1: 20.0, Recall: 100.0, METEOR: 39.0, Rouge-L: 17.4, BertS (F1): 82.7, BEM: 98.9, GPT4-Eval: 100.0 Figure 5: Qualitative examples cases where F1 fails, along with scores from other evaluation metrics. songs", the reference answer might list 5-6 song names, while the model response includes only 1-2. This situation is predominantly ob- served in the TopiOCQA dataset. Figure 4 displays the distribution of error cases based on our classification. A significant portion of the errors (55.63%) fall under the More Elaborate Answers category. This suggests that traditional QA metrics often penalize models unjustly due to the verbose nature of their responses. The next most common sub-category, Open-ended Ques- tions (13.99%), suggests that models are occasion- ally penalized for providing correct answers that were not included in the reference responses. The percentage share and exact count of all categories
2307.16877#21
2307.16877#23
2307.16877
[ "2201.08239" ]
2307.16877#23
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Metric Spearman Ï Kendall Ï EM F1 Precision Recall Recall (S) METEOR Rouge-L 26.466 49.323 44.846 62.72 54.072 51.57 46.993 26.168 41.096 37.772 57.142 53.464 41.886 39.001 BertS (F1) BertS (Precision) BertS (Recall) BEM GPT3.5-Eval GPT4-Eval 36.862 24.379 42.886 53.649 63.514 70.152 29.691 19.519 34.58 43.727 62.801 69.363 Table 2: Correlation of several lexical matching and semantic similarity evaluation metrics with human judg- ments for correctness w.r.t information need. GPT4- Eval achieves the highest correlation overall. Recall is the highest correlated among all lexical overlap metrics. # are reported in Table 7 (Appendix C). In Figure 5, we provide qualitative examples of common failure modes, along with their asso- ciated evaluation metrics scores. Recall appears to be an effective fix for sub-categories such as More Elaborate Answers and Enumeration of Refer- ence Answers. However, both lexical match based and semantic similarity based metrics struggle with Open-ended Questions. Although GPT4-Eval ap- pears to be relatively robust based on examples in Figure 5, this metric has some failures, with most common failure sub-category being Open-ended Questions. The complete distribution of failure cases according to sub-categories is reported in Figure 10, along with qualitative examples in Fig- ure 11 (Appendix C). Overall, the results of our human evaluation and analysis indicate that traditional metrics such as EM and F1, typically used in the literature for fine- tuned QA models, are not well-aligned with the verbose nature of instruction-following models. To determine more suitable metrics for these models, we analyze the correlation of each metric with hu- man assessments. Correlation Between Automatic Metrics and Hu- man Judgement Table 2 presents the correlation between different metrics with human judgments. Apart from metrics detailed in Section 4.2, we in- clude token-level precision, as well as precision and recall as computed using BERTScore. We re- port Spearmanâ
2307.16877#22
2307.16877#24
2307.16877
[ "2201.08239" ]
2307.16877#24
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
s Ï and Kendallâ s Ï correlation. Notably, GPT4-eval has the highest agreement with human judgments, with 70.15 Spearman cor- EM F1 Recall METEOR Rouge-L BertS (F1) BEM Dataset Model NQ FiD GPT-3.5 Flan-T5 Alpaca Llama-2 46.57 1.27 41.16 8.78 0.61 53.93 15.12 50.62 20.3 11.85 54.45 58.56 54.03 46.23 52.37 42.94 25.68 40.80 23.17 21.16 54.33 14.57 51.02 20.61 11.38 92.57 83.08 91.77 84.67 82.58 58.81 69.45 58.74 55.97 62.30 HotpotQA FiD GPT-3.5 Flan-T5 Alpaca Llama-2 48.43 5.63 58.12 16.25 1.39 60.16 22.16 71.14 33.54 15.91 60.55 66.77 71.28 56.76 67.55 46.03 31.56 53.44 33.23 27.48 60.18 21.67 71.16 33.5 15.23 93.02 84.16 94.37 86.88 83.08 67.94 78.16 76.19 67.74 78.05 TopiOCQA FiD GPT-3.5 Flan-T5 Alpaca Llama-2 36.48 2.63 18.34 5.85 0.32 58.52 36.07 43.17 28.61 25.16 61.64 66.72 52.54 41.3 55.3 52.46 47.35 42.42 31.08 35.16 58.26 33.81 42.88 27.75 23.42 92.37 88.14 89.42 87.07 86.06 66.55 69.34 56.57 46.41 56.33
Surprisingly, BERTScore fares worse than token-overlap F1, even when only considering the recall component of the metric. We hypothesize that the underlying issue is the poor quality of BERT token embeddings in short strings (Bommasani et al., 2020), a common characteristic of reference answers in QA datasets. For example, for the reference answer "yes, that is correct", the model response "yes" receives a BERTScore of 0.806, while "no" receives a slightly higher score of 0.815. Although BEM performs better than F1, it still falls short of token-overlap recall. Given that BEM's training data includes model responses of QA systems trained on SQuAD (Rajpurkar et al., 2016), it probably does not generalize well to the more verbose responses of instruction-following models.

Although LLM-based evaluators such as GPT4-Eval and GPT3.5-Eval have the highest correlation with human judgements on the selected subset of responses, they also have certain limitations. Accessing these proprietary models incurs substantial API costs, which renders them impractical for automatic evaluation on large-scale datasets. Moreover, the reliability of LLMs as evaluators is still unclear, as recent studies have shown that they may exhibit systematic bias (Wang et al., 2023) and can be sensitive to input instructions (Bowman, 2023). Given these considerations, we rely on Recall to compare model performance.
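To make the token-overlap metrics concrete, the following is a minimal sketch of token-level Recall (and F1); it assumes SQuAD-style answer normalization (lowercasing, stripping punctuation and articles), which is a common convention rather than a detail confirmed by this paper.

```python
# Sketch: token-overlap Recall and F1 between a model response and a reference answer.
# Normalization follows the common SQuAD-style convention (an assumption on our part).
import re
import string
from collections import Counter

def normalize(text):
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return text.split()

def token_recall(response, reference):
    ref_tokens = normalize(reference)
    overlap = Counter(normalize(response)) & Counter(ref_tokens)
    return sum(overlap.values()) / max(len(ref_tokens), 1)

def token_f1(response, reference):
    res_tokens, ref_tokens = normalize(response), normalize(reference)
    overlap = sum((Counter(res_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(res_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# A verbose but correct response keeps perfect Recall while F1 drops:
print(token_f1("It premiered on 23 June 1996 on BBC One.", "23 June 1996"))      # 0.5
print(token_recall("It premiered on 23 June 1996 on BBC One.", "23 June 1996"))  # 1.0
```

Because the denominator of Recall is the reference length, verbose but correct responses are not penalized, which is the behavior rewarded in Table 2.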
# 4.4 Automatic Correctness Evaluation

The performance of both instruction-following and fine-tuned models in a retrieval-augmented generation setup across multiple datasets is reported in Table 3, using several lexical matching and semantic similarity metrics. Unsurprisingly, traditional QA metrics like EM and F1 assign much lower scores to instruction-following models compared to fine-tuned FiD. The only exception is Flan-T5, which outperforms FiD with a 17.72% gap. However, it should be noted that Flan-T5 is trained on a wide range of QA tasks, including NQ and HotpotQA (Section 3.2).

| Dataset | Model | EM | F1 | Recall | METEOR | Rouge-L | BertS (F1) | BEM |
|---|---|---|---|---|---|---|---|---|
| NQ | FiD | 46.57 | 53.93 | 54.45 | 42.94 | 54.33 | 92.57 | 58.81 |
| NQ | GPT-3.5 | 1.27 | 15.12 | 58.56 | 25.68 | 14.57 | 83.08 | 69.45 |
| NQ | Flan-T5 | 41.16 | 50.62 | 54.03 | 40.80 | 51.02 | 91.77 | 58.74 |
| NQ | Alpaca | 8.78 | 20.3 | 46.23 | 23.17 | 20.61 | 84.67 | 55.97 |
| NQ | Llama-2 | 0.61 | 11.85 | 52.37 | 21.16 | 11.38 | 82.58 | 62.30 |
| HotpotQA | FiD | 48.43 | 60.16 | 60.55 | 46.03 | 60.18 | 93.02 | 67.94 |
| HotpotQA | GPT-3.5 | 5.63 | 22.16 | 66.77 | 31.56 | 21.67 | 84.16 | 78.16 |
| HotpotQA | Flan-T5 | 58.12 | 71.14 | 71.28 | 53.44 | 71.16 | 94.37 | 76.19 |
| HotpotQA | Alpaca | 16.25 | 33.54 | 56.76 | 33.23 | 33.5 | 86.88 | 67.74 |
| HotpotQA | Llama-2 | 1.39 | 15.91 | 67.55 | 27.48 | 15.23 | 83.08 | 78.05 |
| TopiOCQA | FiD | 36.48 | 58.52 | 61.64 | 52.46 | 58.26 | 92.37 | 66.55 |
| TopiOCQA | GPT-3.5 | 2.63 | 36.07 | 66.72 | 47.35 | 33.81 | 88.14 | 69.34 |
| TopiOCQA | Flan-T5 | 18.34 | 43.17 | 52.54 | 42.42 | 42.88 | 89.42 | 56.57 |
| TopiOCQA | Alpaca | 5.85 | 28.61 | 41.3 | 31.08 | 27.75 | 87.07 | 46.41 |
| TopiOCQA | Llama-2 | 0.32 | 25.16 | 55.3 | 35.16 | 23.42 | 86.06 | 56.33 |

Table 3: Performance of retrieval-augmented instruction-following models on three diverse information-seeking QA tasks. Among the metrics reported, Recall is most correlated with human judgements. Based on recall, instruction-following models outperform fine-tuned FiD on all three tasks.

Based on our findings in Section 4.3, we consider Recall to get a true estimate of model
performance. Using recall, the performance gap between instruction-following and fine-tuned models narrows significantly, with some instruction-following models even outperforming FiD. Notably, GPT-3.5 outperforms the fine-tuned FiD across all three QA tasks: a 7.55% gap in NQ, 10.27% in HotpotQA, and 8.24% in TopiOCQA. These results suggest that in retrieval-augmented settings, instruction-following models are equally, or even more, capable than fine-tuned generators at generating correct responses w.r.t user information needs.

# 5 Faithfulness w.r.t Provided Knowledge

As previously noted, instruction-following models often produce verbose responses. Consequently, responses from these models often contain supplementary information which can be hallucinated (Rashkin et al., 2021a; Dziri et al., 2022b; Chiesurin et al., 2023). In this section, we conduct an analysis of the faithfulness of instruction-following models w.r.t knowledge provided as part of the input. We posit that an optimal generator's response should rely solely on the knowledge relevant to the user information need. Based on this hypothesis, we split our analysis into two parts: 1) faithfulness w.r.t relevant knowledge, where we prompt the instruction-following model with the user question paired with the corresponding gold passage and evaluate the groundedness of the response in the provided knowledge, and 2) faithfulness w.r.t irrelevant knowledge, where we provide a related but irrelevant passage and measure how often the model refuses to answer.

In this section, we first describe the automatic faithfulness metrics (§5.1). Next, similar to correctness, we conduct a human evaluation and compute correlations for all metrics, followed by a large-scale evaluation of faithfulness w.r.t relevant knowledge (§5.2). Finally, we analyze the capabilities of models to refrain from answering in the presence of irrelevant knowledge (§5.3).

# 5.1 Faithfulness Metrics

Here we describe the metrics that we use for automatic evaluation in Section 5.2.
Given the user question or the conversation history (denoted by H), the gold passage K, and the model response u, the goal is to check if u is grounded in K. We consider both faithfulness and groundedness metrics from the literature for this task.

K-F1: Knowledge-F1 (denoted K-F1) is a lexical overlap metric that checks for F1 overlap between the tokens of u and K. Although it has been widely used for knowledge-grounded dialogue (Shuster et al., 2021; Dziri et al., 2022a), we argue it is unsuitable for assessing groundedness in information-seeking tasks. In information-seeking, model responses tend to be shorter than the knowledge snippet. Hence, even if the model selects precise information from the knowledge, K-F1 penalizes it for not utilizing the entire knowledge snippet.

K-Precision: To counter the shortcomings of K-F1, we propose K-Precision, the proportion of tokens in the model response u that are present in K. The intuition behind this is that in information-seeking, grounding u in K is inherently an asymmetric task, i.e., u can be a subset of K but K cannot be a subset of u.

K-BertS: Following Shuster et al. (2021) and Dziri et al. (2022a), we use BERTScore to measure semantic similarity between K and u based on contextual BERT token embeddings. We refer to this as K-BertS to differentiate it from BertS (Section 4).

FaithCritic: We use the hallucination critic model by Dziri et al. (2023) to evaluate whether a response entails a given passage.3 It outputs a score between 0 and 1 indicating how likely a given response is hallucinated. Here, lower scores indicate less hallucination within a model's responses, and hence more groundedness.

3 RoBERTa-Large checkpoint: huggingface.co/McGill-NLP/roberta-large-faithcritic

Q2: Q2 (Honovich et al., 2021) is an evaluation metric used to quantify factual consistency between responses and provided passages using automatic question generation, question answering, and natural language inference (NLI) models.

LLMCritic: Similar to correctness, we investigate prompting LLMs to act as evaluators of groundedness. More specifically, we prompt GPT-3.5 and GPT-4 to annotate whether a given response uses only the knowledge present in the provided passage. The actual prompt is provided in Appendix B (Figure 8).
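A minimal sketch of K-F1 and K-Precision over lowercased whitespace tokens is given below; the tokenization is our simplification, not necessarily the paper's exact implementation.

```python
# Sketch: K-F1 and K-Precision between a response u and a knowledge passage K,
# using simple lowercased whitespace tokens (a simplification).
from collections import Counter

def _tokens(text):
    return text.lower().split()

def k_precision(response, knowledge):
    # Fraction of response tokens that also appear in the knowledge passage.
    res, know = Counter(_tokens(response)), Counter(_tokens(knowledge))
    overlap = sum((res & know).values())
    return overlap / max(sum(res.values()), 1)

def k_f1(response, knowledge):
    res, know = Counter(_tokens(response)), Counter(_tokens(knowledge))
    overlap = sum((res & know).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(res.values())
    recall = overlap / sum(know.values())
    return 2 * precision * recall / (precision + recall)

# A short, fully grounded answer scores high K-Precision but low K-F1,
# because it does not cover the whole knowledge snippet.
knowledge = "dragonflies can travel at 100 body-lengths per second in forward flight"
print(k_precision("100 body-lengths per second", knowledge))  # 1.0
print(k_f1("100 body-lengths per second", knowledge))         # much lower
```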
# 5.2 Faithfulness w.r.t Relevant Knowledge

In this section, we investigate the faithfulness of model responses when they are provided a passage relevant to the user query. We first conduct human
We allow anno- tators to provide two labels â 1) to determine if the provided passage is actually a relevant passage to the userâ s query, and 2) to determine if the model re- sponse is â completely,â â partially,â or â notâ found in the presented passages. The model response is given a score of 1.0 if the label is â completely,â 0.5 for â partiallyâ and 0 for â not.â
We collect two annotations for each example and resolve all con- flicting annotations by collecting a third annotation and taking the majority vote. We randomly sample 50 examples from Natural Questions, HotpotQA, and TopiOCQA for evalu- ation. We first filter out annotations for which the passage is not relevant to the query. This resulted in 39 samples for NQ, 47 for HotpotQA, and 49 for TopiOCQA. The high number of non-relevant for NQ is probably due to heuristic matching of gold Dataset: HotpotQA Question: Which 2008 American documentary film, Pond Hockey or I.O.U.S.A., delves into fiscal issues? Knowledge #1: Pond Hockey (film) - [...] The film is an examination of the changing culture of pond hockey. Knowledge #2: I.O.U.S.A. - I.O.U.S.A. is a 2008 American documentary film directed by Patrick Creadon. The film fo- cuses on the shape and impact of the United States national debt [...] and was known as the "Fiscal Wake-Up Tour." Ref.
Answer: I.O.U.S.A. Response (Alpaca): Pond Hockey delves into fiscal issues. I.O.U.S.A. focuses on the shape and impact of the United States national debt. Scores: K-F1: 15.91, K-Precision: 82.35, K-BertS (F1): 84.64, Q2 (F1): 20.0, Q2 (NLI): 20.0, FaithCritic: 0.53, LLMCritic(GPT-3.5): 0.0, LLMCritic(GPT-3.5): 0.0 Dataset: NQ Question:
When did they replace lead with graphite in pen- cils? Knowledge: Pencil - [...] many people have the misconception that the graphite in the pencil is lead, [...] even though it never contained the element lead. [...] Ref. Answer: never contained the element lead Response (Flan-T5): 1835 Scores: K-F1: 0.0, K-Precision: 0.0, K-BertS (F1): 76.22, Q2 (F1): 0.0, Q2 (NLI): 0.0, FaithCritic: 99.96, LLMCritic(GPT- 3.5): 100.0, LLMCritic(GPT-3.5): 0.0 Dataset: TopiOCQA Question:
What are dragonfly flight speeds? Knowledge: Dragonfly (Flight speed) - [...] a maximum speed of 36â 54 km/h (22â 34 mph) with average cruising speed of about 16 km/h (9.9 mph). Dragonflies can travel at 100 body- lengths per second in forward flight, and three lengths per second backwards Ref. Answer: It can travel at 100 body-lengths per second in forward flight, and three lengths per second backwards. Response (GPT-3.5): Dragonflies like the hawkers have a maximum speed of around 30 miles per hour with an average cruising speed of about 10-15 miles per hour. They can also travel at 100 body-lengths per second in forward flight
Scores: K-F1: 54.54, K-Precision: 72.97, K-BertS (F1): 89.48, Q2 (F1): 40.0, Q2 (NLI): 40.0, FaithCritic: 99.55, LLMCritic(GPT-3.5): 100.0, LLMCritic(GPT-3.5): 100.0 Figure 6: Examples of non-faithful responses along- side relevant metric scores. Text in purple indicates hallucination, while teal responses are grounded to teal provided knowledge. passage to the question.
We consider three models â GPT-3.5, Flan-T5, and Alpaca, resulting in 405 samples. We compute scores from all evaluation metrics on this subset, including LLMCritic (for both GPT-3.5 and GPT-4). These are presented in Table 9 (Appendix D). In Table 4, we present correlations between dif- ferent automatic groundedness metrics and human evaluation. We find that LLMCritic based on GPT- 4 correlates the most with human evaluation.
K- Dataset Model K-F1 â K-Precision â K-BertS (F1) â Q2 (F1) â Q2 (NLI) â FaithCritic â NQ GPT-3.5 Flan-T5 Alpaca Llama-2 19.66 5.84 13.29 20.42 65.78 94.04 70.44 70.9 85.34 80.9 83.40 84.94 38.17 36.54 30.18 â 43.07 38.27 33.46 â 19.37 82.42 69.92 32.37 HotpotQA GPT-3.5 Flan-T5 Alpaca Llama-2 16.61 3.26 9.55 17.7 81.19 92.12 87.03 76.9 84.18 78.57 82.68 83.65 49.32 36.03 43.51 â 56.07 37.97 49.05 â 38.95 64.31 50.32 38.53 TopiOCQA GPT-3.5 Flan-T5 Alpaca Llama-2 26.82 23.74 19.26 24.75 71.96 86.37 66.91 64.64 87.01 86.42 84.96 86.19 54.74 61.30 40.21 45.00 60.44 64.75 44.83 50.72 30.71 44.89 58.28 42.55 Table 5: Results for faithfulness w.r.t relevant knowledge. We report both token-based and model-based metrics. For all metrics except FaithCritic, higher scores indicate greater response groundedness. Precision, the token-overlap based metrics that is invariant to the length of the knowledge snippet in a close second, better than other model-based faithfulness metrics like K-BertS, FaithCritic, and Q2. This indicates that models trained to detect hallucinations in knowledge-grounded dialogues do not generalize well to information-seeking QA tasks.
We present some examples of model hallu- cinations in Figure 6, along with associated scores of evaluation metrics. Automatic Evaluation In Table 5, we present the results for faithfulness w.r.t relevant knowledge on NQ, HotpotQA, and TopiOCQA. Taditional faith- fulness metrics such as K-F1, K-BertS, and Faith- Critic, rank either Llama-2 or GPT-3.5 as the most faithful model for all the three tasks. On the other hand, K-Precision, the metric most correlated with human judgments, denotes a com- pletely different trend. GPT-3.5 is the least faithful for NQ, while Llama-2 is least faithful for Hot- potQA and TopiOCQA. K-Precision ranks Flan-T5 as the most faithful instruction-following model for all three tasks. We hypothesize that K-F1 faces a similar issue as F1 in correctness evaluation â there is a length mismatch between the model response and the provided knowledge snippet. Our prelimi- nary examination of model responses reveals that Flan-T5 responses are generally short, which is probably why K-F1 assigns it a low score. These findings further highlight that verbose re- sponses from instruction-following models are of- ten not grounded in provided passages. For exam- ple, in Figure 6, GPT-3.5 hallucinates by outputting numbers that are completely different from what was provided, whereas Alpaca fails to reason prop- erly based on provided passages. # 5.3 Faithfulness w.r.t Irrelevant Knowledge In the retrieval-augmented setting, an ideal model should comprehend passage contents and avoid an- swering if the passage lacks relevant information. To test this, we provide the models with an irrele- vant passage by selecting the 1001 ranked passage from the list of retrieved passages. Prompt Setup Our preliminary experiments demonstrated that without an explicit instruction, Flan-T5 and Alpaca did not refrain from answering at all. Hence, we modified the prompt to make this behavior more explicit and instructed the model to output I donâ t know if the passage is deemed irrel- evant, as demonstrated in Figure 9 (Appendix B). We report the proportion of model responses that contain I donâ
t know and other observed synony- mous expressions.4 Note that for these experiments, we only investigate whether a model refused to an- swer. We do not verify the correctness of any gen- erated responses. Moreover, to measure the impact of this new instruction, we also experiment with providing the gold passage and report the propor- tion of model responses that do not contain I donâ t know and other synonymous expressions. Results We present our results in Table 6. We find that when provided with an irrelevant passage, Llama-2 most often refuses to answer on open- domain and multi-hop QA datasets (more than 99% in NQ and HotpotQA). GPT-3.5 performs the best for TopiOCQA, refraining to answer on 88.15% turns. However, for both of these models, the incli- nation to not answer also extends to when the gold passage is actually present. In comparison, Flan- T5 is well balanced on datasets it was exposed to 4â UNANSWERABLEâ , â ..passages do not contain..â Incorrect Psg. â Gold Psg. â Dataset Model NQ GPT-3.5 Flan-T5 Alpaca Llama-2 98.5 91.99 0.06 99.34 48.01 24.76 0.00 75.84 HotpotQA GPT-3.5 Flan-T5 Alpaca Llama-2 98.54 77.14 0.09 99.16 26.39 1.58 0.11 76.96 TopiOCQA GPT-3.5 Flan-T5 Alpaca Llama-2 88.15 40.77 1.27 87.59 32.42 7.68 0.80 61.77 Table 6: Percentage of model responses that contain I donâ t know and other synonymous expressions when provided with an incorrect passage (higher is better) or the gold passage (lower is better). during training, however, it remains overconfident on TopiOCQA, which was not included in the train- ing. Alpaca adheres the least to the instruction and answers even if the passage is not relevant to the information need of the user.
Appendix E demon- strates some failure examples of these models in both scenarios. Further research is required to opti- mally design and prompt models to better identify when to answer and when not to answer. # 6 Discussion and Limitations Below, we highlight several key findings of this paper and discuss some of its limitations. Which Evaluation Metrics are Best? Our analy- sis on correctness (§4) and faithfulness (§5) demon- strates that widely-used metrics are not suitable for evaluating the correctness (due to errors such as elaborate answers, open-ended questions, and list of named-entities) and faithfulness (due to partially grounded responses). Correlating the metrics with human judgements (Table 2 and Table 5) reveals that Recall and GPT4-Eval are the best lexical and model-based metrics for correctness and K- Precision and LLMCritic (GPT-4) are the best lexical and model-based metrics for faithfulness, respectively. However, these model-based metrics, especially the ones based on LLMs, are usually slow to run, expensive, difficult to reproduce, and may exhibit systematic biases. While we propose that Recall and K-Precision are the most widely-accessible and human-aligned metrics for correctness and faithfulness, respec- tively, we emphasize that these simple lexical- based metrics are easy to hack. One model can copy all the retrieved knowledge as the output, leading to high Recall and K-Precision metrics. However, such a model will be penalized heavily irrelevant when evaluated for faithfulness w.r.t. knowledge. Instruction-Following Models According to the most human aligned and easy to use metrics (i.e., Recall and K-Precision), we conclude that GPT- 3.5 outperforms other models on majority of the datasets in correctness w.r.t information need. How- ever, when analyzing the faithfulness w.r.t relevant knowledge, Flan-T5 is shown to be the best model in all three datasets. Moreover, our further analysis on the modelsâ faithfulness w.r.t irrelevant knowl- edge demonstrates that models struggle to correctly identify whether the provided knowledge is rele- vant or not. Limitations It is worth mentioning that the exper- iments for evaluating the faithfulness of the models are conducted in a modified setting, where a rele- vant or irrelevant passage is provided in the prompt on purpose.
This is different from the real-world scenario, where the retrieved passages can contain a mix of relevant and irrelevant knowledge.

Finally, it should also be noted that, beyond qualitative investigation, we did not explore a wide range of prompts for the tasks studied in this work. Recent work has shown that the performance of instruction-following models can vary greatly depending upon the provided prompt (Zhao et al., 2021; Liu et al., 2023b). We leave it to future work to investigate better prompts for instruction-following models in a retrieval-augmented setting.
# 7 Conclusion

We extensively study the capability of instruction-following models to correctly and faithfully respond to questions in three QA settings (natural, multi-hop, and conversational). First, we uncover various issues with using traditional metrics, like F1 score, to evaluate the correctness of models. Through correlation with human judgement, we find that LLM-based metrics (e.g., GPT-4) and token-level Recall are promising metrics for evaluating correctness w.r.t information need. Moreover, our further faithfulness analysis shows that LLM-based metrics like LLMCritic (GPT-4) and the lexical-based K-Precision are more aligned with human judgements in evaluating the faithfulness of the models given the relevant knowledge.

Overall, we find that GPT-3.5 is better at providing correct responses for all tasks, whereas Flan-T5 comes out on top for faithfulness. However, all models struggle to accurately respond with "I don't know"
given an irrelevant passage when explicitly instructed to do so. While Recall and K-Precision are the most hu- man judgement aligned and widely-accessible alter- native metrics, they are easy to hack. Therefore, we encourage the community to come up with more reliable metrics. # References Vaibhav Adlakha, Shehzaad Dhuliawala, Kaheer Sule- man, Harm de Vries, and Siva Reddy. 2022. Topi- ocqa: Open-domain conversational question answer- ing with topic switching. Transactions of the Associ- ation for Computational Linguistics, 10:468â 483. Raviteja Anantha, Svitlana Vakulenko, Zhucheng Tu, Shayne Longpre, Stephen Pulman, and Srinivas Chappidi. 2021.
Open-domain question answering goes conversational via question rewriting. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 520â 534, Online. Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In International Conference on Learning Representations. Satanjeev Banerjee and Alon Lavie. 2005.
METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65â 72, Ann Arbor, Michigan. Association for Computational Linguis- tics. Petr Baudis and Jan Sedivý. 2015. Modeling of the question answering task in the yodaqa system. In Conference and Labs of the Evaluation Forum. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1533â
1544, Seattle, Wash- ington, USA. Association for Computational Linguis- tics. Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. Interpreting Pretrained Contextualized Representa- tions via Reductions to Static Embeddings. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4758â 4781, Online. Association for Computational Lin- guistics.
Samuel R. Bowman. 2023. Eight things to know about large language models. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. Jannis Bulian, Christian Buck, Wojciech Gajewski, Ben- jamin Börschinger, and Tal Schuster. 2022. Tomayto, tomahto. beyond token-level answer equivalence for question answering evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 291â 305, Abu Dhabi, United Arab Emirates. Association for Computa- tional Linguistics. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Association for Computational Linguistics (ACL). Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evalua- tions? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 15607â 15631, Toronto, Canada. Association for Computational Linguistics. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023.
Vicuna: An open- source chatbot impressing gpt-4 with 90%* chatgpt quality. Sabrina Chiesurin, Dimitris Dimakopoulos, Marco An- tonio Sobrevilla Cabezudo, Arash Eshghi, Ioannis Papaioannou, Verena Rieser, and Ioannis Konstas. 2023. The dangers of trusting stochastic parrots: Faithfulness and trust in open-domain conversational question answering. In Findings of the Association for Computational Linguistics: ACL 2023, pages 947â 959, Toronto, Canada. Association for Computational Linguistics. Paul F Christiano, Jan Leike, Tom Brown, Miljan Mar- tic, Shane Legg, and Dario Amodei. 2017. Deep Reinforcement Learning from Human Preferences. In Advances in Neural Information Processing Sys- tems, volume 30. Curran Associates, Inc. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Al- bert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdh- ery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Ja- cob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 845â 855, Melbourne, Australia. Association for Computational Linguistics.
Nouha Dziri, Ehsan Kamalloo, Sivan Milton, Os- mar Zaiane, Mo Yu, Edoardo M. Ponti, and Siva Reddy. 2022a. Faithdial : A Faithful Benchmark for Information-Seeking Dialogue. Transactions of the Association for Computational Linguistics, 10:1473â 1490. Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chan- dra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. 2023.
Faith and Fate: Limits of Transformers on Composi- tionality. ArXiv:2305.18654 [cs]. Nouha Dziri, Andrea Madotto, Osmar Zaïane, and Avishek Joey Bose. 2021. Neural Path Hunter: Re- ducing Hallucination in Dialogue Systems via Path Grounding. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Process- ing, pages 2197â
2214, Online and Punta Cana, Do- minican Republic. Association for Computational Linguistics. Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, and Siva Reddy. 2022b. On the origin of hallucinations in conversational models: Is it the datasets or the models? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 5271â 5285, Seattle, United States. Association for Computational Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text de- generation. In International Conference on Learning Representations.
Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. Q2: Evaluating Factual Consistency in Knowledge- Grounded Dialogues via Question Generation and In Proceedings of the 2021 Question Answering. Conference on Empirical Methods in Natural Lan- guage Processing, pages 7856â 7870, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian Oâ Horo, Gabriel Pereyra, Jeff Wang, Christo- pher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. 2023. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open do- main question answering. In Proceedings of the 16th Conference of the European Chapter of the Associ- ation for Computational Linguistics:
Main Volume, pages 874â 880, Online. Association for Computa- tional Linguistics. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi- Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Atlas: Few-shot learning with retrieval augmented language models. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1601â
1611, Vancouver, Canada. Association for Computational Linguistics. Ehsan Kamalloo, Nouha Dziri, Charles Clarke, and Davood Rafiei. 2023. Evaluating open-domain ques- tion answering in the era of large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5591â 5606, Toronto, Canada. Association for Computational Linguistics. Vladimir Karpukhin, Barlas OË guz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020.
Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019.
Natu- ral questions: A benchmark for question answering research. Transactions of the Association for Compu- tational Linguistics, 7:452â 466. Angeliki Lazaridou, Elena Gribovskaya, Wojciech Internet- Stokowiec, and Nikolai Grigorev. 2022. augmented language models through few-shot prompting for open-domain question answering. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open do- main question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086â
6096, Florence, Italy. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich Küttler, Mike Lewis, Wen-tau Yih, Tim Rock- täschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge- In Advances in Neural Infor- intensive nlp tasks. mation Processing Systems, volume 33, pages 9459â
9474. Curran Associates, Inc. Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74â 81, Barcelona, Spain. Association for Computational Linguistics. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334.
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paran- jape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023a. Lost in the middle: How language models use long contexts. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023b. Pre- train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv., 55(9). Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023c. G-eval: Nlg evaluation using gpt-4 with better human align- ment. Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non- parametric memories. arXiv preprint.
Sabrina J. Mielke, Arthur Szlam, Emily Dinan, and Y.-Lan Boureau. 2022. Reducing conversational agentsâ overconfidence through linguistic calibration. ArXiv:2012.14983 [cs]. Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jenni- maria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Hein- rich Küttler, Linqing Liu, Pasquale Minervini, Pon- tus Stenetorp, Sebastian Riedel, Sohee Yang, Min- joon Seo, Gautier Izacard, Fabio Petroni, Lucas Hos- seini, Nicola De Cao, Edouard Grave, Ikuya Ya- mada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Bar- las Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. 2021. Neurips 2020 efficientqa competition: Systems, anal- In Proceedings of the yses and lessons learned. NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Re- search, pages 86â 111. PMLR. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generaliza- tion via natural language crowdsourcing instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3470â 3487, Dublin, Ireland. Association for Computational Linguistics. OpenAI. 2023. Gpt-4 technical report.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Car- roll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022a. Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Car- roll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022b.
Training language models to follow instructions with human feedback. ArXiv:2203.02155 [cs]. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311â 318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Ashwin Paranjape, Omar Khattab, Christopher Potts, Matei Zaharia, and Christopher D Manning. 2022.
Hindsight: Posterior-guided training of retrievers for improved open-ended generation. In International Conference on Learning Representations. Baolin Peng, Chunyuan Li, Pengcheng He, Michel Gal- ley, and Jianfeng Gao. 2023. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277. Peng Qi, Haejun Lee, Tg Sido, and Christopher Man- ning. 2021.
Answering open-domain questions of varying reasoning steps from text. In Proceedings of the 2021 Conference on Empirical Methods in Natu- ral Language Processing, pages 3599â 3614, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susan- nah Young, Eliza Rutherford, Tom Hennigan, Ja- cob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Mari- beth Rauh, Po-Sen Huang, Amelia Glaese, Jo- hannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Anto- nia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Bud- den, Esme Sutherland, Karen Simonyan, Michela Pa- ganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsim- poukelli, Nikolai Grigorev, Doug Fritz, Thibault Sot- tiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson dâ Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Ko- ray Kavukcuoglu, and Geoffrey Irving. 2022. Scal- ing Language Models:
Methods, Analysis & Insights from Training Gopher. ArXiv:2112.11446 [cs]. Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1â 67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383â 2392, Austin, Texas. Association for Computational Linguistics. Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Michael Collins, Dipanjan Das, Slav Petrov, Gau- rav Singh Tomar, Iulia Turc, and D. Reitter. 2021a.
Measuring attribution in natural language generation models. ArXiv, abs/2112.12870. Hannah Rashkin, David Reitter, Gaurav Singh Tomar, and Dipanjan Das. 2021b. Increasing Faithfulness in Knowledge-Grounded Dialogue with Controllable Features. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 704â 718, Online. Association for Computa- tional Linguistics.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics, 7:249â 266. Devendra Sachan, Mostofa Patwary, Mohammad Shoeybi, Neel Kant, Wei Ping, William L. Hamil- ton, and Bryan Catanzaro. 2021. End-to-end training of neural retrievers for open-domain question answer- ing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 6648â 6662, Online.
Association for Computational Linguistics. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Tr- ishala Neeraj, Jos Rozen, Abheesht Sharma, An- drea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multi- task prompted training enables zero-shot task gener- alization. In International Conference on Learning Representations. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. 2023.
Replug: Retrieval-augmented black-box language models. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3784â 3803. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023.
Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung- Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny So- raker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Ale- jandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Co- hen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera- Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. LaMDA: Language Models for Dialog Applications. ArXiv:2201.08239 [cs]. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a.
Llama: Open and efficient foundation language models. Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, An- thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di- ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar- tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023. Large language models are not fair evaluators. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022a.
Self-instruct: Aligning language model with self generated instructions. Yizhong Wang, Swaroop Mishra, Pegah Alipoormo- labashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Puro- hit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022b. Super-NaturalInstructions: General- ization via declarative instructions on 1600+ NLP In Proceedings of the 2022 Conference on tasks. Empirical Methods in Natural Language Processing, pages 5085â
5109, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallap- ati, and Bing Xiang. 2019. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 5878â
5882, Hong Kong, China. As- sociation for Computational Linguistics. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022. Finetuned language mod- els are zero-shot learners. In International Confer- ence on Learning Representations. Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex open-domain ques- tions with multi-hop dense retrieval. In International Conference on Learning Representations. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018.
HotpotQA: A dataset for diverse, explainable multi-hop question answer- ing. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher De- wan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mi- haylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pre- trained transformer language models. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- In International uating text generation with bert. Conference on Learning Representations.
Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models.

# A Experimental Details

# A.1 Instruction Model Details

To generate text, we use a high temperature to avoid repetitiveness in sampling, but further leverage top-p sampling (Holtzman et al., 2019) to avoid sampling words with very low frequency (which may lead to incoherent text being generated). The values used for all generation parameters are listed below; a minimal decoding sketch with these settings follows the list.

• Top-p: p = 0.95
• Temperature: t = 0.95
• Seed: s = 0
• Min. new tokens: mintoken = 1
• Max. new tokens: maxtoken = 50
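The following is a minimal sketch of decoding with these parameters using the Hugging Face transformers API; the checkpoint name is a placeholder and not one of the models evaluated in this paper.

```python
# Sketch: nucleus-sampling generation with the parameters listed above.
# The checkpoint name is a placeholder; the paper's models differ.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

set_seed(0)  # Seed: s = 0

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Please answer the following question given the following passages: ..."
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.95,          # nucleus (top-p) sampling
    temperature=0.95,    # high temperature to reduce repetitiveness
    min_new_tokens=1,
    max_new_tokens=50,
)
# Decode only the newly generated tokens, dropping the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```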
# A.2 Retriever Details

While the retriever remains constant for each task, the number of retrieved passages provided to instruction-following models and fine-tuned FiD varies. Instruction-following models are constrained by the input context size; hence, they receive fewer passages than fine-tuned FiD. For the conversational QA task, including the conversation history in the prompt further reduces the number of passages that can be incorporated into the input context. Despite the varying context sizes of different instruction-following models, we provide a consistent number of retrieved passages (denoted by K) for each model within a specific task to maintain a fair comparison. The details are as follows (a prompt-assembly sketch using these values is given after the list):

• open-domain QA (NQ): K = 8
• multi-hop QA (HotpotQA): K = 8
• conversational QA (TopiOCQA): K = 4
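As an illustration of how a fixed K interacts with the prompt, here is a minimal sketch of assembling a retrieval-augmented prompt from the top-K passages. The instruction wording mirrors the prompt shown in Appendix B, but the function and variable names are ours, not the paper's.

```python
# Sketch: building a retrieval-augmented prompt from the top-K retrieved passages.
# K per task follows the values listed above; names are illustrative.
K_PER_TASK = {"nq": 8, "hotpotqa": 8, "topiocqa": 4}

def build_prompt(question, passages, task, history=None):
    k = K_PER_TASK[task]
    context = "\n\n".join(passages[:k])  # truncate to the task-specific K
    parts = ["Please answer the following question given the following passages:"]
    if history:  # conversational QA also includes the dialogue history
        parts.append("Conversation history:\n" + "\n".join(history))
    parts.append(context)
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

print(build_prompt("what are dragonfly flight speeds?",
                   ["Dragonfly (Flight speed) - ...", "Dragonfly (Ecology) - ..."],
                   task="topiocqa",
                   history=["Q: what is a dragonfly?", "A: An insect ..."]))
```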
The details are as follows:

• open-domain QA (NQ): K = 8
• multi-hop QA (HotpotQA): K = 8
• conversational QA (TopiOCQA): K = 4

Unlike instruction-following models, FiD is not restricted by the input context size. We use the default settings for each dataset: 100 passages for NQ, 50 passages for TopiOCQA, and up to 18 passages for HotpotQA. For HotpotQA, the top 100 reasoning chains produced by the retriever are de-duplicated to generate the final passage set.

# B Prompts Details

In Section 4.2, we introduce LLM-based evaluations to assess the correctness of a model response w.r.t. the user's information need. To accomplish this, we use the prompt template shown in Figure 7 and map "yes" to 1 and "no" to 0.
Similarly, Section 5.1 introduces the LLMCritic evaluation method for calculating the faithfulness of the models w.r.t. relevant knowledge. To run this evaluation, we used the prompt shown in Figure 8. Furthermore, we conducted other experiments to study the answer abstinence of the models in Section 5.3. The template used in these experiments is shown in Figure 9.

System prompt: You are CompareGPT, a machine to verify the correctness of predictions. Answer with only yes/no.

You are given a question, the corresponding ground-truth answer and a prediction from a model. Compare the "Ground-truth answer" and the "Prediction" to determine whether the prediction correctly answers the question. All information in the ground-truth answer must be present in the prediction, including numbers and dates. You must answer "no" if there are any specific details in the ground-truth answer that are not mentioned in the prediction. There should be no contradicting statements in the prediction. The prediction may contain extra information. If the prediction states something as a possibility, treat it as a definitive answer.

Question: {Question}
Ground-truth answer: {Reference answer}
Prediction: {Model response}

CompareGPT response:

Figure 7: The prompt template used for correctness evaluation.

System prompt: You are CompareGPT, a machine to verify the groundedness of predictions. Answer with only yes/no.

You are given a question, the corresponding evidence and a prediction from a model. Compare the "Prediction" and the "Evidence" to determine whether all the information of the prediction is present in the evidence or can be inferred from the evidence. You must answer "no" if there are any specific details in the prediction that are not mentioned in the evidence or cannot be inferred from the evidence.

Question: {Question}
Prediction: {Model response}
Evidence: {Reference passage}

CompareGPT response:

Figure 8: The prompt template used for faithfulness evaluation.
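As an illustration, the sketch below shows how the Figure 7 template could be filled in and sent to an LLM judge, with the yes/no verdict mapped to 1/0. The openai client call, the "gpt-4" judge name, and the gpt4_eval helper are assumptions for illustration, not the exact evaluation harness used in this work.

```python
# Hedged sketch of an LLM-based correctness judge using the Figure 7 template.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are CompareGPT, a machine to verify the correctness of predictions. "
    "Answer with only yes/no."
)

USER_TEMPLATE = (
    "You are given a question, the corresponding ground-truth answer and a "
    "prediction from a model. Compare the \"Ground-truth answer\" and the "
    "\"Prediction\" to determine whether the prediction correctly answers the "
    "question. All information in the ground-truth answer must be present in the "
    "prediction, including numbers and dates. You must answer \"no\" if there are "
    "any specific details in the ground-truth answer that are not mentioned in the "
    "prediction. There should be no contradicting statements in the prediction. "
    "The prediction may contain extra information. If the prediction states "
    "something as a possibility, treat it as a definitive answer.\n\n"
    "Question: {question}\n"
    "Ground-truth answer: {reference}\n"
    "Prediction: {prediction}\n\n"
    "CompareGPT response:"
)

def gpt4_eval(question: str, reference: str, prediction: str) -> int:
    """Return 1 if the judge answers 'yes', otherwise 0."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",   # assumed judge model
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": USER_TEMPLATE.format(
                question=question, reference=reference, prediction=prediction)},
        ],
    )
    verdict = response.choices[0].message.content.strip().lower()
    return 1 if verdict.startswith("yes") else 0
```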
| Category | Subcategory | Count | Percentage |
| --- | --- | --- | --- |
| Enumeration of Reference Answers | Enumeration of Reference Answers | 21 | 7.17 |
| Granularity Discrepancies | Temporal granularity discrepancy | 4 | 1.37 |
| Granularity Discrepancies | Spatial granularity discrepancy | 10 | 3.41 |
| Incomplete Reference Answers | List of Named Entities | 13 | 4.44 |
| Incomplete Reference Answers | Open-ended Questions | 41 | 13.99 |
| Incorrect Gold Answers | Incorrect Gold Answers | 4 | 1.37 |
| Intrinsic Ambiguity in Questions | Ambiguous Questions | 12 | 4.10 |
| Semantic Equivalence | Multinominal Entities | 1 | 0.34 |
| Semantic Equivalence | Synonymous Answers | 8 | 2.73 |
| Semantic Equivalence | More Elaborate Answers | 163 | 55.63 |
| Sufficient Subset | Sufficient subset | 10 | 3.41 |
| Symbolic Equivalence | Symbolic Equivalence | 6 | 2.05 |

Table 7: Percentage share and exact counts of F1 failure cases by sub-category. See Section 4.3 for more details.

Please answer the following question given the following passages. If the answer is not in the passages or cannot be inferred from the passages, respond as "I don't know".
- title: {Passage title}
{Passage text}

- title: {Passage title}
{Passage text}

...

Question: {Question}
Answer:

Figure 9: The prompt template used for faithfulness w.r.t. irrelevant knowledge.

# C Failure Cases of Metrics

Lexical-based metrics. Figure 4 presents an overview of the F1 metric failures; the exact percentages and counts can be found in Table 7.

GPT4-Eval. To better understand how GPT4-Eval fails compared to F1, we took the subset of annotated failure cases (described in Section 4.3) where GPT4-Eval also predicts 0; in total, we found 70 instances out of the overall 296 samples. Figure 10 shows the distribution of failure subcategories for the GPT4-Eval subset. We observe that a higher proportion of failures are caused by open-ended questions, whereas more elaborate answers and enumeration of reference answers are penalized less by GPT4-Eval than in the remaining failures shown in Table 7. Moreover, all other subcategories now have a higher proportion due to the gap left by more elaborate answers and enumeration of reference answers. To illustrate these findings, we include a few samples in Figure 11.

# D Human Evaluation

Section 4 and Section 5 describe the human evaluation procedures for both the correctness of the responses w.r.t. the information need and the faithfulness of the models w.r.t. relevant knowledge. Table 8 reports the quantitative results on the 100 samples picked for human evaluation using all studied correctness metrics. Similarly, the faithfulness results on the 50 samples are presented in Table 9.

# E Failure Cases of Models in Faithfulness w.r.t. Irrelevant Knowledge

Results illustrated in Table 6 show that models sometimes perform differently given relevant or irrelevant knowledge. Figure 12 shows failure examples of the studied models on all three QA datasets. It can be observed that, given an irrelevant passage, models (especially Alpaca) do not refrain from answering. Moreover, the failure examples presented in Figure 13 show that GPT-3.5 has difficulty generating responses even when the correct information is available in the provided knowledge.
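As a rough illustration of how answer abstinence could be tallied in such cases, the sketch below counts "I don't know"-style responses; the matching rule and function name are assumptions for illustration, not the exact procedure used in this work.

```python
# Hypothetical helper: fraction of responses that abstain ("I don't know")
# when models are shown irrelevant passages. The matching rule is an assumption.
from typing import List


def abstention_rate(responses: List[str]) -> float:
    """Return the fraction of responses that explicitly abstain from answering."""
    abstain_markers = ("i don't know", "i do not know")
    abstained = sum(
        any(marker in response.lower() for marker in abstain_markers)
        for response in responses
    )
    return abstained / len(responses) if responses else 0.0


# Toy usage: two of the three responses abstain, so the rate is about 0.67.
toy_responses = [
    "I don't know.",
    "The capital of France is Paris.",  # answers despite irrelevant evidence
    "I do not know based on the given passages.",
]
print(f"Abstention rate: {abstention_rate(toy_responses):.2f}")
```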
[Figure 10 (bar chart): distribution of GPT4-Eval failure cases per subcategory, grouped by category; Open-ended Questions account for 37.1% of cases, List of Named Entities for 14.3%, and Multinominal Entities for 1.4%.]
EM F1 Recall Recall (S) METEOR Rouge-L BertS (F1) BEM GPT4-Eval Dataset Model NQ FiD GPT-3.5 Flan-T5 Alpaca-7B 66.00 1.0 65.00 11.0 70.97 21.21 72.19 26.51 72.83 87.10 77.73 59.87 72.0 83.00 75.00 51.0 58.76 38.45 58.56 30.07 70.33 19.77 71.05 26.44 94.88 84.62 94.74 85.52 75.18 91.74 80.36 67.82 72.0 89.00 81.00 64.0 HotpotQA FiD GPT-3.5 Flan-T5 Alpaca-7B 55.00 8.0 65.00 21.0 68.71 27.25 83.58 41.95 68.73 78.83 84.67 69.0 63.0 77.00 76.00 62.0 52.61 39.91 62.62 42.22 68.52 26.25 83.31 41.89 94.53 85.52 96.01 88.43 74.78 89.19 87.72 78.04 70.0 81.00 86.00 68.0 TopiOCQA FiD GPT-3.5 Flan-T5 Alpaca-7B 37.00 4.0 29.00 7.0 61.45 44.85 52.88 32.86 63.55 79.41 63.0 44.24 44.00 46.00 44.0 22.0 54.91 57.18 50.32 36.18 60.55 41.94 52.01 31.98 92.58 89.93 90.90 87.26 69.36 82.59 67.39 52.42 57.0 84.00 60.00 41.0 Human Eval 82.0 93.00 89.00 74.0 77.0 82.00 94.00 77.0 75.0 88.00 77.00 52.0