FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
| Methods | Response Length | Generated by | Claim Granularity | Claim Provided | Evidence Provided | Domain | Task |
|---|---|---|---|---|---|---|---|
| FEVER-based | 7.30 | Human | Fact | ✓ | ✗ | Wikipedia | Fact Verification |
| FactCC | 20.83 | Synthetic | Sentence | ✓ | ✓ | Newswire | Summ. Factuality |
| QAGS-based | 16.11 | Model | Summary | ✓ | ✓ | Newswire | Summ. Factuality |
| WICE-based | 24.20 | Human | Fact | ✓ | ✓ | Wikipedia | Entailment |
| RARR | - | PaLM/LaMDA | Fact | ✗ | ✗ | Wikipedia | QA |
| FACTOOL | 41.80 | ChatGPT | Fact | ✗ | ✗ | Wikipedia | QA |
|  | 30.37 | ChatGPT | Snippet | ✗ | ✗ | Python | Code generation |
|  | 67.13 | ChatGPT | Statement | ✗ | ✗ | Math | Math Problems |
|  | 76.34 | ChatGPT | Tuple | ✗ | ✗ | Sci. text | Sci. Review |

Table 1: A comparison of published approaches for factuality detection in terms of generated responses and claims to be verified based on collected evidence. "Scenario" represents the task and domain on which the corresponding approach has been justified. "Sci." represents "Scientific".

to have a more comprehensive factuality detection and verification framework that is similarly versatile. Additionally, in the current literature, the task of factuality detection is usually simplified as either (i) given a claim, determining whether it is factually correct, or (ii) given evidence, determining whether the generated claim is supported. This task definition is not well suited to the writing tasks that users commonly engage in when interacting with generative models (e.g., ChatGPT), where we often need to validate the factuality of a long-form generation without explicit claims and evidence.

• We connect the concept of "tool use" with "factuality detection", developing a unified and versatile framework for factuality detection across a variety of domains and tasks.

• We use FACTOOL to evaluate the factuality of modern chatbots, and find that GPT-4 has the best factuality across almost all scenarios.
Supervisedly fine-tuned chatbots (Vicuna-13B) have reasonably good factuality in KB-based QA but perform poorly in more challenging scenarios, including code generation, math problem solving, and scientific literature review writing.

In this paper, we propose a task- and domain-agnostic framework, FACTOOL, which aims to detect factual errors in LLM-generated texts. We illustrate our framework in Fig. 1, where we connect the concept of "tool use" (Thoppilan et al., 2022; Gao et al., 2022b; Schick et al., 2023) with "factuality detection" and demonstrate that the ability to use tools in LLMs is crucial for factuality detection. Specifically, FACTOOL leverages various tools, including Google Search, Google Scholar, code interpreters, Python, or even LLMs themselves, to gather evidence about the factuality of the generated content. Moreover, our framework employs the reasoning abilities of LLMs to assess the factuality of the content, given the evidence that has been gathered. We develop a benchmark and perform experiments across four tasks: knowledge-based QA, code generation, math problem solving, and scientific literature review writing. In summary, our contributions are:

# 2 Related Work
Factuality Detection in Natural Language Processing Factuality detection was a topic of rigorous study even before the advent of generative AI. Existing works can be organized by their differences in terms of the "response" to be verified, the "claim" extracted from the response, and the supporting "evidence". As illustrated in Tab. 1, the creation of the FEVER dataset (Thorne et al., 2018a) spawned models (Zhong et al., 2020; Krishna et al., 2022) that determine whether a given fine-grained claim made based on Wikipedia1 articles is correct. In this task setting, both the claim and related evidence are given. FactCC (Kryscinski et al., 2020) and QAGS-based models (Wang et al., 2020) adopted a different task formulation to detect factual consistency: given the evidence text, the goal is to determine whether the generated summaries or summary sentences are factually consistent with the given text. WICE-based methods (Kamoi et al., 2023) decide whether a fact from a Wikipedia sentence could be supported
• We revisit the task of factuality detection and extend it in a way that allows for a better audit of current generative AI models.

1 https://www.wikipedia.org/

by provided evidence. RARR (Gao et al., 2022a) proposed a new approach by directly prompting LLMs to generate queries, retrieve evidence, and determine factuality. Existing works typically rely on either a given claim or given evidence and target a specific use case. However, in this paper, we introduce a more challenging yet practical task setting, i.e., factuality detection without explicit claims or evidence, and propose a framework capable of addressing this challenge in a variety of scenarios.

Tool Use in Large Pretrained Language Models Language models store limited knowledge within their parameters. To overcome this limitation, various tools have been introduced to assist language models in order to further expand their capabilities. For example, Press et al. (2022) and Komeili et al. (2022) gathered information from the Internet to enhance question answering and dialog systems, respectively. Schick et al. (2023) trained a model capable of interacting with five tools, including a calculator, a translation system, etc. Recently, Shen et al. (2023) introduced a framework that employs LLMs to connect various AI models from the machine learning community to tackle AI tasks. Furthermore, Liang et al. (2023) proposed a new AI ecosystem that connects LLMs with millions of existing APIs to accomplish tasks. In this work, we explore tool use in LLMs for the task of factuality detection.

# 3 Revisiting Factuality in Generative AI

# 3.1 Definition

Versatile Factuality In most previous works, factuality has been defined as whether a claim in a text can be supported by evidence from a separate, trustworthy knowledge base, with applications in fact-checking (Thorne et al., 2018b) (where the knowledge base is a large source like Wikipedia) and summarization (Kryscinski et al., 2020) (where the knowledge base is an input document or documents). In this paper, we extend this definition to whether the claims made in generated signals (which could be text, code, mathematical expressions, and so on) can be supported by evidence under specific rules. Specifically, these rules can range from consistency with a knowledge base derived from Wikipedia, to a verification rule specified within a Python library, or an operational rule derived from mathematics. By adopting this broader definition, we are able to establish a unified framework for addressing factuality issues in generative AI beyond just the textual domain.

Fine-grained Factuality One can usually detect the factuality of a given generated signal (e.g., text) at different levels of granularity, such as sentences and documents. A more granular assessment can be particularly valuable because it (1) allows users to pinpoint where inaccuracies occur (Liu et al., 2021) and (2) serves as a reward model for developers to refine their generative systems (Lightman et al., 2023). However, implementing fine-grained factuality detection is challenging for two reasons: (1) specifying the desired granularity level without ambiguity, and (2) extracting claims in line with the predetermined granularity level. In this paper, we argue that by utilizing the powerful instruction-following ability and the natural language interface of LLMs, we can effectively address the challenge of defining and extracting fine-grained claims through claim definition-based few-shot prompting. More details can be found in §4.1.
Structurally speaking, given a prompt (e.g., a query or instruction) and the corresponding model-generated response, the fine-grained factuality detection task involves the following concepts:

Prompt (p) A query or instruction that users provide to the generative model.

Response (r) A piece of text (usually long-form) generated by the generative model.

Claim (c) A statement inferred from the model response, whose granularity is defined by a natural language text.

Evidence (e) The available information (e.g., a knowledge base, pre-defined rules) that supports or demonstrates the truth or validity of a claim.
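These four concepts can be sketched as simple data structures. This is an illustrative Python sketch only; the class and field names are our own, not from the paper:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Claim:
    text: str                               # c: statement inferred from the response
    evidence: list = field(default_factory=list)   # e: collected evidence snippets
    label: Optional[bool] = None            # binary factuality label, set after verification

@dataclass
class FactualityExample:
    prompt: str                             # p: user query or instruction
    response: str                           # r: model-generated long-form text
    claims: list = field(default_factory=list)     # c_1..c_n extracted from r

example = FactualityExample(
    prompt="Who is the CEO of Twitter?",
    response="The CEO of Twitter is Jack Dorsey. He co-founded Twitter in 2006.",
)
example.claims.append(Claim(text="Jack Dorsey is the CEO of Twitter."))
```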
# 3.2 Instantiations in Different Scenarios

Using the above task definition, we can define factuality in different application scenarios (see also Tab. 2).

Knowledge-based QA Knowledge-based (KB) QA (Chen et al., 2017) aims to answer questions using a given knowledge base or open-domain data source (e.g., Wikipedia). In this task, we define factuality as how well each claim in the generated answer is supported by world knowledge. In this paper, we consider a more challenging scenario: open-domain QA that requires long-form answers, rather than short ones.

| Task | Prompt (p) | Response (r) | Claim (c) | Evidence (e) |
|---|---|---|---|---|
| KB-based QA | Question | Long-form answer | Atomic component unit | Web searched results |
| Code | Query | Executable code | Code snippet | Python library |
| Math | Math problems | Math solution | Math calculation | Calculator |
| Sci. Lit Review | Scientific question | Long-form review | Tuple (paper title, year, authors) | Google scholar |

Table 2: Factuality definition in different tasks. "Sci. Lit Review" represents scientific literature review.

Code Generation The code generation task (Yin and Neubig, 2017) involves generating executable code based on a given query. We define factuality in code generation as how well the generated code, as a whole, can be executed correctly within a specific programming language (e.g., Python) and fulfills the provided requirements. This definition is grounded in an execution-based approach to code evaluation, which measures the correctness of generated code by executing it against some test case inputs and comparing its output to the expected output.

the claim. On the other hand, the ability of LLMs to utilize multiple tools paves the way for multi-tool-augmented factuality detection. For example, by directly using ChatGPT plugins,3 we can integrate multiple tools into a chatbot. The framework is illustrated in Fig. 1 and consists of five main components: claim extraction, query generation, tool querying, evidence collection, and agreement verification. We elaborate on each component below.
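The five components above can be wired together as a simple pipeline skeleton. The function names and stub bodies below are illustrative placeholders, not the paper's implementation (in FacTool, extraction and verification are LLM prompts and tool querying hits real APIs):

```python
def extract_claims(response):
    # Claim extraction: an LLM prompt in FacTool; stubbed as sentence splitting.
    return [s.strip() for s in response.split(".") if s.strip()]

def generate_queries(claim):
    # Query generation: turn a claim into queries for external tools.
    return [f"Is it true that {claim}?"]

def query_tool(query):
    # Tool querying + evidence collection: search engine, interpreter, etc. (stubbed).
    return [f"evidence snippet for: {query}"]

def verify(claim, evidence):
    # Agreement verification: an LLM reasons over the evidence; stubbed as True.
    return True

def factool_pipeline(response):
    # Run every claim through query generation, evidence collection, verification.
    results = {}
    for claim in extract_claims(response):
        evidence = [e for q in generate_queries(claim) for e in query_tool(q)]
        results[claim] = verify(claim, evidence)
    return results

results = factool_pipeline(
    "The CEO of Twitter is Jack Dorsey. He co-founded Twitter in 2006."
)
```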
# 4.1 Claim Extraction

Math Problem Solving The math problem solving task involves the use of automated methods to address mathematical problems (Cobbe et al., 2021). At the claim level, factuality in math problem solving is defined as the extent to which the generated statements adhere to the calculation rules. At the response level, factuality is defined as how effectively the overall mathematical solution addresses the given problem.

Extracting claims from responses under various task settings is challenging due to the inconsistent definitions of claims across tasks and scenarios. This inconsistency hinders the development of applications such as text summarization evaluation and factuality detection. To tackle this, we propose an approach that treats claim extraction as a process guided by LLM prompts based on the specific definition of claims. This approach offers the following advantages:

Scientific Literature Review Writing The scientific literature review writing task (Jha et al., 2015) aims to analyze and synthesize existing research on a specific topic in a field of study. In this task, we define factuality as whether the generated scientific literature review correctly cites existing scientific literature, including the correct mention of authors and publication years.2

(i) Leveraging the strong instruction-following capabilities of LLMs can significantly reduce the costs associated with data annotation and model training for claim extraction.

(ii) When developing a system or constructing a dataset for an application that relies on the definition of claims, one simply needs to provide a textual definition of the claim to a large model. This enables future researchers to effectively utilize these definitions as a foundation in their work.

# 4 Approach

We propose a tool-augmented framework for detecting factual errors that can apply a unified approach across various tasks. The motivation for using tools is twofold. On one hand, each tool embodies domain expertise, assisting us in the effective gathering of evidence that verifies the correctness of

2 In this paper, our focus lies in examining the consistency of the relationship between the paper title, authors, and publication year. However, the task of determining the suitability of the cited paper as the most appropriate choice is left for future investigation.

(iii) Our experiments demonstrate that the claim extraction module, implemented by ChatGPT, exhibits strong performance in extracting claims (atomic component units). The detailed results of these experiments are discussed in Section 6.1. Here, we employ ChatGPT as a base LLM and apply different textual definitions of claims across the four tasks.
Our goal is to extract all verifiable claims within the generated text x, denoted as {c_i}_{i=1···n}. Detailed prompting instructions can be found in Appendix A.

3 https://openai.com/blog/chatgpt-plugins

[Figure 2: Our proposed framework for factuality detection in four domains: knowledge-based QA, code generation, math problem solving, and scientific literature review writing.]

KB-based QA The claim is defined using the concept of atomic content units (ACUs) (Liu et al., 2022). Each ACU corresponds to a single atomic fact within a generated answer. In practice, we leverage ChatGPT4 (specifically, the "gpt-3.5-turbo" version) to extract claims based on two criteria: (i) each claim should not exceed 15 words, and (ii) it should clearly describe a fact. We also include two in-context examples from the RoSE dataset (Liu et al., 2022) in our prompt to obtain more fine-grained claims. Additionally, we ask ChatGPT to resolve any coreferences or ambiguity, such as unclear pronouns and other related expressions, within the claims.
Scientific Literature Review Each claim within the generated review is defined as a tuple of "(paper title, year, authors)" extracted from the generated review. We then prompt ChatGPT to extract all such tuples within the generated review.

# 4.2 Query Generation

For each claim ci, we convert it into a list of queries {q_ij}_{j=1···m} that can be used to query external tools such as search engines, the Python interpreter, or Google Scholar.

KB-based QA We prompt ChatGPT or GPT-4 to generate two search engine queries from each claim ci. These queries are intended to help humans verify the factuality of ci. Detailed prompting instructions can be found in Appendix A.

Code Generation We consider each generated code snippet within the response as a single claim to be verified. We extract all such code snippets that are enclosed in brackets, in other words, within a code block.

Code Generation For each claim ci, we generate two different types of queries: simulated test case inputs, denoted as {qt_ij}_{j=1···m}, and potential solutions, denoted as {qs_ij}_{j=1···m}. Both types of queries are generated by ChatGPT or GPT-4. The simulated test case inputs are function calls generated for a given code snippet, while potential solutions are repeatedly generated solutions that ChatGPT generates in response to the user prompt
2307.13528#20
2307.13528#22
2307.13528
[ "2110.14168" ]
2307.13528#22
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
Math Problems We define each claim in a step-by-step math solution as an arithmetic operation performed between known real numbers. Each of these operations contains two parts: the calculation and the calculated answer. We prompt ChatGPT to extract all such claims.

4 We have also explored other entailment-based models with BERT, and the result is no better than ChatGPT.

ing ti as the input and collect the execution result (output) for each (ti, sj) pair. The input-output pairs are used as test cases for verifying the chatbot-generated unverified solution. The process is shown in Fig. 3.

[Figure 3: Unit test library generation for detecting factual errors in code generation.]

Math Problems We collect the execution results for code snippets derived from the mathematical operations. As illustrated in Fig. 2, math claims like "30 / 3 = 10" are extracted and then converted into executable Python code, for instance, "print(round(30/3, 7) == 10)".

Scientific Literature Review We use the title of each paper, extracted from the text, as the query to access relevant information through the Google Scholar API provided by the Scholarly6 Python package. This allows us to retrieve key information about each paper, including the paper title, author list, and publication year.
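The conversion of arithmetic claims into executable checks can be mimicked with a small helper. The function names here are our own illustration; the rounding to 7 decimal places follows the "print(round(30/3, 7) == 10)" example in the text:

```python
def claim_to_check(expression: str, claimed: float) -> str:
    # Convert an arithmetic claim like "30 / 3 = 10" into an executable
    # Python snippet, mirroring the paper's example output format.
    return f"print(round({expression}, 7) == {claimed})"

def check_claim(expression: str, claimed: float) -> bool:
    # Evaluate the calculation side and compare it with the claimed answer.
    # eval() is applied only to extracted arithmetic expressions here.
    return round(eval(expression), 7) == claimed

print(claim_to_check("30 / 3", 10))
print(check_claim("30 / 3", 10))    # True: the calculation matches the answer
print(check_claim("45 - 15", 31))   # False: 45 - 15 is 30, not 31
```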
p. In our later experiments, we generate 3 simulated test case inputs and 3 potential solutions. Detailed prompting instructions can be found in Appendix A.

Math Problems We prompt ChatGPT or GPT-4 to convert all mathematical operations into executable Python code snippets. These snippets are designed to return "True" when the calculation matches the calculated answer and "False" when it doesn't. Detailed prompting instructions can be found in Appendix A.

# 4.4 Agreement Verification

In the final step, each claim ci receives a binary factuality label, Li ∈ {TRUE, FALSE}, based on the level of support it receives from the collected evidence, {e_ik}_{k=1···li}. This labeling process is performed for every individual claim.

Scientific Literature Review We use the paper title, found within the extracted claim tuple, as the query for Google Scholar. Our assumption here is that if a paper exists, it should appear as the first search result on Google Scholar when we use the paper title as the query.

KB-based QA We prompt ChatGPT or GPT-4 to judge the factuality of the claim given the retrieved list of evidence snippets. We follow a zero-shot Chain-of-Thought (Wei et al., 2023) reasoning process: initially, the model attempts to reason about whether the claim is factual or not.
2307.13528#23
2307.13528#25
2307.13528
[ "2110.14168" ]
2307.13528#25
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
If an error is identified, we then ask it to explain and attempt to rectify the mistake.

# 4.3 Tool Querying & Evidence Collection

We then use the queries to query various tools to collect relevant evidence statements {e_ik}_{k=1···li}.

Code Generation We conduct a majority vote for each test case across all solutions, establishing what we refer to as the "pseudo-golden output" for that particular test case. We repeat this process for every test case. Following this, we compare the execution results of the solution that's under verification against all the test cases with the pseudo-golden output. If the results match, we classify the solution under verification as true. Otherwise, it is deemed false.

KB-based QA The external tool we use to help verify the factuality of the generated text is the Google Search API, which queries the internet for knowledge using the queries generated from the claims extracted from the LLM-generated text. We use the Google Search API provided by Serper5 to search the top pages and retrieve the most relevant search snippets included in the API's response. We then parse the response to obtain different types of snippets, such as answer boxes, knowledge graphs, and organic search results.

Math Problems We compile the results of each code snippet execution. If any snippet returns "False", we classify the associated generated text x as false. Conversely, if all snippets yield "True",

Code Generation For each test case input ti and generated potential solution sj, we execute sj us-

6 https://github.com/scholarly-python-package/scholarly
5 https://serper.dev/
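The majority-vote procedure over test cases can be sketched as follows. This is illustrative: FacTool executes LLM-generated code strings, which are represented here as plain Python functions, and the helper names are our own:

```python
from collections import Counter

def pseudo_golden_outputs(test_inputs, solutions):
    # For each test input, run every sampled solution and take the majority
    # output as the "pseudo-golden output" for that test case.
    golden = []
    for t in test_inputs:
        outputs = [s(t) for s in solutions]
        golden.append(Counter(outputs).most_common(1)[0][0])
    return golden

def verify_solution(solution, test_inputs, golden):
    # The solution under verification is classified true iff its output
    # matches the pseudo-golden output on every test case.
    return all(solution(t) == g for t, g in zip(test_inputs, golden))

# Three sampled "potential solutions" for string_sequence; one has a bug.
sols = [
    lambda n: " ".join(str(i) for i in range(n + 1)),
    lambda n: " ".join(str(i) for i in range(n + 1)),
    lambda n: " ".join(str(i) for i in range(n)),   # off-by-one bug, outvoted
]
golden = pseudo_golden_outputs([4, 0, 3], sols)
print(verify_solution(sols[0], [4, 0, 3], golden))  # True
print(verify_solution(sols[2], [4, 0, 3], golden))  # False
```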
we classify the corresponding generated text x as true.

Scientific Literature Review We compare the extracted claim "(paper title, year, authors)" to the evidence "(paper title, year, authors)" retrieved from the Google Scholar API. For the paper title and year of publication, we conduct an exact, case-insensitive string match. As for the authors' match, we prompt ChatGPT or GPT-4 to judge whether the author list in the extracted claim is a subset of the retrieved author list. All the information must match in order for the claim to be classified as "True"; otherwise it is "False".
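The matching rules above can be sketched as a single predicate. Note one substitution: FacTool judges the author-subset condition with an LLM, while this sketch uses a plain set comparison as a stand-in:

```python
def match_citation(claim, evidence):
    """Compare an extracted (title, year, authors) claim tuple against
    Google Scholar evidence: exact case-insensitive match for title and
    year, and claim authors must be a subset of the retrieved authors
    (LLM-judged in FacTool; set-based here)."""
    title_ok = claim["title"].strip().lower() == evidence["title"].strip().lower()
    year_ok = str(claim["year"]) == str(evidence["year"])
    claim_authors = {a.strip().lower() for a in claim["authors"]}
    retrieved_authors = {a.strip().lower() for a in evidence["authors"]}
    authors_ok = claim_authors <= retrieved_authors
    return title_ok and year_ok and authors_ok  # all must match for "True"

claim = {"title": "Quantum Computing in the NISQ era and beyond",
         "year": 2018, "authors": ["John Preskill"]}
evidence = {"title": "quantum computing in the NISQ era and beyond",
            "year": "2018", "authors": ["John Preskill"]}
print(match_citation(claim, evidence))  # True
```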
# 5 Dataset Construction

# 5.1 Prompt and Response Collection

KB-based QA For KB-based QA, we evaluate our framework using RoSE (Liu et al., 2022) and FactPrompts. RoSE is a text summarization dataset that provides fine-grained ACUs for each reference summary. FactPrompts is a dataset that comprises real-world prompts sourced from various platforms and datasets, such as Quora and TruthfulQA (Lin et al., 2022), along with corresponding responses generated by ChatGPT. We construct the dataset using 100 reference summaries from RoSE and 50 responses from FactPrompts for our evaluation.

Code Generation For code generation, we evaluate our framework using HumanEval (Chen et al., 2021). HumanEval is a programming problem dataset that contains several unit tests for each problem. We use ChatGPT to generate responses based on the processed prompts of HumanEval provided in (Chen et al., 2022), which solely contain the instruction of the prompt without input-output demonstrations.

Math Problems For math problems, we evaluate our framework using GSM-Hard (Gao et al., 2022b). GSM-Hard is a dataset constructed from GSM8K (Cobbe et al., 2021) by replacing the numbers in the questions of GSM8K with larger numbers. We sampled 100 prompts from GSM-Hard that have a positive target solution value.7 Then,

7 GSM8K involves many application questions, including calculations involving money, measurements of quantities, etc. We found that GSM-Hard examples with negative values often contained illogical situations, such as "negative 5 apples".
A positive target solution value helps prevent ChatGPT from making extra assumptions on top of the description in the problem.

We generate responses for these prompts using ChatGPT.

Scientific Literature Review For the scientific literature review, we follow self-instruct (Wang et al., 2023) to create 100 diverse prompts spanning computer science, business, law, medicine, and physics. Each prompt asks for a technical or research-oriented response that includes at least one relevant literature citation. Then, we generate responses for these prompts using ChatGPT.

# 5.2 Claim Collection

For responses from FactPrompts and GSM-Hard, we follow the idea of "claim extraction as prompting" described in §4.1. This approach allows us to reuse the claim prompts listed in Appendix A. We use ChatGPT as the model for claim extraction due to its cost efficiency and effectiveness in extracting fine-grained claims. In terms of HumanEval responses, given that the generated response to a HumanEval prompt is already in the form of a code snippet, we consider the "claim" of the response to be identical to the response itself.

# 5.3 Claim and Response Annotation

KB-based QA & Scientific Literature Review For claim annotation, the authors collectively annotate the extracted claims as either factual or non-factual. For response annotation, if one claim within the response is labeled as non-factual, then the response as a whole is considered non-factual; otherwise, the response is considered factual.

Code Generation We consider the claim label to be identical to the response label, since the "claim" of the response is the same as the response itself. For response annotation, we annotate ChatGPT's responses using the execution code provided in (Chen et al., 2022) against the HumanEval test cases. This allows us to distinguish between factual responses (those passing all tests) and non-factual responses.

Math Problems For claim annotation, the authors collectively annotate the extracted claims as either factual or non-factual. For response annotation, we utilize the target value provided in GSM-Hard (Gao et al., 2022b) to annotate the generated responses.
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
| Task | Dataset | Responses | Claims |
|---|---|---|---|
| KB-QA | RoSE | 100 | 527 |
| KB-QA | FactPrompts | 50 (23:27) | 233 (177:56) |
| Code | HumanEval | 164 (109:55) | 164 (109:55) |
| Math | GSM-Hard | 100 (47:53) | 284 (246:38) |
| Sci.Lit | FactPrompts | 100 (10:90) | 186 (33:153) |

Table 3: Detailed statistics of the datasets used in this work. Note that (p:n) denotes p = count of positive responses or claims and n = count of negative responses or claims. "Sci.Lit" represents scientific literature review.

# 6 Experiments

We evaluate FACTOOL against two baselines that use LLMs to check their own outputs: Self-Check with 3-shot CoT and Self-Check with zero-shot CoT, which have been shown to be effective on various tasks including dialogue response, math reasoning, and code generation (Madaan et al., 2023; Chen et al., 2023). Both baselines test the ability of an LLM to identify its own errors without the use of any external tool. In practice, we prompt ChatGPT (gpt-3.5-turbo-0301) and GPT-4 (gpt-4-0314)^8 to recognize, explain, and attempt to rectify their own errors. Following this reasoning process, the models make final judgments on the factuality of the given claim. The key difference between Self-Check (zero-shot CoT) and Self-Check (3-shot CoT) is that Self-Check (3-shot CoT) provides three demonstrations to the models, while Self-Check (zero-shot CoT) provides none.
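The Self-Check loop described above can be sketched as follows. `call_llm` is a stand-in for a ChatGPT/GPT-4 API call, and the prompt wording is an illustrative assumption, not the paper's exact prompt.

```python
# Hedged sketch of the Self-Check baselines: ask the model to recognize,
# explain, and attempt to rectify its own errors, then read a final
# True/False verdict from the last line of its reply.

ZERO_SHOT_PROMPT = (
    "Check the following claim for factual errors. "
    "First explain any error you find and try to correct it, "
    "then answer 'True' or 'False' on the last line.\n\nClaim: {claim}"
)

def self_check(claim, call_llm, demonstrations=()):
    """Zero-shot CoT if `demonstrations` is empty; few-shot (e.g. 3-shot) otherwise."""
    prefix = "\n\n".join(demonstrations)
    prompt = (prefix + "\n\n" if prefix else "") + ZERO_SHOT_PROMPT.format(claim=claim)
    reply = call_llm(prompt)
    return reply.strip().splitlines()[-1].strip() == "True"

# Toy stand-in LLM for illustration only:
fake_llm = lambda prompt: "No factual error found.\nTrue"
print(self_check("Water boils at 100 degrees C at sea level.", fake_llm))  # -> True
```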
# 6.1 Exp-I: Claim Extraction Evaluation

We evaluate the claim extraction module of FACTOOL on RoSE (Liu et al., 2022). We treat the reference summary as the generated text x and the reference ACUs as the golden-extracted claims. We measure the similarity between the machine-extracted (GPT-4, ChatGPT, and Flan-T5-XXL) claims {c_i^c}_{i=1..n_c} and the golden-extracted claims {c_i^g}_{i=1..n_g} using four metrics: ROUGE-1, ROUGE-2, ROUGE-L (Lin, 2004), and BERTScore. In Tab. 4, we report the average of the highest similarity between each machine-extracted claim and the corresponding golden-extracted claim in the same sample, i.e., (1/n_c) Σ_{i=1..n_c} max_j sim(c_i^c, c_j^g).

^8 We anticipate that the recently released models, gpt-3.5-turbo-0613 and gpt-4-0613, will lower the inference costs for FACTOOL. This expectation arises from their improved ability to produce structured responses, such as those in JSON format. While conducting our experiments on gpt-3.5-turbo-0301 and gpt-4-0314, we often ran into problems where the responses were not valid JSON, requiring us to rerun any samples with invalid response formats. The source code of FACTOOL will use the latest versions of ChatGPT and GPT-4.

| Model | Metric | Precision | Recall | F1-score |
|---|---|---|---|---|
| GPT-4 | ROUGE-1 | 0.7394 | 0.8758 | 0.7860 |
| GPT-4 | ROUGE-2 | 0.6304 | 0.7771 | 0.6772 |
| GPT-4 | ROUGE-L | 0.7175 | 0.8625 | 0.7667 |
| GPT-4 | BERTScore | 0.6632 | 0.7865 | 0.7175 |
| ChatGPT | ROUGE-1 | 0.7770 | 0.8285 | 0.7836 |
| ChatGPT | ROUGE-2 | 0.6520 | 0.7115 | 0.6610 |
| ChatGPT | ROUGE-L | 0.7557 | 0.8148 | 0.7655 |
| ChatGPT | BERTScore | 0.6958 | 0.7521 | 0.7174 |
| Flan-T5-XXL | ROUGE-1 | 0.6531 | 0.8928 | 0.7326 |
| Flan-T5-XXL | ROUGE-2 | 0.5609 | 0.8157 | 0.6413 |
| Flan-T5-XXL | ROUGE-L | 0.6428 | 0.8885 | 0.7237 |
| Flan-T5-XXL | BERTScore | 0.4314 | 0.6661 | 0.5408 |

Table 4: The average similarity between the claims extracted by GPT-4, ChatGPT, and Flan-T5-XXL and the golden ACUs on RoSE.

Results We demonstrate in Tab. 4 that the claims extracted by GPT-4, ChatGPT, and Flan-T5 closely match the ACUs annotated by humans, as evaluated by the ROUGE and BERTScore metrics. Note that in Exp-II we choose ChatGPT as the claim extractor for two reasons: (1) the context length of Flan-T5 (512 tokens) is too short to effectively extract claims from the lengthy responses in our dataset, and (2) ChatGPT is more cost-efficient than GPT-4 while maintaining similar effectiveness in claim extraction.

# 6.2 Exp-II: Framework Evaluation

We evaluate FACTOOL and the two Self-Check baselines on the dataset constructed for each scenario. Depending on the model used for query generation and agreement verification, we have two FACTOOL baselines:
FACTOOL powered by ChatGPT and FACTOOL powered by GPT-4. We report accuracy, recall, precision, and F1-score at both the claim and response levels.

6.2.1 Results

Tab. 5 shows the claim-level and response-level performance of FACTOOL and the self-check baselines. We make the following observations.

FACTOOL powered by GPT-4 outperforms all other baselines across all scenarios From Tab. 5, we observe that FACTOOL powered by GPT-4 outperforms all other baselines across all scenarios. It achieves an 89.09 claim-level F1 / 71.79 response-level F1 on KB-based QA, a 92.11 claim-level F1 / 92.11 response-level F1 on code generation (remember

| Task | LLM | Method | Claim Acc. | Claim R | Claim P | Claim F1 | Resp. Acc. | Resp. R | Resp. P | Resp. F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| KB-QA | ChatGPT | Self-Check (0) | 75.54 | 90.40 | 80.00 | 84.88 | 54.00 | 60.87 | 50.00 | 54.90 |
| KB-QA | ChatGPT | Self-Check (3) | 69.53 | 81.36 | 79.12 | 80.23 | 54.00 | 47.83 | 50.00 | 48.89 |
| KB-QA | ChatGPT | FACTOOL | 74.25 | 73.45 | 90.91 | 81.25 | 64.00 | 43.48 | 66.67 | 52.63 |
| KB-QA | GPT-4 | Self-Check (0) | 77.25 | 84.75 | 85.23 | 84.99 | 54.00 | 95.65 | 50.00 | 65.67 |
| KB-QA | GPT-4 | Self-Check (3) | 79.83 | 85.88 | 87.36 | 86.61 | 64.00 | 52.17 | 63.16 | 57.14 |
| KB-QA | GPT-4 | FACTOOL | 84.12 | 85.31 | 93.21 | 89.09 | 78.00 | 60.87 | 87.50 | 71.79 |
| Code | ChatGPT | Self-Check (0) | 68.29 | 99.10 | 68.33 | 80.88 | 68.29 | 99.10 | 68.33 | 80.88 |
| Code | ChatGPT | Self-Check (3) | 68.90 | 100.00 | 68.52 | 81.32 | 68.90 | 100.00 | 68.52 | 81.32 |
| Code | ChatGPT | FACTOOL | 78.05 | 89.19 | 80.49 | 84.62 | 78.05 | 89.19 | 80.49 | 84.62 |
| Code | GPT-4 | Self-Check (0) | 75.31 | 95.50 | 75.18 | 84.13 | 75.31 | 95.50 | 75.18 | 84.13 |
| Code | GPT-4 | Self-Check (3) | 77.44 | 96.40 | 76.43 | 85.26 | 77.44 | 96.40 | 76.43 | 85.26 |
| Code | GPT-4 | FACTOOL | 89.02 | 94.59 | 89.74 | 92.11 | 89.02 | 94.59 | 89.74 | 92.11 |
| Math | ChatGPT | Self-Check (0) | 84.15 | 90.24 | 91.36 | 90.80 | 57.00 | 74.47 | 53.03 | 61.95 |
| Math | ChatGPT | Self-Check (3) | 87.32 | 94.31 | 91.34 | 92.80 | 61.00 | 89.36 | 55.26 | 68.29 |
| Math | ChatGPT | FACTOOL | 97.54 | 97.56 | 99.59 | 98.56 | 78.00 | 93.62 | 69.84 | 80.00 |
| Math | GPT-4 | Self-Check (0) | 83.10 | 86.99 | 93.04 | 89.92 | 49.00 | 85.11 | 47.62 | 61.07 |
| Math | GPT-4 | Self-Check (3) | 92.61 | 96.75 | 94.82 | 95.77 | 65.00 | 89.36 | 58.33 | 70.59 |
| Math | GPT-4 | FACTOOL | 98.24 | 97.97 | 100.00 | 98.97 | 78.00 | 95.74 | 69.23 | 80.36 |
| Sci.Lit | ChatGPT | Self-Check (0) | 28.69 | 96.00 | 21.82 | 35.56 | 18.00 | 100.00 | 10.87 | 19.61 |
| Sci.Lit | ChatGPT | Self-Check (3) | 24.19 | 96.97 | 18.60 | 31.22 | 22.00 | 90.00 | 10.47 | 18.75 |
| Sci.Lit | ChatGPT | FACTOOL | 97.31 | 84.85 | 100.00 | 91.80 | 99.00 | 90.00 | 100.00 | 94.74 |
| Sci.Lit | GPT-4 | Self-Check (0) | 35.75 | 84.85 | 20.29 | 32.75 | 19.00 | 100.00 | 10.99 | 19.80 |
| Sci.Lit | GPT-4 | Self-Check (3) | 44.75 | 87.88 | 23.20 | 36.71 | 49.00 | 70.00 | 12.73 | 21.54 |
| Sci.Lit | GPT-4 | FACTOOL | 98.39 | 90.91 | 100.00 | 95.24 | 99.00 | 90.00 | 100.00 | 94.74 |
Table 5: Experimental results of FACTOOL powered by ChatGPT and FACTOOL powered by GPT-4 on KB-based QA, Code Generation, Math Problems, and Scientific Literature Review.

that claim-level factuality is considered equivalent to response-level factuality in our experiment for code generation), a 98.97 claim-level F1 / 80.36 response-level F1 on math problems, and a 95.24 claim-level F1 / 94.74 response-level F1 on scientific literature review. Each of these figures is the highest for the respective task.

FACTOOL powered by GPT-4 outperforms all self-check baselines across all scenarios From Tab. 5, we show that FACTOOL powered by GPT-4 outperforms all self-check baselines across all scenarios. Comparing FACTOOL powered by GPT-4 with Self-Check (3) powered by GPT-4, we observe: 71.79 vs. 57.14 response-level F1 on KB-based QA, 92.11 vs. 85.26 response-level F1 on code generation, 80.36 vs. 70.59 response-level F1 on math problems, and 94.74 vs. 21.54 response-level F1 on scientific literature review.

FACTOOL powered by GPT-4 outperforms FACTOOL powered by ChatGPT FACTOOL powered by GPT-4 outperforms FACTOOL powered by ChatGPT across all scenarios. This trend is especially significant in KB-QA, where query generation and agreement verification are harder for ChatGPT but relatively easier for GPT-4 (89.09 vs. 81.25 claim-level F1 and 71.79 vs. 52.63 response-level F1). In scenarios where query generation and agreement verification are relatively easy for both ChatGPT and GPT-4, the performance is similarly good.

FACTOOL powered by GPT-4 significantly outperforms all self-check baselines on scientific literature review From Tab. 5, we show that FACTOOL powered by GPT-4 significantly outperforms the self-check baselines on scientific literature review: 95.24 vs. 36.71 claim-level F1 and 94.74 vs. 21.54 response-level F1 against Self-Check (3) powered by GPT-4. Here, Google Scholar proves highly robust at its specified task of finding citations compared with the LLM itself.

Self-check models are prone to false positives and thus less sensitive in detecting errors From Tab. 5, we observe that self-check models have lower precision than FACTOOL. Comparing Self-Check (3) powered by GPT-4 with FACTOOL powered by GPT-4, we observe: 63.16 vs. 87.50 response-level precision on KB-based QA, 76.43 vs. 89.74 on code generation, 58.33 vs. 69.23 on math problems, and 12.73 vs. 100.00 on scientific literature review. These figures show that self-check models classify claims as "True" considerably more often than FACTOOL, suggesting a lower sensitivity to errors.

Self-check models powered by ChatGPT outperform FACTOOL powered by ChatGPT on KB-QA Tab. 5 shows that Self-Check (0) powered by ChatGPT outperforms FACTOOL powered by ChatGPT. Examining specific cases, we found that reasoning errors are the main reason why FACTOOL powered by ChatGPT performs worse than the self-check baselines. Even when provided with sufficient evidence to determine whether a claim is factual, the agreement verification implemented by ChatGPT can become confused. For example, for the claim "The modern-day version of fortune cookies was invented in the United States.", the reasoning of FACTOOL powered by ChatGPT is self-contradictory: "The given text is not entirely factual. The modern-day version of fortune cookies was not invented in the United States. Most people nowadays believe that fortune cookies were created by a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served tea and fortune cookies. ... is supported by the provided evidences." Detailed examples can be
found in Fig. 9 of Appendix B.

6.2.2 Performance Analysis

We take a closer look at performance in different scenarios by examining evaluated cases.

KB-based QA The fact-checking capability of FACTOOL on KB-based QA is determined by several factors: whether the search engine can return the snippets most relevant to determining the factuality of the given claim, the quality of the generated search-engine queries, and the LLM's ability to reason about the validity of the claim given the retrieved evidence. We found that FACTOOL powered by GPT-4 is especially capable in the following situations. (1) Fact-checking recent events, discoveries, or news: FACTOOL powered by GPT-4 successfully identifies false claims such as "Argentina has not won the World Cup since 1986" and "The most expensive NFT ever sold is a digital artwork called 'Everydays: The First 5000 Days'". (2) Fact-checking high-precision statistics: FACTOOL powered by GPT-4 successfully identifies false claims such as "Ireland has an obesity rate of 26.9%" and "'Everydays: The First 5000 Days' sold for 69 million". Detailed examples can be found in Fig. 10 of Appendix B.

Code Generation The fact-checking capability of FACTOOL on code generation is determined by the LLM's ability to generate high-quality test cases and potential solutions. We demonstrate that, owing to GPT-4's exceptional ability to generate such high-quality test cases and potential solutions, FACTOOL powered by GPT-4 outperforms the other baselines. For example, on "HumanEval/36", GPT-4 consistently generates high-quality solutions, which leads it to correctly identify the mistakes in the response, while ChatGPT fails to identify the mistake.
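The test-case-plus-consensus check described above can be sketched as follows; the toy response, candidate solutions, and test inputs are illustrative stand-ins, not FACTOOL's generated code.

```python
# Hedged sketch: run the response snippet on synthetic test inputs and
# compare its outputs with a majority vote over independently generated
# candidate solutions (the vote defines the expected output).
from collections import Counter

def verify_snippet(response_fn, candidate_fns, test_inputs):
    for x in test_inputs:
        votes = Counter(fn(x) for fn in candidate_fns)
        expected, _ = votes.most_common(1)[0]
        if response_fn(x) != expected:
            return False  # response disagrees with the consensus
    return True

response = lambda x: x * 2 + 1   # claimed solution (buggy on purpose)
candidates = [lambda x: x * 2, lambda x: x * 2, lambda x: x + x]
print(verify_snippet(response, candidates, [0, 3, 7]))  # -> False
```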
Detailed examples can be found in Fig. 11 and Fig. 12 of Appendix B.

Math Problems The fact-checking capability of FACTOOL on math problems is determined by the LLM's ability to generate accurate Python snippets that verify the correctness of the extracted mathematical calculations. Both FACTOOL powered by GPT-4 and FACTOOL powered by ChatGPT excel in this regard. For example, both correctly identify that 23 × 4319216 does not equal 99305768. Detailed examples can be found in Fig. 13 of Appendix B.

Scientific Literature Review The fact-checking capability of FACTOOL on scientific literature review is determined by the LLM's ability to identify whether the generated author list is a subset of the actual author list. Both FACTOOL powered by GPT-4 and FACTOOL powered by ChatGPT excel in this regard. For example, both correctly identify that the paper "The Impact of Artificial Intelligence on Employment" was not written by "Acemoglu and Restrepo". Detailed examples can be found in Fig. 14 of Appendix B.

6.2.3 Failure Analysis

To gain a comprehensive understanding of FACTOOL's performance, we analyze the cases where FACTOOL fails.

KB-based QA We summarize the following sources of error. (1) Reasoning error: although the evidence provided is sufficient and the LLM accurately finds the most relevant information, the model fails to reason about the relationship between the claim and the provided evidence. For example, for the claim "Jupiter is less dense than Saturn", FACTOOL powered by GPT-4 fails to reason about the relative relationship even though the provided evidence is sufficient. (2) Conflicting evidence: conflicts in the evidence can confuse the LLM and lead to incorrect decisions. For example, for the claim
"Jupiter has a density of 1.33 grams per cubic centimeter", there is conflicting evidence claiming that the density is 1.326 or 1.33 g/cm³. (3) Ambiguity in the claim: ambiguous descriptions and subjective adjectives can lead to incorrect decisions. For example, the claim "Fortune cookies are enjoyed by people all over the world." is ambiguous and can have different answers under different interpretations. Detailed examples can be found in Fig. 15 of Appendix B.

Code Generation Errors in code generation mainly come from two sources. (1) Limited variety in synthetic test cases: the synthetic test cases generated by LLMs may not be fully representative or sufficiently diverse. For example, in the "HumanEval/64" sample, the inputs of all generated synthetic test cases are strings containing only lowercase letters (no uppercase letters). (2) Potential errors in code generation: the generated potential solutions may contain errors or bugs. Although we implement a majority-voting system to mitigate this issue, it cannot completely eliminate the chance of bugs in the code generation process. For example, in the "HumanEval/79" sample, all the generated solutions failed to correctly compute "decimal_to_binary(0)" as "db0db". Detailed examples can be found in Fig. 16 of Appendix B.
Math Problems There are two major types of errors in factuality detection for math problems. (1) Round-off error: round-off errors can occur during numerical calculations in Python. For example, FACTOOL powered by GPT-4 incorrectly classifies the calculation "60444034 / 12 = 5037002.83" as "False". (2) Reasoning error: since the claims extracted by FACTOOL only involve mathematical calculations, FACTOOL does not verify the reasoning process of the mathematical solution. For example, for the question
"Kylar went to the store to buy glasses for his new apartment. But every second glass costs only 60% of the price. How much does he need to pay to buy 5364765 glasses?", the ChatGPT-generated response contains a reasoning error that incorrectly substitutes the total cost as "5,364,765 * 5". However, since FACTOOL only checks mathematical calculation errors, FACTOOL powered by GPT-4 did not identify the reasoning error. Detailed examples can be found in Fig. 17 of Appendix B.
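The two calculation checks discussed in this section can be sketched together: an exact comparison correctly flags 23 × 4319216 = 99305768 as wrong, while a small absolute tolerance avoids the round-off failure on 60444034 / 12 = 5037002.83. This is a minimal sketch of one way to handle rounding, not FACTOOL's actual snippet generation.

```python
# math.isclose with a small absolute tolerance accepts results that were
# merely rounded for display, while still rejecting genuinely wrong values.
import math

def verify_calculation(expression, claimed, abs_tol=0.01):
    actual = eval(expression, {"__builtins__": {}})
    return math.isclose(actual, claimed, abs_tol=abs_tol)

print(verify_calculation("23 * 4319216", 99305768))     # -> False (actual: 99341968)
print(verify_calculation("60444034 / 12", 5037002.83))  # -> True despite rounding
```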
Scientific Literature Review There are two major types of errors in factuality detection for scientific literature review. (1) Errors in title matching: title matching can sometimes be problematic due to abbreviations in the generated citations or in the retrieved title. For example, although the paper "MDMA-assisted psychotherapy for treatment of PTSD: study design and rationale for phase 3 trials based on pooled analysis of six phase 2 randomized controlled trials" exists, FACTOOL powered by GPT-4 identifies the paper title as incorrect. (2) Errors in author matching: the author-matching process is sometimes not robust. For example, although the authors of "Language Models are Unsupervised Multitask Learners" are indeed "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever", FACTOOL powered by GPT-4 identifies the author list as incorrect. Detailed examples can be found in Fig. 18 of Appendix B.

# 6.3 Exp-III:
Using FACTOOL to Evaluate the Factuality of Modern Chatbots

The purpose of developing a factuality detector is to audit generative chatbots and assess the reliability of their responses. To this end, we evaluate the factuality of modern chatbots, including GPT-4, ChatGPT, Claude-v1, Bard, and Vicuna-13B, using FACTOOL powered by GPT-4. Note that in Exp-III, we consider FACTOOL a golden evaluator, responsible for evaluating the factual accuracy of the responses generated by the different chatbots. For prompt selection, we follow the same intuition as Zhou et al. (2023): KB-QA is the most common scenario. Thus, we select 30 KB-QA prompts, 10 code prompts, 10 math prompts, and 10 scientific prompts (i.e., 3 times more KB-QA prompts than prompts from the other scenarios) to carry out this factuality evaluation on chatbots. The KB-QA prompts are collected from Zhou et al. (2023), the code prompts from HumanEval (Chen et al., 2021), and the math prompts from Gao et al. (2022b), while the scientific prompts are generated by us. Responses for these prompts are generated by each of the evaluated chatbots. We report both the claim-level and response-level accuracies for each chatbot, as evaluated by FACTOOL powered by GPT-4. Given that KB-QA responses contain significantly more claims than responses from the other scenarios, we report the weighted claim-level accuracy, with weights determined by the ratio of the number of prompts in each scenario.
In other words,

weighted_claim_level_accuracy = 3/6 × claim_level_accuracy on KB-QA
                              + 1/6 × claim_level_accuracy on Code
                              + 1/6 × claim_level_accuracy on Math
                              + 1/6 × claim_level_accuracy on Scientific

Adopting the weighted claim-level accuracy helps us provide a more holistic and fair assessment of each chatbot's factual accuracy.

Results Tab. 6 shows that GPT-4 has the best weighted claim-level factual accuracy and response-level accuracy compared to ChatGPT, Bard, Claude-v1, and Vicuna. Fig. 4 and Fig. 5 show fine-grained performance for each scenario (KB-QA, code, math, scientific). We observe that (a) GPT-4 has the best claim-level and response-level accuracy in most of the scenarios, and (b) supervised fine-tuned chatbots like Vicuna-13B perform reasonably well in more common scenarios such as KB-QA but less well in more challenging scenarios such as math, code, and scientific literature review.

| LLM | WCL Acc. | RL Acc. | Avg. Resp. Len. |
|---|---|---|---|
| GPT-4 | 75.60 | 43.33 | 196.83 |
| ChatGPT | 68.63 | 36.67 | 144.05 |
| Claude-v1 | 63.95 | 26.67 | 208.70 |
| Bard | 61.15 | 33.33 | 263.77 |
| Vicuna-13B | 50.35 | 21.67 | 207.13 |

Table 6: Factual accuracy of different chatbots evaluated by FACTOOL. WCL Acc. stands for weighted claim-level accuracy, RL Acc. for response-level accuracy, and Avg. Resp. Len. for the average response length of each chatbot. Note that we consider FACTOOL the golden evaluator of the factuality of the responses generated by each chatbot.
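The weighted average above can be sketched directly from the stated prompt ratio (30 KB-QA : 10 code : 10 math : 10 scientific); the per-scenario accuracies below are illustrative placeholders, not the numbers behind Table 6.

```python
# Weighted claim-level accuracy as defined in the text: weights follow
# the 3:1:1:1 prompt ratio across scenarios.

WEIGHTS = {"kb_qa": 3 / 6, "code": 1 / 6, "math": 1 / 6, "scientific": 1 / 6}

def weighted_claim_level_accuracy(per_scenario_accuracy):
    return sum(WEIGHTS[s] * acc for s, acc in per_scenario_accuracy.items())

# Illustrative per-scenario accuracies (placeholders):
acc = {"kb_qa": 80.0, "code": 70.0, "math": 75.0, "scientific": 65.0}
print(round(weighted_claim_level_accuracy(acc), 2))  # -> 75.0
```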
# 7 Conclusion

We introduce FACTOOL, a task- and domain-agnostic framework designed to tackle the escalating challenge of factual error detection in generative AI. We expand the conventional definition of factuality, particularly focusing on auditing the capabilities of generative AI models. Observing that (1) the generated texts of LLMs tend to be lengthy and lack a clearly defined granularity for individual facts, and that (2) explicit evidence is scarce during fact checking, we build FACTOOL as a five-step tool-augmented framework consisting of claim extraction, query generation, tool querying, evidence collection, and verification.
We demonstrate the potential of incorporating tools such as Google Search, Google Scholar, code interpreters, Python, and even LLMs themselves in factual error detection through experiments on diverse tasks such as knowledge-based QA, code generation, math problem solving, and scientific literature review writing. We believe that our holistic and adaptable framework can easily be extended to more scenarios.

Figure 4: Claim-Level Accuracy across scenarios for GPT-4, ChatGPT, Bard, Claude-v1, and Vicuna-13B.

# Acknowledgements

We thank Yixin Liu, Zhengbao Jiang, and Zhiruo Wang for useful discussion and suggestions.

# References

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.

Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2022.
Codet: Code generation with generated tests. arXiv preprint arXiv:2207.10397.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021.
Evaluating large language models trained on code.

Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128.

Figure 5: Response-Level Accuracy across scenarios for GPT-4, ChatGPT, Bard, Claude-v1, and Vicuna-13B.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022.
QAFactEval: Improved QA-based factual consistency evaluation for summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2587–2601, Seattle, United States. Association for Computational Linguistics.

Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2022a.
RARR: Researching and revising what language models say, using language models.

Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022b. PAL: Program-aided language models. arXiv preprint arXiv:2211.10435.

Rahul Jha, Reed Coke, and Dragomir Radev. 2015.
Surveyor: A system for generating coherent survey articles for scientific topics. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38.

Ryo Kamoi, Tanya Goyal, Juan Diego Rodriguez, and Greg Durrett. 2023. WiCE:
Real-world entailment for claims in Wikipedia. arXiv preprint arXiv:2303.01432.

Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics.

Amrith Krishna, Sebastian Riedel, and Andreas Vlachos. 2022.
ProoFVer: Natural logic theorem proving for fact verification. Transactions of the Association for Computational Linguistics, 10:1013–1030.

Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–
9346, Online. Association for Computational Linguistics.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022.
Solving quantitative reasoning problems with language models. Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, Yun Wang, Linjun Shou, Ming Gong, and Nan Duan. 2023. TaskMatrix.AI: Completing tasks by connecting foundation models with millions of APIs. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023.
Let's verify step by step. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland. Association for Computational Linguistics. Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaichen Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, and Graham Neubig. 2021.
ExplainaBoard: An explainable leaderboard for NLP. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 280–289, Online. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. Yixin Liu, Alexander R Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, et al. 2022.
Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. arXiv preprint arXiv:2212.07981. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023.
Self-refine: Iterative refinement with self-feedback. OpenAI. 2023. GPT-4 technical report. Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2022. Measuring and narrowing the compositionality gap in language models. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023.
Toolformer: Language models can teach themselves to use tools. John Schulman. 2023. Reinforcement learning from human feedback: Progress and challenges. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I. Wang. 2022. Natural language to code translation with execution. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3533–3546, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018a. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018b. FEVER: a large-scale dataset for fact extraction and VERification. In NAACL-HLT. Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008–5020, Online. Association for Computational Linguistics. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023.
Self-instruct: Aligning language models with self-generated instructions. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450, Vancouver, Canada. Association for Computational Linguistics.
Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2020. Reasoning over semantic-level graph for fact checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6170–6180, Online. Association for Computational Linguistics. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. LIMA: Less is more for alignment. arXiv preprint arXiv:2305.11206. # A Prompts
We list the claim extraction, query generation, and agreement verification prompts used in this paper. All the prompts listed are user prompts. We use the same system prompt "You are a brilliant assistant." [KB-Based QA] You are given a piece of text that includes knowledge claims. A claim is a statement that asserts something as true or false, which can be verified by humans. [Task] Your task is to accurately identify and extract every claim stated in the provided text. Then, resolve any coreference (pronouns or other referring expressions) in the claim for clarity. Each claim should be concise (less than 15 words) and self-contained. Your response MUST be a list of dictionaries. Each dictionary should contain the key "claim", which corresponds to the extracted claim (with all coreferences resolved). You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH "[". [Response Format] [{"claim": "Ensure that the claim is fewer than 15 words and conveys a complete idea. Resolve any coreference (pronouns or other referring expressions) in the claim for clarity."}, ...] Here are two examples: [text]:
Tomas Berdych defeated Gael Monfils 6-1, 6-4 on Saturday. The sixth-seed reaches Monte Carlo Masters final for the first time. Berdych will face either Rafael Nadal or Novak Djokovic in the final. [response]: [{"claim": "Tomas Berdych defeated Gael Monfils 6-1, 6-4"}, {"claim": "Tomas Berdych defeated Gael Monfils 6-1, 6-4 on Saturday"}, {"claim": "Tomas Berdych reaches Monte Carlo Masters final"}, {"claim": "Tomas Berdych is the sixth-seed"}, {"claim": "Tomas Berdych reaches Monte Carlo Masters final for the first time"}, {"claim": "Berdych will face either Rafael Nadal or Novak Djokovic"}, {"claim": "Berdych will face either Rafael Nadal or Novak Djokovic in the final"}] [text]: Tinder only displays the last 34 photos - but users can easily see more. Firm also said it had improved its mutual friends feature. [response]: [{"claim": "Tinder only displays the last photos"}, {"claim": "Tinder only displays the last 34 photos"}, {"claim": "Tinder users can easily see more photos"}, {"claim": "Tinder said it had improved its feature"}, {"claim": "Tinder said it had improved its mutual friends feature"}] Now complete the following: [text]: {input_text} [response]:
[Math Problems] You are given a math problem and a potential solution to the math problem. [Task] Your task is to identify all the math calculations that involve arithmetic operations between known real numbers within the potential solution. However, do not include math calculations that contain variable(s). Your response MUST be a list of dictionaries. Each dictionary should contain 2 keys - "math_calculation" and "calculated_answer", which correspond to the extracted math calculation and the calculated answer within the potential solution. You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH "[". [Response format]: [{"math_calculation": "Extracted math calculation involving real numbers within the potential solution. Do not include math calculations that contain variable(s). Do not include units such as $, %, etc.", "calculated_answer": "The calculated answer for the extracted math calculation."}, ...] Here are two examples: [math problem]: What is the area of a circle with a diameter of 10 inches? [potential solution]:
To find the area, we first calculate the radius as the diameter divided by 2, so the radius is 10/2 = 5 inches. Then, we use the formula for the area of a circle, which is πr^2. Plugging in the radius we get, Area = π * 5^2 = 78.54 square inches. [response]: [{"math_calculation": "10/2", "calculated_answer": "5"}, {"math_calculation": "π * 5^2", "calculated_answer": "78.54"}] [math problem]:
A store originally sold a shirt for $45. They are offering a 20% discount on the shirt. How much will the shirt cost now? [potential solution]: The discount on the shirt is calculated as 20% of $45, which is 0.20 * 45 = $9. The new price of the shirt after the discount is $45 - $9 = $36. [response]: [{"math_calculation": "0.20 * 45", "calculated_answer": "9"}, {"math_calculation": "45 - 9", "calculated_answer": "36"}] Now complete the following: [math problem]: {input_question} [potential solution]: {input_solution} [response]: Figure 6: Prompts for Claim Extraction
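Since the claim-extraction prompts above force the model to return a bare JSON list, the caller can parse the reply directly. A minimal sketch of that step (the variable names are illustrative, not from the paper):

```python
import json

# Hypothetical raw reply from the math claim-extraction prompt.
reply = '[{"math_calculation": "10/2", "calculated_answer": "5"}]'

# json.loads fails loudly if the model violated the response format,
# which is why the prompts insist the reply start with "[".
claims = json.loads(reply)
for c in claims:
    print(c["math_calculation"], "=", c["calculated_answer"])
```

This is also why the prompts repeatedly ban any extra notes: a single stray sentence outside the list would break the parse.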
[KB-based QA] You are a query generator designed to help users verify a given claim using search engines. Your primary task is to generate a Python list of two effective and skeptical search engine queries. These queries should assist users in crit- ically evaluating the factuality of a provided claim using search engines. You should only respond in format as described below (a Python list of queries). PLEASE STRICTLY FOLLOW THE FORMAT. DO NOT RETURN ANY- THING ELSE. START YOUR RESPONSE WITH â [â . [response format]: [â query1â , â query2â
] Here are 3 examples: [claim]: The CEO of twitter is Bill Gates. [response]: ["Who is the CEO of twitter?", "CEO Twitter"] [claim]: Michael Phelps is the most decorated Olympian of all time. sponse]: Phelps"] [claim]: ChatGPT is created by Google. GPT?", "ChatGPT"] Now complete the following: [claim]: input [response]: [Math Problems] You are given a math calculation and its corresponding calculated answer. [Task] Your task is to write an executable Python snippet that validate the accuracy of the math calculation against the calculated answer. The Python snippet should print â Trueâ if the calculated answer is correct, and â Falseâ otherwise.
2307.13528#77
2307.13528#79
2307.13528
[ "2110.14168" ]
2307.13528#79
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
Your response MUST be a dictionary with key "python_snippet", which corre- spond to the executable python snippet. [math calculation]: {math_calculation} [calculated answer]: {calculated_answer} You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH â {â . [Response format]: { "python_snippet": "An executable Python snippet that validates the accuracy of the math calculation against the calculated answer. The Python snippet should print â Trueâ if the calculated answer is correct, and â Falseâ otherwise." }
[Code Potential Solution Generation] Please solve the given coding question. Make sure that the solution is optimized and correct. You MUST use Python to solve the coding question. Your response MUST be a dictionary with keys "reasoning" and "python_solution", which correspond to the reasoning and the Python implementation of the function {entry_point}. The following is the given coding question - [coding question]: {input_question} You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH "{". [response format]: { "reasoning": "Reasoning for solution.", "python_solution": "Python implementation of the function {entry_point}. Include only the implementation of the function itself. Ensure the output of the function aligns with its specified return type." } [Code Unit Test Generation] Please generate 3 distinct function calls for the given coding question to test the functionality of the function {entry_point} that attempts to solve the provided coding question. Your response must be a dictionary with 3 keys - "function_call_1", "function_call_2", "function_call_3", which correspond to the 3 distinct function calls for function {entry_point}. The following is the given coding question - [coding question]: {input_question} You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH "{". [response format]: { "function_call_1": "First function call for function {entry_point}. Do not include anything else.", "function_call_2": "Second function call for function {entry_point}. Do not include anything else.", "function_call_3": "Third function call for function {entry_point}. Do not include anything else." } Figure 7: Prompts for Query Generation [KB-based QA] You are given a piece of text.
Your task is to identify whether there are any factual errors within the text. When you are judging the factuality of the given text, you could reference the provided evidences if needed. The provided evidences may be helpful. Some evidences may contradict each other.
You must be careful when using the evidences to judge the factuality of the given text. The response should be a dictionary with four keys - "reasoning", "factuality", "error", and "correction", which correspond to the reasoning, whether the given text is factual or not (Boolean - True or False), the factual error present in the text, and the corrected text. The following is the given text [text]: {claim} The following is the provided evidences [evidences]: {evidence} You should only respond in format as described below.
DO NOT RETURN ANYTHING ELSE. START YOUR RESPONSE WITH "{{". [response format]: {{ "reasoning": "Why is the given text factual or non-factual? Be careful when you say something is non-factual. When you say something is non-factual, you must provide multiple evidences to support your decision.", "error": "None if the text is factual; otherwise, describe the error.", "correction": "The corrected text if there is an error.", "factuality": True if the given text is factual, False otherwise. }} [Scientific Literature Review] You are provided with two inputs, a string (string1) containing several names, and a list (list1) also containing names. Your task is to assess whether all the last names mentioned in string1 are included in list1. You should only respond in format as described below. DO NOT RETURN ANYTHING ELSE. START YOUR RESPONSE WITH "{{". [response format]: {{ "reasoning":
"Explanation on whether all the last names in string1 are found within list1", "factuality": This will be True if all last names from string1 are present in list1, and False otherwise. }} Example 1: [string1]: "J. Devlin and M. Chang" [list1]: ["Devlin", "M Chang", "Kristina Toutanova"] [response]: "reasoning": "string1 contains 2 last names â Devlinâ and â Changâ .
Both of these last names are present in list1.", "factuality": True Example 2: [string1]: "Tom Brown et. al" [list1]: ["Y. Lecun", "G. Hinton"] [response]: "reasoning": "string1 contains 1 last name 'Brown'. Brown is not present in list1.", "factuality": False Complete the following: [string1]: {string1} [list1]: {list1} [response]:
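Outside the LLM prompt, the same last-name check can be approximated with plain string matching. This is a sketch, not the paper's implementation: splitting on commas and "and", and taking the final token of each segment as the last name, are simplifying assumptions that the prompt's examples happen to satisfy:

```python
def last_names_covered(string1, list1):
    # Split the name string on "and" and commas, e.g. "J. Devlin and M. Chang".
    segments = [s.strip() for s in string1.replace(" and ", ",").split(",") if s.strip()]
    # Take the final token of each segment as the candidate last name.
    last_names = [seg.split()[-1] for seg in segments]
    # A last name counts as covered if it occurs inside any entry of list1.
    return all(any(ln in entry for entry in list1) for ln in last_names)

print(last_names_covered("J. Devlin and M. Chang", ["Devlin", "M Chang", "Kristina Toutanova"]))
```

The LLM-based check remains more robust for messy strings such as "Tom Brown et. al", where naive token splitting picks the wrong word.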
Figure 8: Prompts for Agreement Verification # B Example cases of FACTOOL We list the example cases of FACTOOL in each scenario. { "claim": "The modern-day version of fortune cookies was invented in the United States.", "label": true, "with_tool_classification": false, "with_tool_reasoning": "The given text is not entirely factual. The modern-day version of fortune cookies was not invented in the United States. Most people nowadays believe that fortune cookies were created by a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served tea and fortune cookies. This is supported by the provided evidences.", "queries": ["Where were fortune cookies invented?", "History of fortune cookies in the United States"], "evidences": ["Most people nowadays believe that fortune cookies were created by a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served tea and fortune cookies. Nov 19, 2019", "Most people nowadays believe that fortune cookies were created by a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served tea and fortune cookies.
Nov 19, 2019"] } { "claim": "Sonia Sotomayor is a female US Supreme Court justice", "label": true, "with_tool_classiï¬ cation": false, "with_tool_reasoning": "The given text is partially factual. While Sonia Sotomayor is indeed a female US Supreme Court justice, the text fails to mention that she was also the ï¬ rst Latina to serve on the Supreme Court. This information is provided in both of the provided evidences.", "queries": ["Is Sonia Sotomayor a US Supreme Court justice?", "Sonia Sotomayor gender US Supreme Court"], "evidences": ["In 2009, Sonia Sotomayor broke boundaries by becoming the ï¬ rst Latina, and the third woman, to serve on the Supreme Court of the United States, the highest court in the country. Sotomayor was born in New York City to Puerto Rican parents.", "Supreme Court of the United States. Sonia Sotomayor Ë 2013 the fearless federal trial court judge who saved Major League Baseball from a ruinous 1995 strike Ë 2013 entered the record book as the ï¬ rst Hispanic and the third woman to serve on the High Court."] }
Figure 9: Reasoning errors of FACTOOL powered by ChatGPT on KB-based QA [Fact-checking recent events, discoveries, or news] { "claim": "Argentina has not won the World Cup since 1986", "label": false, "with_tool_classification": false, "with_tool_reasoning": "The given text states that Argentina has not won the World Cup since 1986. However, multiple pieces of evidence suggest that Argentina won the World Cup in 2022.", "queries": ["Argentina World Cup wins since 1986", "Last time Argentina won World Cup"], "evidences": ["Argentina is one of the most successful teams in the tournament's history, having won three World Cups: in 1978, 1986, 2022. Argentina has also been runner up three times: in 1930, 1990 and 2014. In 18 World Cup tournaments, Argentina has 47 victories in 88 matches.", "Argentina is one of the most successful teams in the tournament's history, having won three World Cups: in 1978, 1986, 2022. Argentina has also been runner up ...", "Previously, the last time Argentina won the World Cup was 1986, when it defeated Germany to win its second title in three tournaments.", "Argentina have won the FIFA World Cup three times – in 1978, 1986 and 2022.
They have reached the final six times, including a second-place ...", "The Argentines have won two FIFA World Cups, in 1978 and 1986. They have placed second three times before, in 1930, 1990, and 2014.", "Argentina last won the FIFA World Cup in 1986, defeating West Germany in the championship round, 3-2. Argentina World Cup Wins. Argentina's ..."] } { "claim":
"The most valuable NFT ever sold is a digital artwork called â Every- days: The First 5000 Daysâ ", "label": false, "with_tool_classiï¬ cation": false, "with_tool_reasoning": "The given text states that the most valuable NFT ever sold is â Everydays: The First 5000 Daysâ . According to the provided evi- dence, â The First 5000 Daysâ is the second most expensive NFT ever sold, with â Mergeâ being the most expensive one.", "queries": Days"], "evidences": ["The Most Expensive NFTs Ever Sold 1. Merge - 2014 US $91.8M. Pak - Merge. . . . 2. The First 5000 Days - 2014 US$69.3M. Beeple - 2014 The First 5000 Days. . . . 3. Clock - 2014 US$52.7M. Julian Assange & Pak - 2014 Clock. . . . 4. HUMAN ONE - 2014 US$29M. Beeple - 2014 HU- MAN ONE. . . . 5. CryptoPunk #5822 - 2014 US$23.7M. CryptoPunk #5822.", "Everydays: the First 5000 Days: Work of art.", "Everydays: the First 5000 Days is a digital work of art created by Mike Winkelmann, known profession- ally as Beeple. The work is a collage of 5000 digital images created by Winkel- mann for his Everydays series.", "Everydays: the First 5000 Days Artist: Mike Winkelmann.", "Everydays: the First 5000 Days Owner: Vignesh Sundaresan (MetaKovan).", "Everydays: the First 5000 Days Medium: Digital (JPEG)."] } [Fact-checking high-precision statistics] { "claim": "Ireland has an obesity rate of 26.9%", "label": false, "with_tool_classiï¬ cation": false, "with_tool_reasoning": "The given text states that Ireland has an obesity rate of 26.9%, but the provided evidences show different numbers. The ï¬ rst evidence doesnâ
2307.13528#89
2307.13528#91
2307.13528
[ "2110.14168" ]
2307.13528#91
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
t provide a clear percentage for obesity, but the second evidence states that the prevalence of obesity in Irish adults is currently 18%, with men at 20% and women at 16%. This contradicts the given text.", "queries": ["Ireland obesity rate statistics", "Current obesity rate in Ireland"], "evidences": ["Just under four in ten (37%) of people have a normal weight, six out of ten (37% overweight and a further 23% obese) overweight or obese. A smaller difference exists between men and women aged under 25 than those older than this.", "The prevalence of obesity in Irish adults is currently 18%, with men at 20% and women at 16%. A further 47% of men and 33% of women are overweight (BMI 25.0-29.9 kg/m2). Since 1990, obesity has more than doubled in men from 8% to 20%, and increased from 13% to 16% in women."] } { "claim": " â
Everydays: The First 5000 Days' sold for $69 million", "label": false, "with_tool_classification": false, "with_tool_reasoning": "The given text is almost factual, but the exact amount for which the artwork was sold is slightly off. Multiple evidences mention the correct amount.", "queries": ["Everydays: The First 5000 Days sale price", "$69 million Everydays artwork"], "evidences": ["The work is a collage of 5000 digital images created by Winkelmann for his Everydays series. Its associated non-fungible token (NFT) was sold for $69.3 million at Christie's in 2021, making it the most expensive non-fungible token ever.", "'Now there is a way to collect digital art.' The piece that was sold, Everydays: The First 5000 Days, is a collage of Winkelmann's work starting ...", "That day, Winkelmann, a digital artist more commonly known by the handle Beeple, sold 'Everydays: The First 5000 Days,' an NFT-linked ...", "'Everydays – The First 5000 Days,' by the artist known as Beeple, set a record for a digital artwork in a sale at Christie's.", "Artist Beeple, a North Fond du Lac native, sold a digital artwork for $69 million. Now you can see the same art in a book for a lot less.", "Beeple, aka artist Mike Winkelmann, appeared on 'The ..."] }
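The code-generation examples that follow record generated test inputs (testcases_input), alternative solutions (multi_solutions), and their execution results (exec_result). The consistency check these fields support can be sketched as follows; the function names are illustrative and FACTOOL's actual implementation may differ:

```python
from collections import Counter

def passes_consistency(multi_solutions, testcases_input):
    """Run every candidate solution on every generated test input and
    accept the last solution (the one under review) only if its output
    matches the majority output on each test case."""
    exec_result = []
    for call in testcases_input:
        row = []
        for src in multi_solutions:
            env = {}
            exec(src, env)               # define the candidate function
            row.append(eval(call, env))  # invoke it on the test input
        exec_result.append(row)
    return all(Counter(row).most_common(1)[0][0] == row[-1] for row in exec_result)

solutions = [
    "def truncate_number(number): return number - int(number)",
    "def truncate_number(number): return number - int(number)",
    "def truncate_number(number): return 0.0",  # a deviating target solution
]
print(passes_consistency(solutions, ["truncate_number(4.56)"]))
```

Here the deviating last solution disagrees with the majority, so the check fails; when the target solution agrees with the majority on all test cases, the claim is judged supported.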
Figure 10: Examples of KB-QA claims verified by FACTOOL powered by GPT-4. The queries generated, evidences collected, the reasoning process, and the final decision of FACTOOL powered by GPT-4 are reported.

{ "task_id": "HumanEval/2", "prompt":
def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into an integer part (largest integer smaller than given number) and decimals (leftover part always smaller than 1). Return the decimal part of the number. """
"entry_point": "truncate_number",
"completion":
def truncate_number(number: float) -> float:
    integer_part = int(number)
    decimal_part = number - integer_part
    return decimal_part
"result": "passed", "passed": true, "with_tool_classification": true,
"testcases_input": ["truncate_number(4.56)", "truncate_number(0.123)", "truncate_number(19.999)"],
"multi_solutions": [
def truncate_number(number: float) -> float:
    return number - int(number)
def truncate_number(number: float) -> float:
    return number - int(number)
def truncate_number(number: float) -> float:
    return number - int(number)
def truncate_number(number: float) -> float:
    integer_part = int(number)
    decimal_part = number - integer_part
    return decimal_part
],
"exec_result": [[0.5599999999999996, 0.5599999999999996, 0.5599999999999996, 0.5599999999999996], [0.123, 0.123, 0.123, 0.123], [0.9989999999999988, 0.9989999999999988, 0.9989999999999988, 0.9989999999999988]]
}
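In the code scenario, FACTOOL has no ground-truth unit tests; as the HumanEval/2 record shows, it executes the candidate completion together with several independently generated solutions on synthesized test inputs and accepts the claim when their outputs agree. A minimal sketch of that agreement check, reusing the solutions and inputs from this example (the `majority_vote_passes` helper is illustrative, not the paper's exact code):

```python
# Sketch of FACTOOL-style code verification: run the candidate completion
# and several reference solutions on generated test inputs, then accept
# the candidate only if its output matches the majority on every input.

from collections import Counter

def truncate_number_candidate(number: float) -> float:
    # The completion under verification (from the HumanEval/2 record).
    integer_part = int(number)
    decimal_part = number - integer_part
    return decimal_part

def truncate_number_ref(number: float) -> float:
    # One of the independently generated reference solutions.
    return number - int(number)

solutions = [truncate_number_ref, truncate_number_ref,
             truncate_number_ref, truncate_number_candidate]
test_inputs = [4.56, 0.123, 19.999]

def majority_vote_passes(candidate, solutions, test_inputs) -> bool:
    for x in test_inputs:
        outputs = [s(x) for s in solutions]
        majority, _ = Counter(outputs).most_common(1)[0]
        if candidate(x) != majority:
            return False
    return True

print(majority_vote_passes(truncate_number_candidate, solutions, test_inputs))
# All four implementations agree on every input, so the claim is accepted.
```

The fizz_buzz records below show the other direction: when the completion disagrees with the majority of synthesized solutions, the claim is rejected.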
{ "task_id": "HumanEval/57", "prompt":
def monotonic(l: list):
    """Return True if list elements are monotonically increasing or decreasing."""
"entry_point": "monotonic",
"completion":
def monotonic(l: list):
    if len(l) <= 1:
        return True
    increasing = all(l[i] <= l[i + 1] for i in range(len(l) - 1))
    decreasing = all(l[i] >= l[i + 1] for i in range(len(l) - 1))
    return increasing or decreasing
"result": "passed", "passed": true, "with_tool_classification": true,
"testcases_input": ["monotonic([1, 2, 3, 4, 5])", "monotonic([5, 4, 3, 2, 1])", "monotonic([1, 3, 5, 4, 7])"],
"multi_solutions": [
def monotonic(l: list):
    if len(l) < 2:
        return True
    increasing = decreasing = True
    for i in range(len(l) - 1):
        if l[i] < l[i + 1]:
            decreasing = False
        if l[i] > l[i + 1]:
            increasing = False
        if not increasing and not decreasing:
            return False
    return True
def monotonic(l: list):
    if len(l) < 2:
        return True
    direction = l[1] - l[0]
    for i in range(2, len(l)):
        if direction == 0:
            direction = l[i] - l[i - 1]
            continue
        if (l[i] - l[i - 1]) * direction < 0:
            return False
    return True
def monotonic(l: list):
    if len(l) < 2:
        return True
    direction = l[1] - l[0]
    for i in range(2, len(l)):
        if direction == 0:
            direction = l[i] - l[i - 1]
            continue
        if (l[i] - l[i - 1]) * direction < 0:
            return False
    return True
def monotonic(l: list):
    if len(l) <= 1:
        return True
    increasing = all(l[i] <= l[i + 1] for i in range(len(l) - 1))
    decreasing = all(l[i] >= l[i + 1] for i in range(len(l) - 1))
    return increasing or decreasing
],
"exec_result": [[true, true, true, true], [true, true, true, true], [false, false, false, false]]
}

Figure 11: Examples of code claims verified by FACTOOL powered by GPT-4. The queries generated, evidences collected, the reasoning process, and the final decision of FACTOOL powered by GPT-4 are reported.

[ChatGPT]
{ "task_id": "HumanEval/36", "prompt":
def fizz_buzz(n: int):
    """Return the number of times the digit 7 appears in integers less than n which are divisible by 11 or 13."""
"entry_point": "fizz_buzz",
"completion":
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if str(i).count('7') > 0:
                count += 1
    return count
"result": "failed: AssertionError: ", "passed": false,
"testcases_input": ["fizz_buzz(50)", "fizz_buzz(100)", "fizz_buzz(200)"],
"multi_solutions": [
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if '7' in str(i):
                count += 1
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if '7' in str(i):
                count += 1
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            count += str(i).count('7')
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if str(i).count('7') > 0:
                count += 1
    return count
],
"exec_result": [[0, 0, 0, 0], [2, 2, 3, 2], [5, 5, 6, 5]],
"with_tool_classification": true
}

[GPT-4]
{ "task_id": "HumanEval/36", "prompt":
def fizz_buzz(n: int):
    """Return the number of times the digit 7 appears in integers less than n which are divisible by 11 or 13."""
"entry_point": "fizz_buzz",
"completion":
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if str(i).count('7') > 0:
                count += 1
    return count
"result": "failed: AssertionError: ", "passed": false, "with_tool_classification": false,
"testcases_input": ["fizz_buzz(50)", "fizz_buzz(100)", "fizz_buzz(150)"],
"multi_solutions": [
def fizz_buzz(n: int):
    count = 0
    for i in range(1, n):
        if i % 11 == 0 or i % 13 == 0:
            count += str(i).count('7')
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(1, n):
        if i % 11 == 0 or i % 13 == 0:
            count += str(i).count('7')
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(1, n):
        if i % 11 == 0 or i % 13 == 0:
            count += str(i).count('7')
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if str(i).count('7') > 0:
                count += 1
    return count
],
"exec_result": [[0, 0, 0, 0], [3, 3, 3, 2], [4, 4, 4, 3]]
}

Figure 12: Examples of code claims verified by FACTOOL powered by ChatGPT and FACTOOL powered by GPT-4. The queries generated, evidences collected, the reasoning process, and the final decisions of FACTOOL powered by ChatGPT and FACTOOL powered by GPT-4 are reported.

[ChatGPT]
{ "math_calculation": "23 * 4319216", "calculated_answer": "99305768", "label": false, "query": {"python_snippet": "print(23 * 4319216 == 99305768)"}, "exec_result": "False", "with_tool_classification": false }
{ "math_calculation": "20/100 x $10884297.00", "calculated_answer": "2176859.40", "label": true, "query": {"python_snippet": "print(abs((20/100) * 10884297.00 - 2176859.40) < 0.01)"}, "exec_result": "True", "with_tool_classification": true }

[GPT-4]
{ "math_calculation": "23 * 4319216", "calculated_answer": "99305768", "label": false, "query": {"python_snippet": "calculation = 23 * 4319216; answer = 99305768; print(calculation == answer)"}, "exec_result": "False", "with_tool_classification": false }
{ "math_calculation": "20/100 x $10884297.00", "calculated_answer": "2176859.40", "label": true, "query": {"python_snippet": "calculation = (20/100) * 10884297.00; answer = 2176859.40; print(abs(calculation - answer) < 0.01)"}, "exec_result": "True", "with_tool_classification": true }

Figure 13: Examples of math claims verified by FACTOOL powered by ChatGPT and GPT-4. The full response of FACTOOL powered by GPT-4 is reported, including the queries generated, evidences collected, the reasoning process, and the final decision.

[ChatGPT]
{ "paper_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "paper_author(s)": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova", "paper_pub_year": "2018", "label": true, "with_tool_classification": true, "error": [] }
{ "paper_title": "The Impact of Artificial Intelligence on Employment", "paper_author(s)": "Acemoglu and Restrepo", "paper_pub_year": "2019", "label": false, "with_tool_classification": false, "error": ["wrong_paper_author(s)", "wrong_paper_pub_year"] }

[GPT-4]
{ "paper_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "paper_author(s)": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova", "paper_pub_year": "2018", "label": true, "with_tool_classification": true, "error": [] }
{ "paper_title": "The Impact of Artificial Intelligence on Employment", "paper_author(s)": "Acemoglu and Restrepo", "paper_pub_year": "2019", "label": false, "with_tool_classification": false, "error": ["wrong_paper_author(s)", "wrong_paper_pub_year"] }

Figure 14: Examples of scientific claims verified by FACTOOL powered by ChatGPT and GPT-4. The full response of FACTOOL powered by ChatGPT and GPT-4 is reported, including the queries generated, evidences collected, the reasoning process, and the final decision.

[Reasoning Error]
{ "claim": "Jupiter is less dense than Saturn", "label": false, "with_tool_classification": true,
"with_tool_reasoning": "The given text is factual because both provided evidences support the claim that Jupiter is less dense than Saturn. The first evidence provides numerical data on the densities of Jupiter and Saturn, and the second evidence confirms that Saturn is the least dense planet in the solar system.",
"queries": ["Jupiter vs Saturn density", "Which planet is less dense Jupiter or Saturn?"],
"evidences": ["The density of Jupiter is 1,33 g/cm³ whereas the density of Saturn is only 0,687 g/cm³. If Saturn were the same density as Jupiter, it would have a radius of 30.079 km.", "Jupiter is the largest planet in the solar system, but it's Saturn — the solar system's second largest planet — that takes the prize for least dense.
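The reasoning error in this last example is visible in the evidence itself: the first snippet gives Jupiter a density of 1.33 g/cm³ versus 0.687 g/cm³ for Saturn, so Jupiter is the denser planet and the claim is false, even though the model judged it supported. A direct numeric check using the figures quoted in the evidence:

```python
# Check the claim "Jupiter is less dense than Saturn" against the
# density figures quoted in the evidence (g/cm^3).
jupiter_density = 1.33
saturn_density = 0.687

claim_holds = jupiter_density < saturn_density
print(claim_holds)  # False: Jupiter is denser, so the claim is factually wrong
```

A correct chain of reasoning over the retrieved evidence would therefore classify the claim as false; the error here lies in the reasoning step, not in evidence retrieval.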