Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
The Moss project (Sun et al., 2023b) open-sourced their SFT data, which includes such honest samples. We observed that models tuned with them could learn to refuse to answer specific questions, therefore helping reduce hallucinations.

Summary & Discussion. Curating the training data is one approach for mitigating hallucinations during the SFT phase. Thanks to the acceptable volume of SFT data, it can be manually curated by human experts. Recently, we performed a preliminary human inspection and observed that some widely-used synthetic SFT data, such as Alpaca (Taori et al., 2023), contains a considerable amount of hallucinated answers due to the lack of human inspection. This calls for careful attention when researchers try to build SFT datasets based on self-instruct (Wang et al., 2023c).

Previous work also pointed out that the SFT process may inadvertently introduce hallucinations by forcing LLMs to answer questions that surpass their knowledge boundaries. Some researchers have suggested honesty-oriented SFT as a solution. However, we argue this method has two main problems. Firstly, it exhibits limited generalization capabilities towards out-of-distribution (OOD) cases. Secondly, the annotated honest samples just reflect the incompetence and uncertainty of annotators rather than those of LLMs, as annotators are unaware of LLMs' real knowledge boundaries. Such challenges make solving this issue during SFT sub-optimal.
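Concretely, the curation described above reduces to two operations on the SFT set: dropping synthetic pairs suspected to be hallucinated and mixing in honest refusal samples. The sketch below only illustrates that workflow under assumed data structures; the flagging heuristic, field names, and example records are invented for the example and are not taken from the Moss or Alpaca releases.

```python
# Illustrative SFT-data curation: filter out flagged synthetic pairs and
# append "honest" refusal samples so the model learns to decline questions
# beyond its knowledge. The data layout and the flagging function are
# assumptions made for this sketch.

def curate_sft_data(samples, is_hallucinated, honest_samples):
    """samples: list of {"instruction": str, "response": str} dicts."""
    kept = [s for s in samples if not is_hallucinated(s)]
    return kept + honest_samples

honest_samples = [
    {"instruction": "Who won the 2026 World Cup?",
     "response": "I'm sorry, I don't know the answer to that question."},
]

if __name__ == "__main__":
    data = [{"instruction": "Introduce the film 'Four Flaming Days' to me.",
             "response": "It is a captivating drama film about ..."}]
    flag = lambda s: "Four Flaming Days" in s["instruction"]  # toy heuristic only
    print(curate_sft_data(data, flag, honest_samples))
```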
Situation | Reward Value
Unhedged Correct | +1
Hedged Correct | +0.5
Uninformative | 0
Hedged Wrong | -2
Unhedged Wrong | -4

Table 8: An example of reward design for mitigating LLM hallucinations through RL (Schulman, 2023).

# 5.3 Mitigation during RLHF

Nowadays, many researchers attempt to further improve the supervised fine-tuned LLMs via reinforcement learning from human feedback (RLHF) (Fernandes et al., 2023). This process consists of two steps: 1) train a reward model (RM) as the proxy for human preference, which aims to assign an appropriate reward value to each LLM response; 2) optimize the SFT model with the reward model's feedback, by using RL algorithms such as PPO (Schulman et al., 2017).

Leveraging human feedback not only closes the gap between machine-generated content and human preference but also helps LLMs align with desired criteria or goals. One commonly used criterion today is "3H", which denotes helpful, honest, and harmless (Ouyang et al., 2022; Bai et al., 2022; Zheng et al., 2023b). The honest aspect here refers to the minimization of hallucinations in LLM responses. Current advanced LLMs, such as InstructGPT (Ouyang et al., 2022), ChatGPT (OpenAI, 2023a), GPT4 (OpenAI, 2023b), and Llama2-Chat (Touvron et al., 2023b), have collectively considered this aspect during RLHF. For example, GPT4 uses synthetic hallucination data to train the reward model and perform RL, which increases accuracy on TruthfulQA (Lin et al., 2021) from about 30% to 60%. Moreover, Lightman et al. (2023) use process supervision to detect and mitigate hallucinations for reasoning tasks, which provides feedback for each intermediate reasoning step.

As discussed in the previous section, the phenomenon of behavior cloning during the SFT stage can potentially lead to hallucinations.
Some researchers have attempted to address this issue by integrating honest samples into the original SFT data. However, this approach has certain limitations, such as unsatisfactory OOD generalization capabilities and a misalignment between human and LLM knowledge boundaries.

User Input: Introduce the film "The Only Thing" (Four Flaming Days) to me.
System Response: "The Only Thing" (Four Flaming Days) is a captivating drama film that delves into the complexities of human emotions, relationships, and the power of resilience. Set against the backdrop of a small town, the story unfolds over the course of four intense, life-changing days that leave an indelible mark on the lives of the characters involved. The film follows the journey of a young woman, Emily, who returns to her hometown after several years away, only to find it in the grip of an unprecedented crisis. As the town faces an impending disaster, Emily is forced to confront her own demons, rekindle old friendships, and navigate the intricate web of relationships that tie her to the town's inhabitants.
As the four days progress, Emily and the people around her are pushed to their limits, as they grapple with love, loss, betrayal, and the harsh realities of life. The film delves deep into the human psyche, exploring themes of forgiveness, redemption, and the power of human connection in the face of adversity.

User Input: What genre of movie is "The Only Thing" (Four Flaming Days)?
System Response: I'm sorry. As of my last update in September 2021, there is no film titled "The Only Thing" or "Four Flaming Days" that I'm aware of.
Table 9: A real example of the over-conservative phenomenon of ChatGPT (July 2023 Version). As demonstrated in this example, ChatGPT refuses to provide a fairly clear answer it already knows, specifically, the genre of "The Only Thing" being a drama film (stated in its first response).

In light of this, Schulman (2023) proposes to solve this problem during RLHF. They design a special reward function just for mitigating hallucinations, as shown in Table 8. "Unhedged/Hedged Correct/Wrong" here means the LLM provides correct or wrong answers with a positive or hesitant tone. "Uninformative" denotes safe answers like "I don't know". The core idea is to encourage LLMs to challenge the premise, express uncertainty, and admit incapability by learning from specially designed rewards.
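A minimal sketch of how the reward shaping in Table 8 could be expressed is given below. The numeric values come from the table; how a sampled response gets classified into one of the five situations (here just a string argument) is the hard part in practice and is left abstract.

```python
# Honesty-oriented reward shaping following Table 8: confident wrong answers
# are penalized far more heavily than an honest "I don't know". Classifying a
# response into one of these situations is assumed to be done elsewhere
# (e.g., by a learned reward model or human labels).

REWARD_TABLE = {
    "unhedged_correct": 1.0,
    "hedged_correct": 0.5,
    "uninformative": 0.0,
    "hedged_wrong": -2.0,
    "unhedged_wrong": -4.0,
}

def honesty_reward(situation: str) -> float:
    """Map a classified response situation to its scalar RL reward."""
    return REWARD_TABLE[situation]

# Sanity check: fabricating confidently is worse than declining to answer.
assert honesty_reward("unhedged_wrong") < honesty_reward("uninformative")
```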
This method, which we refer to as honesty-oriented RL, offers several advantages over honesty-oriented SFT. The primary benefit is that it allows LLMs to freely explore their knowledge boundaries, thereby enhancing their generalization capabilities to OOD cases. Additionally, it reduces the need for extensive human annotation and eliminates the requirement for annotators to guess the knowledge boundaries of LLMs.

Summary & Discussion. Reinforcement learning can guide LLMs in exploring their knowledge boundaries, enabling them to decline to answer questions beyond their capacity rather than fabricating untruthful responses. However, we note this approach also poses unique challenges. For instance, RL-tuned LLMs may exhibit over-conservatism due to an imbalanced trade-off between helpfulness and honesty (Ouyang et al., 2022). An example of this is illustrated in Table 9. As observed in this case, ChatGPT tends to be overly hedged and refrains from providing a clear answer that it already knows, as evidenced in another dialogue turn. This could be attributed to the unreasonable design of the reward function or the poor quality of the training data for the reward model. We hope future work can take such problems into consideration.

# 5.4 Mitigation during Inference

Compared with the aforementioned training-time mitigation approaches, mitigating hallucinations at inference time could be more cost-effective and controllable. Therefore, most existing studies focus on this direction, which we will introduce in detail in the following sections.

# 5.4.1 Designing Decoding Strategies

Decoding strategies, such as greedy decoding and beam search decoding, determine how we choose output tokens from the probability distribution generated by models (Zarrieß et al., 2021).
Lee et al. (2022) carry out a factuality assessment of content generated by LLMs using different decoding strategies. They find that nucleus sampling (a.k.a. top-p sampling) (Holtzman et al., 2019) falls short of greedy decoding in terms of factuality. They argue that this underperformance could be attributed to the randomness introduced by top-p sampling to boost diversity, which may inadvertently lead to hallucinations, since LLMs tend to fabricate information to generate diverse responses. In view of this, they introduce a decoding algorithm termed factual-nucleus sampling, which aims to strike a more effective balance between diversity and factuality by leveraging the strengths of both top-p and greedy decoding.
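The schedule underlying factual-nucleus sampling can be sketched as follows, assuming the decay-and-floor formulation summarized above: the nucleus mass p decays by a factor lambda per generated token and is clipped at a floor omega, resetting at each new sentence. The hyperparameter values are illustrative, not those of Lee et al. (2022).

```python
# Sketch of a factual-nucleus schedule: diverse sampling at the start of each
# sentence, increasingly greedy towards its end, reset at the next sentence.
# Parameter names (p, lam, omega) follow the description in the text above;
# this is an illustrative schedule, not the authors' implementation.

def factual_nucleus_p(t_in_sentence: int, p: float = 0.9,
                      lam: float = 0.9, omega: float = 0.3) -> float:
    """Top-p mass used for the t-th token of the current sentence (t starts at 0)."""
    return max(omega, p * (lam ** t_in_sentence))

# The top-p mass shrinks as the sentence continues, but never below omega.
for t in range(6):
    print(t, round(factual_nucleus_p(t), 3))
```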
Method | Timing of Using | Knowledge Source | Application Task
WebGPT (Nakano et al., 2021) | Generation-Time | Search API | QA & Reasoning & Generation
Adaptive-Retrieval (Mallen et al., 2023) | Generation-Time | Wikipedia | QA
ReACT (Yao et al., 2022) | Generation-Time | Wikipedia | QA
RETRO (Borgeaud et al., 2022) | Generation-Time | Unstructured Corpus | QA & FV
Chain-of-Knowledge (Li et al., 2023d) | Generation-Time | Structured Knowledge Base | LM & QA
RARR (Gao et al., 2023a) | Post-Processing | Search API | QA & FV & Decision
Verify-then-Edit (Zhao et al., 2023b) | Post-Processing | Wikipedia, Search API, etc. | QA
LLM-Augmenter (Peng et al., 2023a) | Post-Processing | Web documents, Databases | QA
REFEED (Yu et al., 2023b) | Post-Processing | Wikipedia | QA
CRITIC (Gou et al., 2023) | Post-Processing | Search API, Code Executor, Calculator, etc. | QA, Dialogue
FacTool (Chern et al., 2023) | Post-Processing | Search API, Code Executor, Calculator, etc. | QA & Program & Toxicity

Table 10: A summary of some recent studies on resorting to external knowledge to mitigate hallucinations. We use abbreviations for some application task names, including QA (Question Answering), FV (Fact Verification), and LM (Language Modeling).

Dhuliawala et al. (2023) develop a decoding framework known as Chain-of-Verification (COVE). This framework is based on the observation that independent verification questions typically yield more accurate facts than those presented in long-form answers. The COVE framework initially plans verification questions, and then answers these questions to ultimately produce an enhanced, revised response. Experimental results on list-based questions, closed-book QA, and long-form text generation demonstrate that COVE can effectively mitigate hallucination.

Another work, Li et al. (2023b), introduces a novel Inference-Time Intervention (ITI) method to improve the truthfulness of LLMs. This method is based on the assumption that LLMs possess latent, interpretable sub-structures associated with factuality.
The ITI method comprises two steps: 1) fitting a binary classifier on top of each attention head of the LLM to identify a set of heads that exhibit superior linear probing accuracy for answering factual questions, and 2) shifting model activations along these factuality-related directions during inference. The ITI method leads to a substantial performance improvement on the TruthfulQA benchmark (Lin et al., 2021).

Distinct from the aforementioned studies, Shi et al. (2023b) instead concentrate on the retrieval-augmentation setting. Prior research has shown that LLMs sometimes fail to adequately attend to retrieved knowledge when addressing downstream tasks, particularly when the retrieved knowledge conflicts with the parametric knowledge of LLMs (Zhou et al., 2023b; Xie et al., 2023). To address this issue, Shi et al. (2023b) propose a straightforward context-aware decoding (CAD) strategy. The core idea of CAD is to perform a contrastive ensemble of p_θ(y_t | x, c, y_<t) and p_θ(y_t | x, y_<t), where θ represents the LM, x is the input query, c is the context, y is the response, and t is the time step. p_θ(y_t | x, c, y_<t) denotes the generation probability distribution of the t-th token given the context, while p_θ(y_t | x, y_<t) denotes the distribution considering only the query. The CAD method aims to compel LLMs to pay more attention to contextual information instead of over-relying on their own parametric knowledge to make decisions. Experimental results show that CAD effectively elicits the ability of LLMs to exploit retrieved knowledge and thus reduces factual hallucinations on downstream tasks. Another work, DoLa (Chuang et al., 2023), also employs the idea of contrastive decoding to reduce hallucination. However, they contrast the generation probabilities from different layers of LLMs, as they find that linguistic and factual information is encoded in distinct sets of layers.
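The contrastive ensemble behind CAD can be sketched as the logit combination below, which up-weights evidence that is only available when the context c is present. Treat it as an illustrative reformulation of the idea rather than the authors' implementation; the two logit vectors would come from two forward passes of the same LM, with and without the retrieved context.

```python
import numpy as np

# Context-aware decoding sketch: sharpen the with-context distribution against
# the query-only distribution so tokens that are likely only because of
# parametric memory get down-weighted. The (1 + alpha) / -alpha combination is
# one common way to write such a contrastive ensemble.

def cad_distribution(logits_with_context: np.ndarray,
                     logits_without_context: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    combined = (1 + alpha) * logits_with_context - alpha * logits_without_context
    probs = np.exp(combined - combined.max())   # numerically stable softmax
    return probs / probs.sum()

# Toy vocabulary of 4 tokens: the retrieved context strongly supports token 2,
# while the model's parametric knowledge prefers token 1.
with_ctx = np.array([1.0, 0.5, 3.0, 0.2])
no_ctx = np.array([1.0, 2.5, 0.5, 0.2])
print(cad_distribution(with_ctx, no_ctx).round(3))
```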
Summary & Discussion. Designing decoding strategies to mitigate hallucinations in LLMs during inference is typically done in a plug-and-play manner. Therefore, this method is easy to deploy, making it promising for practical applications. However, most existing works in this direction require access to the token-level output probabilities, while a substantial number of current LLMs can only return generated content through limited APIs (e.g., ChatGPT). Consequently, we encourage future research in this direction to explore within a stricter black-box setting.

# 5.4.2 Resorting to External Knowledge

Using external knowledge as supplementary evidence to assist LLMs in providing truthful responses has recently become a burgeoning solution (Ren et al., 2023; Mialon et al., 2023). This approach typically consists of two steps. The first step entails accurately obtaining knowledge related to the user instructions. Once useful knowledge has been achieved, the second step involves leveraging such knowledge to guide the generation of the responses.
We provide a comprehensive review of the latest progress in this direction, focusing on the specific strategies employed in these two steps, respectively. We also present a summary of recent studies in Table 10.

Knowledge acquisition. LLMs have internalized vast amounts of knowledge into their parameters through extensive pre-training and fine-tuning, which can be referred to as parametric knowledge (Roberts et al., 2020). However, incorrect or outdated parametric knowledge can easily lead to hallucinations (Xie et al., 2023). To remedy this, researchers have proposed acquiring reliable, up-to-date knowledge from credible sources as a form of hot patching for LLMs (Lewis et al., 2020b; Li et al., 2022a). We summarize the two primary sources of such knowledge as follows.

(1) External knowledge bases. The majority of existing works retrieve information from external knowledge bases, such as large-scale unstructured corpora (Cai et al., 2021; Borgeaud et al., 2022), structured databases (Liu, 2022; Li et al., 2023d), specific websites like Wikipedia (Yao et al., 2022; Peng et al., 2023a; Li et al., 2023c; Yu et al., 2023b), or even the entire Internet (Lazaridou et al., 2022; Yao et al., 2022; Gao et al., 2023a; Liu et al., 2023c). The evidence retrieval process typically employs various sparse (e.g., BM25 (Robertson et al., 2009)) or dense (e.g., PLM-based methods (Zhao et al., 2022)) retrievers. Search engines, such as Google Search, can also be viewed as a special kind of information retriever (Nakano et al., 2021; Lazaridou et al., 2022; Yao et al., 2022; Gao et al., 2023a).
Besides, Luo et al. (2023c) propose the parameter knowledge guiding framework, which retrieves knowledge from the parametric memory of fine-tuned white-box LLMs. Feng et al. (2023) try to teach LLMs to search for relevant domain knowledge from external knowledge graphs to answer domain-specific questions.

(2) External tools. In addition to solely retrieving information from knowledge bases, there are also many other tools that can provide valuable evidence to enhance the factuality of content generated by LLMs (Mialon et al., 2023; Qin et al., 2023; Qiao et al., 2023).
For instance, FacTool (Chern et al., 2023) employs different tools to help detect hallucinations in LLMs for specific downstream tasks, such as a search engine API for knowledge-based QA, a code executor for code generation, and the Google Scholar API for scientific literature review. CRITIC (Gou et al., 2023) also enables LLMs to interact with multiple tools and revise their responses autonomously, which has been proven to effectively improve truthfulness.

Figure 4: The illustrations of two distinct methods for utilizing external knowledge to reduce hallucinations in LLMs' responses: (a) generation-time supplement and (b) post-hoc correction.

Knowledge utilization. Once relevant knowledge is obtained, it can be employed at different stages to mitigate hallucinations within LLMs. Existing methods for knowledge utilization can be roughly divided into two categories, as detailed below and illustrated in Figure 4.

(1) Generation-time supplement. The most straightforward approach to utilizing retrieved knowledge or tool feedback is to directly concatenate it with user queries before prompting LLMs (Shi et al., 2023c; Mallen et al., 2023; Ram et al., 2023). This method is both effective and easy to implement. Such knowledge is also referred to as context knowledge (Shi et al., 2023b). Existing studies have demonstrated that LLMs possess a strong capability for in-context learning (Dong et al., 2022), which enables them to extract and utilize valuable information from context knowledge to rectify nonfactual claims they previously generated.

(2) Post-hoc correction. Another common practice involves constructing an auxiliary fixer to rectify hallucinations during the post-processing stage (Cao et al., 2020; Zhu et al., 2021; Fabbri et al., 2022).
The fixer can be either another LLM (Peng et al., 2023a; Zhang et al., 2023d; Chern et al., 2023; Gou et al., 2023) or a specific small model (Chen et al., 2023a). Such fixers first interact with external knowledge sources to gather sufficient evidence, and then correct hallucinations. For example, RARR (Gao et al., 2023a) directly prompts an LLM to ask questions about the content that needs to be corrected from multiple perspectives. Then it uses search engines to retrieve relevant knowledge. The LLM-based fixer finally makes corrections based on the retrieved evidence. The Verify-then-Edit approach (Zhao et al., 2023a) aims to enhance the factuality of predictions by post-editing reasoning chains based on external knowledge sourced from Wikipedia. To achieve better performance, LLM-Augmenter (Peng et al., 2023a) prompts LLMs to summarize retrieved knowledge before feeding it into the fixer. Moreover, FacTool (Chern et al., 2023) and CRITIC (Gou et al., 2023) propose to utilize various external tools to obtain evidence for the fixer. (A minimal sketch of both utilization patterns is given below.)
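The sketch below reduces the two utilization patterns to their skeletons. The `retrieve` and `llm` callables stand in for a real retriever (BM25, a dense retriever, or a search API) and a real LLM client, and the prompt wording is an assumption for illustration rather than the prompts used by RARR, LLM-Augmenter, or FacTool.

```python
# Two knowledge-utilization skeletons: (1) prepend evidence before generation,
# (2) generate first and let a fixer LLM revise the draft against evidence.
# `retrieve` and `llm` are placeholder callables supplied by the caller.

def generation_time_supplement(query, retrieve, llm):
    evidence = "\n".join(retrieve(query))
    prompt = (f"Context:\n{evidence}\n\n"
              f"Answer the question using the context.\nQuestion: {query}")
    return llm(prompt)

def post_hoc_correction(query, draft_answer, retrieve, llm):
    evidence = "\n".join(retrieve(query))
    prompt = (f"Question: {query}\nDraft answer: {draft_answer}\n"
              f"Evidence:\n{evidence}\n"
              "Revise the draft so that every claim is supported by the evidence.")
    return llm(prompt)

if __name__ == "__main__":
    retrieve = lambda q: ["Mount Kilimanjaro is 5,895 metres tall."]
    llm = lambda p: f"[LLM output for a prompt of {len(p)} characters]"
    print(generation_time_supplement("How tall is Kilimanjaro?", retrieve, llm))
```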
Summary & Discussion. Resorting to external knowledge to mitigate hallucinations in LLMs offers several advantages. Firstly, this method circumvents the need for modifying LLMs, making it a plug-and-play and efficient solution. Secondly, it facilitates the easy transfer of proprietary knowledge (e.g., a company's internal data) and real-time updated information to LLMs. Lastly, this approach enhances the interpretability of information generated by LLMs by allowing the tracing of generation results back to the source evidence (Gao et al., 2023b; Yue et al., 2023). However, this direction also presents some remaining challenges. We discuss some of them below.

(1) Knowledge verification. In the era of LLMs, the external knowledge source could extend beyond a single document corpus or a specific website to encompass the entire Internet. However, information from the Internet is in the wild, which means it may also be fabricated, or even generated by LLMs themselves (Alemohammad et al., 2023). How to verify the authenticity of knowledge retrieved from the Internet is an open and challenging problem to be solved.
(2) Performance/efficiency of retriever/fixer. The performance of the retriever/fixer plays a vital role in ensuring the effect of hallucination mitigation. Future work may consider jointly optimising the whole working flow (retriever → LLM → fixer) via reinforcement learning (Qiao et al., 2023) or other techniques.
Besides, the efficiency of the retriever/fixer is another important factor to be considered, as the generation speed of existing LLMs is already a significant burden (Ning et al., 2023).

(3) Knowledge conflict. As introduced before, the retrieved knowledge may conflict with the parametric knowledge stored by LLMs (Qian et al., 2023). Shi et al. (2023b) reveal that LLMs may fail to sufficiently exploit retrieved knowledge when knowledge conflict happens. Xie et al. (2023) take a more cautious look at this phenomenon. How to fully utilize context knowledge is an under-explored question. For example, Liu et al. (2023d) find that the performance of retrieval-augmented LLMs significantly degrades when they must access evidence in the middle of long contexts.

# 5.4.3 Exploiting Uncertainty

Uncertainty serves as a valuable indicator for detecting and mitigating hallucinations during the inference process (Manakul et al., 2023). Typically, it refers to the confidence level of model outputs (Jiang et al., 2021; Huang et al., 2023a; Duan et al., 2023). Uncertainty can assist users in determining when to trust LLMs. Provided that the uncertainty of LLM responses can be accurately characterized, users can filter out or rectify LLMs' claims with high uncertainty, since such claims are more prone to be fabricated ones (Lin et al., 2023).

Generally speaking, methods for estimating the uncertainty of LLMs can be categorized into three types (Xiong et al., 2023), as listed below. To facilitate understanding, we also present illustrative examples for these methods in Figure 5.

Figure 5: The illustrations of three typical methods for estimating LLM uncertainty, using the query "What is the height of Mount Kilimanjaro?" as the running example: (a) the logit-based method inspects tokens with low generation probabilities, (b) the verbalize-based method asks the model to report a confidence level (0-100), and (c) the consistency-based method compares responses acquired from multiple samplings.

(1) Logit-based estimation. The first method is the logit-based method, which requires access to the model logits and typically measures uncertainty by calculating token-level probability or entropy. This method has been widely used in the machine learning community (Guo et al., 2017).
(2) Verbalize-based estimation. The second is the verbalize-based method, which involves directly requesting LLMs to express their uncertainty, such as using the following prompt: "Please answer and provide your confidence score (from 0 to 100)." This method is effective due to the impressive verbal and instruction-following capabilities of LLMs. Notably, Xiong et al. (2023) further suggest using chain-of-thought prompts (Wei et al., 2022) to enhance this method.

(3) Consistency-based estimation. The third is the consistency-based method (Wang et al., 2022; Shi et al., 2022; Zhao et al., 2023a). This method operates on the assumption that LLMs are likely to provide logically inconsistent responses to the same question when they are indecisive and hallucinating facts. (A minimal sketch of the logit-based and consistency-based variants is given after this list.)
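The logit-based and consistency-based estimates can be reduced to a few lines, as sketched below. The code assumes per-token probability distributions are available for the logit-based variant and that sampled answers can be compared by exact match for the consistency-based variant; real systems such as SELFCHECKGPT use softer consistency measures.

```python
import math
from collections import Counter

# Two toy uncertainty estimates: mean token entropy (logit-based) and majority
# agreement across sampled answers (consistency-based). Low agreement or high
# entropy suggests the claim may be hallucinated.

def token_entropy(token_probs):
    """Mean per-token entropy over a generated sequence (logit-based)."""
    ents = [-sum(p * math.log(p) for p in dist if p > 0) for dist in token_probs]
    return sum(ents) / len(ents)

def self_consistency(answers):
    """Fraction of sampled answers agreeing with the majority answer."""
    _, count = Counter(answers).most_common(1)[0]
    return count / len(answers)

samples = ["5,895 m", "5,895 m", "5,921 m", "5,895 m"]
print(self_consistency(samples))              # 0.75
print(token_entropy([[0.9, 0.1], [0.6, 0.4]]))
```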
Several recent studies have leveraged uncertainty estimation for detecting and mitigating hallucinations in LLMs. SELFCHECKGPT (Manakul et al., 2023) is the first framework to detect LLM hallucinations based on uncertainty measurement in a zero-resource and black-box setting. They employ a consistency-based approach for uncertainty estimation. A non-trivial challenge in SELFCHECKGPT is determining how to measure the consistency of different responses. Manakul et al. (2023) perform experiments with BERTScore (Zhang et al., 2019), QA-based metrics (Wu and Xiong, 2023), and n-gram metrics. They finally find that a combination of these approaches yields the best results. Mündler et al. (2023) directly utilize an additional LLM to assess whether two LLM responses are logically contradictory given the same context (Luo et al., 2023b), which means at least one of them is hallucinated. Consequently, they employ another LLM to revise such self-contradictory hallucinations from the two responses. Agrawal et al. (2023) further adopt the verbalize-based method to evaluate the hallucination rate of LLMs for fabricating references. Varshney et al. (2023), on the other hand, use the logit-based method to detect false concepts in LLMs' responses with high uncertainty. They then fix such content with auxiliary retrieval-augmented LLMs.

Besides, Zhao et al. (2023b) present a Pareto optimal self-supervision framework. This framework utilizes available programmatic supervision to assign a risk score to LLM responses, which can serve as an indicator of hallucinations. Luo et al. (2023a) introduce a pre-detection self-evaluation technique, which aims to evaluate the familiarity of LLMs with the concepts in user prompts and prevent the generation of content about those unfamiliar concepts.

Summary & Discussion. Exploiting uncertainty to identify and mitigate LLM hallucinations is a promising research direction today. Three primary approaches exist for estimating the uncertainty of LLMs, each presenting its unique challenges. Firstly, the logit-based method is becoming less applicable for modern commercial LLMs, as they are usually closed-source and black-box, rendering their output logits inaccessible. Secondly, regarding the verbalize-based method, researchers have observed that LLMs tend to display a high degree of overconfidence when expressing their confidence (Xiong et al., 2023).
Thirdly, the effective measurement of the consistency of different responses remains an unresolved issue in the consistency-based method (Manakul et al., 2023). We believe that leveraging uncertainty is crucial in developing trustworthy LLMs and encourage future research to address the aforementioned challenges in this field.
Figure 6: An example of the process of multi-agent interaction for mitigating LLM hallucinations: asked which musical currently holds the record as Broadway's fourth-longest running show, one agent initially answers "Chicago"; after the agents debate ("I see your point, but ...", "Most of your claims are right, but ..."), the final response is that, as of September 2021, "Wicked" holds the record.

# 5.5 Other Methods

In addition to the above approaches, other techniques demonstrating the potential for reducing hallucinations are shown below.

Multi-agent interaction. Some recent research has sought to address the hallucination problem in LLMs from a multi-agent perspective, wherein multiple LLMs (also known as agents) independently propose and collaboratively debate their responses to reach a single consensus, as exemplified in Figure 6. Du et al. (2023) is a pioneering work in this line. They initially developed a benchmark for assessing the factual accuracy of prominent computer scientist biographies generated by LMs. Their findings reveal that an individual LLM can easily generate hallucinated information within this benchmark; however, such hallucinations can be mitigated by engaging multiple LLMs in a debate to achieve consensus. Besides, Cohen et al. (2023) ask one LLM to generate claims (acting as EXAMINEE) and another to raise questions about these claims and check their truthfulness (acting as EXAMINER). Wang et al. (2023d) instead propose prompting a single LLM to identify, simulate, and iteratively self-collaborate with multiple personas, such as a Harry Potter fan and a Jay Chou fan. By leveraging an LLM as a cognitive synergist, it effectively reduces hallucinations with relatively low costs.
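A toy version of the debate loop is sketched below: several agents answer independently and then revise after reading one another's answers, for a fixed number of rounds. The `llm` callable and the prompt wording are assumptions for illustration, not the exact protocol of Du et al. (2023).

```python
# Toy multi-agent debate: independent drafts, then a fixed number of revision
# rounds in which each agent sees the other agents' latest answers. The `llm`
# callable is a placeholder for a real LLM client.

def debate(question, llm, n_agents=3, n_rounds=2):
    answers = [llm(f"Answer concisely: {question}") for _ in range(n_agents)]
    for _ in range(n_rounds):
        new_answers = []
        for i, own in enumerate(answers):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (f"Question: {question}\nYour previous answer: {own}\n"
                      f"Other agents answered:\n{others}\n"
                      "Update your answer if the other answers expose an error.")
            new_answers.append(llm(prompt))
        answers = new_answers
    return answers  # ideally the agents converge on a consensus answer
```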
Prompt engineering. Existing research highlights that the behavior of LLMs can significantly vary based on the prompts given by users (Si et al., 2022; Zhu et al., 2023). In terms of hallucination, users may encounter an LLM that initially responds accurately but begins to hallucinate information when using different prompts. In light of this observation, Zhang et al. (2023a) endeavour to engineer more effective prompts to mitigate hallucination. Concretely, they employ the chain-of-thought prompt (Wei et al., 2022) to compel LLMs to generate reasoning steps before providing the final answers. However, chain-of-thought may introduce some new challenges. The potential for hallucinated reasoning steps is one of them. Furthermore, a popular practice nowadays involves explicitly instructing LLMs not to disseminate false or unverifiable information when designing the "system prompt", i.e., the special messages used to steer the behavior of LLMs. The following system prompt used for Llama 2-Chat (Touvron et al., 2023b) exemplifies this approach:
"If you don't know the answer to a question, please don't share false information."

Analyzing LLMs' internal states. Azaria and Mitchell (2023) contend that LLMs may be aware of their own falsehoods, implying that their internal states could be utilized to detect hallucinations. They propose Statement Accuracy Prediction based on Language Model Activations (SAPLMA), which adds a classifier on top of each hidden layer of the LLM to determine truthfulness. Experimental results indicate that LLMs might "know" when the statements they generate are false, and SAPLMA can effectively extract such information.
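The general recipe behind such probing can be sketched as follows: collect one hidden-layer activation per statement, label each statement true or false, and fit a small classifier on top. Random vectors stand in for real LLM activations here, so the output is meaningless; the sketch only shows the shape of the pipeline, not SAPLMA itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Truthfulness probe sketch: a linear classifier over per-statement hidden
# activations. In practice the activations would come from a chosen hidden
# layer of the LLM; here they are random placeholders.

rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 768))   # one hidden vector per statement
labels = rng.integers(0, 2, size=200)       # 1 = the statement was true

probe = LogisticRegression(max_iter=1000).fit(activations, labels)
new_activation = rng.normal(size=(1, 768))
print("P(statement is true) =", probe.predict_proba(new_activation)[0, 1])
```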
The Inference-Time Intervention (ITI) method (Li et al., 2023b) is also grounded in a similar hypothesis. They further shift model activations along factuality-related heads during inference and discover that this can mitigate hallucinations. These studies suggest that "the hallucination within LLMs may be more a result of generation techniques than the underlying representation" (Agrawal et al., 2023).

Human-in-the-loop. Zhang et al. (2023c) posit that a potential cause of hallucination in LLMs could be the misalignment between knowledge and user questions, a phenomenon that is particularly prevalent in the context of retrieval-augmented generation (RAG). To address this issue, they introduce MixAlign, a human-in-the-loop framework that utilizes LLMs to align user queries with stored knowledge, and further encourages users to clarify this alignment. By refining user queries iteratively, MixAlign not only reduces hallucinations but also enhances the quality of the generated content.
User Input (EN): What is the population of Denver according to the 2020 census?
LLM Response (EN): According to the 2020 United States Census, the population of Denver, Colorado, is 715,522. [correct]
User Input (ZH): What is the population of Denver according to the 2020 census? (asked in Chinese)
LLM Response (ZH): According to the 2020 census, the population of Denver is 73,921. (answered in Chinese) [hallucinated]
User Input (EN): What is the population of Denver according to the 2020 census? Answer in Chinese.
LLM Response (ZH): According to the 2020 census, the population of Denver is 704,621. (answered in Chinese) [hallucinated]

Table 11: A real example in which ChatGPT (July 2023 Version) accurately answered a question in English conversation but presented hallucinations for the same question when communicating in Chinese (the correct population of Denver in 2020 is 715,522, according to https://en.wikipedia.org/wiki/Denver).

Optimizing model architecture. Several studies have explored modifying the architecture of LMs to mitigate hallucinations. Examples include the multi-branch decoder (Rebuffel et al., 2022) and the uncertainty-aware decoder (Xiao and Wang, 2021). Li et al. (2023g) suggest employing a bidirectional autoregressive architecture in the construction of LLMs, which enables language modeling from both left-to-right and right-to-left. They claim that this design strategy could contribute to the reduction of hallucinations by effectively leveraging bidirectional information.
# 6 Outlooks

In this section, we discuss a few unresolved challenges in the investigation of hallucinations within LLMs and offer our insights into potential future research directions.

Reliable evaluation. Although considerable effort has been dedicated to building evaluation benchmarks for quantitatively assessing hallucination in LLMs, there are still issues that need to be solved. The automatic evaluation in generation-style hallucination benchmarks cannot accurately reflect the performance or align with human annotation. Such inaccuracy is reflected in two ways: (1) the automatic metric does not perfectly align with human annotations (Lin et al., 2021; Min et al., 2023; Muhlgay et al., 2023); (2) the reliability of the automatic metric varies across texts from different domains or generated by different LLMs (Min et al., 2023), resulting in reduced robustness for generalization. Although discrimination-style benchmarks (Li et al., 2023a; Muhlgay et al., 2023) could relatively accurately evaluate a model's ability to distinguish hallucinations, the relationship between discrimination performance and generation performance is still unclear until now. These issues all need more in-depth exploration.
Multi-lingual hallucination. Existing work on LLM hallucination primarily focuses on English, despite the existence of thousands of languages in the world. We hope that LLMs can possess the ability to handle various languages uniformly. Some previous studies have investigated the performance of LLMs on multi-lingual benchmarks (Ahuja et al., 2023; Lai et al., 2023), and collectively found that their performance degenerates when generalizing to non-Latin languages. In terms of the hallucination problem, Guerreiro et al. (2023a) observe that multi-lingual LLMs predominantly struggle with hallucinations in low-resource languages in the translation task. Potential follow-up work could include systematically measuring and analyzing LLM hallucinations across a wide variety of languages. As shown in Table 11, we find that LLMs such as ChatGPT provide accurate answers in English but expose hallucinations in other languages, leading to multilingual inconsistencies. The transfer of knowledge within LLMs from high-resource languages to low-resource ones also presents an interesting and promising research direction.

Multi-modal hallucination. In an effort to improve the performance of complex multi-modal tasks, recent studies have proposed replacing the text encoder of existing vision-language models with LLMs, resulting in large vision-language models (LVLMs) (Liu et al., 2023b; Ye et al., 2023). Despite their success, some research reveals that LVLMs inherit the hallucination problem from LLMs and exhibit more severe multi-modal hallucinations compared to smaller models.
Figure 7: An example of object hallucination in LVLMs: asked "Is there a person under the tree?", the LVLM answers "Yes, there is a person under the tree," even though there is no person under the tree in the picture.

For instance, Li et al. (2023e) discuss the object hallucination of LVLMs, wherein LVLMs generate content containing objects that are inconsistent with or absent from the input image, such as the example in Figure 7. To effectively measure object hallucinations generated by LVLMs, Liu et al. (2023a) propose a GPT4-Assisted Visual Instruction Evaluation (GAVIE) benchmark. Gunjal et al. (2023) introduce a multi-modal hallucination detection dataset named M-HalDetect, further studying unfaithful descriptions and inaccurate relationships beyond object hallucinations in LVLMs. Furthermore, in addition to images, some studies have extended LLMs to other modalities such as audio (Wu et al., 2023a; Su et al., 2023) and video (Maaz et al., 2023), making it interesting to investigate hallucination in these new scenarios.
Model editing. As elaborated in § 4, hallucinations in LLMs may primarily stem from the memorization of false information or the absence of correct factual knowledge. To mitigate these issues in LLMs with minimal computational overhead, the concept of model editing has been introduced (Sinitsin et al., 2020; De Cao et al., 2021). This approach involves modifying the behavior of models in a manner that is both data- and computation-efficient. At present, there are two mainstream paradigms for model editing. The first involves the incorporation of an auxiliary sub-network (Mitchell et al., 2022; Huang et al., 2023b),
while the second entails direct modification of the original model parameters (Meng et al., 2022a,b). This technique may be instrumental in eliminating LLMs' hallucinations by purposely editing their stored factual knowledge (Lanham et al., 2023; Onoe et al., 2023). However, this emerging field still faces numerous challenges. These could include editing black-box LLMs (Murty et al., 2022), in-context model editing (Zheng et al., 2023a), and multi-hop model editing (Zhong et al., 2023), etc.
Attack/defense for inducing hallucination. As previously discussed, significant efforts have been undertaken by both researchers and companies to guarantee that LLMs produce truthful responses, ultimately improving the overall user experience. Cutting-edge commercial LLMs, such as GPT4 (OpenAI, 2023b), appear to have acquired a decent ability to generate proper responses to factuality-related queries. However, they are not invincible. Several studies show that LLMs can be manipulated using techniques like meticulously crafted jailbreak prompts to elicit arbitrary desired responses (Wei et al., 2023a; Zou et al., 2023), including hallucinations. Consequently, attacking and defending strategies for inducing hallucinations could also be a promising research direction. This is particularly important as the generation of fabricated information could potentially breach relevant laws, leading to the forced shutdown of LLM applications. This direction is also intimately tied to the robustness of existing hallucination mitigation methods.
Others. Given that the current research on hallucinations in LLMs is still in its early stages, there are also many other intriguing and promising avenues for further investigation. For instance, researchers have begun to treat LLMs as agents for open-world planning in the pursuit of AGI (Park et al., 2023; Wang et al., 2023a). Addressing the hallucination problem within the context of LLMs-as-agents presents brand-new challenges and holds considerable practical value. Besides, analyzing and tracing LLM hallucinations from the linguistic aspect is another interesting research topic. Rawte et al. (2023) show that the occurrence of LLM hallucination is closely related to linguistic nuances of the user prompts, such as readability, formality, and concreteness. We believe all these directions merit thorough exploration in future research.

# 7 Conclusion

With their strong understanding and generation capabilities in the open domain, LLMs have garnered significant attention from both academic and industrial communities. However, hallucination remains a critical challenge that impedes the practical application of LLMs. In this survey, we offer a comprehensive review of the most recent advances, primarily post the release of ChatGPT, that aim to evaluate, trace, and eliminate hallucinations within LLMs. We also delve into the existing challenges and discuss potential future directions. We aspire for this survey to serve as a valuable resource for researchers intrigued by the mystery of LLM hallucinations, thereby fostering the practical application of LLMs.

# Acknowledgments

We would like to thank Yu Wu and Yang Liu for their valuable suggestions.

# References

Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, and Siva Reddy. 2023. Evaluating correctness and faithfulness of instruction-following models for question answering. arXiv preprint arXiv:2307.16877.

Ayush Agrawal, Lester Mackey, and Adam Tauman Kalai. 2023. Do language models know when they're hallucinating references? arXiv preprint arXiv:2305.18248.
Kabir Ahuja, Rishav Hada, Millicent Ochieng, Prachi Jain, Harshita Diddee, Samuel Maina, Tanuja Ganu, Sameer Segal, Maxamed Axmed, Kalika Bali, et al. 2023. Mega: Multilingual evaluation of generative ai. arXiv preprint arXiv:2303.12528.

Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, and Kelvin Guu. 2022. Tracing knowledge in language models back to the training data. arXiv preprint arXiv:2205.11482.

Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, and Richard G Baraniuk. 2023. Self-consuming generative models go mad. arXiv preprint arXiv:2307.01850.
Amos Azaria and Tom Mitchell. 2023. The internal state of an LLM knows when it's lying. arXiv preprint arXiv:2304.13734.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. 2023. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023.

Steffen Bickel, Peter Haider, and Tobias Scheffer. 2005. Predicting sentences using n-gram language models. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 193–200.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240. PMLR.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.
Deng Cai, Yan Wang, Huayang Li, Wai Lam, and Lemao Liu. 2021. Neural machine translation with monolingual translation memory. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7307–7318.

Meng Cao, Yue Dong, Jiapeng Wu, and Jackie Chi Kit Cheung. 2020. Factual error correction for abstractive summarization models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6251–6258.
Yihan Cao, Yanbin Kang, and Lichao Sun. 2023. Instruction mining: High-quality instruction data selection for large language models. arXiv preprint arXiv:2307.06290.

Kai-Wei Chang, Vinodkumar Prabhakaran, and Vicente Ordonez. 2019. Bias and fairness in natural language processing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts.

Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2023. A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109.

Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, and Kelvin Guu. 2023a. Purr: Efficiently editing language model hallucinations by denoising language model corruptions. arXiv preprint arXiv:2305.14908.
Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explorations Newsletter, 19(2):25–35.

Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, et al. 2023b. Alpagasus: Training a better alpaca with fewer data. arXiv preprint arXiv:2307.08701.
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, and Pengfei Liu. 2023. Factool: Factuality detection in generative ai - a tool augmented framework for multi-task and multi-domain scenarios. arXiv preprint arXiv:2307.13528.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, and Pengcheng He. 2023. Dola: Decoding by contrasting layers improves factuality in large language models. arXiv preprint arXiv:2309.03883.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.

Roi Cohen, May Hamri, Mor Geva, and Amir Globerson. 2023. Lm vs lm: Detecting factual errors via cross examination. arXiv preprint arXiv:2305.13281.
Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. 2023. Free dolly: Introducing the world's first truly open instruction-tuned llm.

Leyang Cui, Yu Wu, Shujie Liu, and Yue Zhang. 2021. Knowledge enhanced fine-tuning for better handling unseen entities in dialogue generation. In EMNLP.
David Dale, Elena Voita, Loïc Barrault, and Marta R. Costa-jussà. 2023. Detecting and mitigating hallucinations in machine translation: Model internal workings alone do well, sentence similarity even better. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 36–50. Association for Computational Linguistics.
Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6491–6506.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston. 2023. Chain-of-verification reduces hallucination in large language models. arXiv preprint arXiv:2309.11495.

Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, et al. 2022. A survey for in-context learning. arXiv preprint arXiv:2301.00234.

Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, and Dongyeop Kang. 2022. Understanding iterative revision from human-written text. arXiv preprint arXiv:2203.03802.

Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325.

Jinhao Duan, Hao Cheng, Shiqi Wang, Chenan Wang, Alex Zavalny, Renjing Xu, Bhavya Kailkhura, and Kaidi Xu. 2023. Shifting attention to relevance: Towards the uncertainty estimation of large language models. arXiv preprint arXiv:2307.01379.
Esin Durmus, He He, and Mona T. Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5055–5070. Association for Computational Linguistics.

Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, and Siva Reddy. 2022. On the origin of hallucinations in conversational models: Is it the datasets or the models? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5271–5285.
Nouha Dziri, Hannah Rashkin, Tal Linzen, and David Reitter. 2021. Evaluating groundedness in dialogue systems: The BEGIN benchmark. CoRR, abs/2105.00071.

Alex Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng Wu, and Caiming Xiong. 2022. Improving factual consistency in summarization with compression-based post-editing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9149–9156.
Chao Feng, Xinyu Zhang, and Zichu Fei. 2023. Knowledge solver: Teaching LLMs to search for domain knowledge from knowledge graphs. arXiv preprint arXiv:2309.03118.

Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, Amanda Bertsch, José GC de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, et al. 2023. Bridging the gap: A survey on integrating (human) feedback for natural language generation. arXiv preprint arXiv:2305.00955.

Leo Gao, John Schulman, and Jacob Hilton. 2022. Scaling laws for reward model overoptimization.

Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2023a. Rarr: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508.
2309.01219#80
2309.01219#82
2309.01219
[ "2307.03109" ]
2309.01219#82
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Rarr: Researching and revising what language models say, using In Proceedings of the 61st language models. Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 16477â 16508. Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b. Enabling large language models to generate text with citations. arXiv preprint arXiv:2305.14627. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro- In Proceedings of the 55th Annual planners. Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 179â 188.
2309.01219#81
2309.01219#83
2309.01219
[ "2307.03109" ]
2309.01219#83
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Ismael Garrido-Muñoz, Arturo Montejo-Ráez, Fernando Martà nez-Santiago, and L Alfonso Ureña-López. 2021. A survey on bias in deep nlp. Applied Sciences, 11(7):3184. Yoav Goldberg. 2023. Reinforcement learning for language models. Github Blog. Zhibin Gou, Zhihong Shao, Yeyun Gong, Ye- long Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. 2023.
2309.01219#82
2309.01219#84
2309.01219
[ "2307.03109" ]
2309.01219#84
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Critic: Large language models can self-correct with tool-interactive critiquing. arXiv preprint arXiv:2305.11738. Nuno M Guerreiro, Duarte Alves, Jonas Walden- dorf, Barry Haddow, Alexandra Birch, Pierre Colombo, and André FT Martins. 2023a. Hal- lucinations in large multilingual translation models. arXiv preprint arXiv:2303.16104. Nuno Miguel Guerreiro, Elena Voita, and André F. T. Martins. 2023b.
2309.01219#83
2309.01219#85
2309.01219
[ "2307.03109" ]
2309.01219#85
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Looking for a needle in a haystack: A comprehensive study of hallucina- tions in neural machine translation. In Proceed- ings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 1059â 1075. Association for Computational Linguistics. Jihan Yin, and Erhan Bas. 2023. Detecting and preventing hallucinations in large vision language models. arXiv preprint arXiv:2308.06394. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International conference on machine learning, pages 1321â 1330. PMLR. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neu- ral text degeneration. In International Confer- ence on Learning Representations. Jiayang Song, Zhijie Wang, Huaming Chen, and Lei Ma. 2023a.
2309.01219#84
2309.01219#86
2309.01219
[ "2307.03109" ]
2309.01219#86
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Look be- fore you leap: An exploratory study of uncer- tainty measurement for large language models. arXiv preprint arXiv:2307.10236. 24 Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, and Zhang Xiong. 2023b. Transformer-patcher: One mistake worth one neuron. arXiv preprint arXiv:2301.09785. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, An- drea Madotto, and Pascale Fung. 2023.
2309.01219#85
2309.01219#87
2309.01219
[ "2307.03109" ]
2309.01219#87
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1â 38. Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibra- tion of language models for question answer- ing. Transactions of the Association for Com- putational Linguistics, 9:962â 977. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Lan- guage models (mostly) know what they know. arXiv preprint arXiv:2207.05221. Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy. 2023. Challenges and applications arXiv preprint of large language models. arXiv:2307.10169. Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Stanley, Duc, Oliver Richárd Nagyfi, Openassistant conversationsâ et al. 2023. democratizing large language model alignment. arXiv preprint arXiv:2304.07327. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluat- ing the factual consistency of abstractive text In Proceedings of the 2020 summarization. Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9332â
2309.01219#86
2309.01219#88
2309.01219
[ "2307.03109" ]
2309.01219#88
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
9346. As- sociation for Computational Linguistics. Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and Thien Huu Nguyen. 2023. Chatgpt be- yond english: Towards a comprehensive evalu- ation of large language models in multilingual learning. arXiv preprint arXiv:2304.05613. Zhenzhong Lan, Mingda Chen, Sebastian Good- man, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019.
2309.01219#87
2309.01219#89
2309.01219
[ "2307.03109" ]
2309.01219#89
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Albert: A lite bert for self- supervised learning of language representa- tions. In International Conference on Learning Representations. Tamera Lanham, Anna Chen, Ansh Radhakrish- nan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hub- inger, Jackson Kernion, et al. 2023. Measur- ing faithfulness in chain-of-thought reasoning. arXiv preprint arXiv:2307.13702. Angeliki Lazaridou, Elena Gribovskaya, Woj- ciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115. Ariel N Lee, Cole J Hunter, and Nataniel Ruiz.
2309.01219#88
2309.01219#90
2309.01219
[ "2307.03109" ]
2309.01219#90
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Platypus: Quick, cheap, and pow- arXiv preprint 2023. erful arXiv:2308.07317. refinement of llms. Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2019. Hallucinations in neural machine translation. Nayeon Lee, Wei Ping, Peng Xu, Mostofa Pat- wary, Pascale N Fung, Mohammad Shoeybi, Factuality en- and Bryan Catanzaro. 2022. hanced language models for open-ended text generation. Advances in Neural Information Processing Systems, 35:34586â
2309.01219#89
2309.01219#91
2309.01219
[ "2307.03109" ]
2309.01219#91
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
34599. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. Bart: Denoising sequence-to-sequence language generation, pre-training for natural In Proceed- translation, and comprehension. ings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, pages 7871â 7880.
2309.01219#90
2309.01219#92
2309.01219
[ "2307.03109" ]
2309.01219#92
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020b. Retrieval- augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Pro- cessing Systems, 33:9459â 9474. 25 Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. 2022a. A survey on retrieval- arXiv preprint augmented text generation. arXiv:2202.01110. Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian- Yun Nie, and Ji-Rong Wen. 2023a.
2309.01219#91
2309.01219#93
2309.01219
[ "2307.03109" ]
2309.01219#93
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Halueval: A large-scale hallucination evaluation bench- mark for large language models. arXiv preprint arXiv:2305.11747. Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2022b. Pretrained lan- guage models for text generation: A survey. arXiv preprint arXiv:2201.05273. Kenneth Li, Oam Patel, Fernanda Viégas, and Martin Wattenberg. Hanspeter Pfister, 2023b. Inference-time intervention: Eliciting truthful answers from a language model. arXiv preprint arXiv:2306.03341. Miaoran Li, Baolin Peng, and Zhu Zhang. 2023c. Self-checker: Plug-and-play modules for fact- checking with large language models. arXiv preprint arXiv:2305.14623. Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, and Qun Liu. 2022c. How pre- trained language models capture factual knowl- edge? a causal-inspired analysis. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1720â 1732. Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Lidong Bing, Shafiq Joty, and Soujanya Poria. 2023d.
2309.01219#92
2309.01219#94
2309.01219
[ "2307.03109" ]
2309.01219#94
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Chain of knowledge: A framework for grounding large language mod- arXiv els with structured knowledge bases. preprint arXiv:2305.13269. Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023e. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355. Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Al- lie Del Giorno, Suriya Gunasekar, and Yin Tat Textbooks are all you need Lee. 2023f. arXiv preprint ii: phi-1.5 technical report. arXiv:2309.05463. Zuchao Li, Shitou Zhang, Hai Zhao, Yifei Yang, and Dongjie Yang. 2023g. Batgpt: A bidirectional autoregessive talker from gener- arXiv preprint ative pre-trained transformer. arXiv:2307.00360.
2309.01219#93
2309.01219#95
2309.01219
[ "2307.03109" ]
2309.01219#95
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Letâ s verify step by step. arXiv preprint arXiv:2305.20050. Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. In Text summa- rization branches out, pages 74â 81.
2309.01219#94
2309.01219#96
2309.01219
[ "2307.03109" ]
2309.01219#96
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Stephanie Lin, Jacob Hilton, and Owain Evans. 2021. Truthfulqa: Measuring how mod- els mimic human falsehoods. arXiv preprint arXiv:2109.07958. Zhen Lin, Shubhendu Trivedi, and Jimeng Sun. 2023. Generating with confidence: Uncertainty quantification for black-box large language models. arXiv preprint arXiv:2305.19187. Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Dâ Autume Cyprien De Masson, Tim Scholtes, Manzil Zaheer, Susannah Young, et al. 2022. Streamingqa: A benchmark for adaptation to new knowledge over time in question answering In International Conference on Ma- models. chine Learning, pages 13604â 13622. PMLR. Jianfeng and Lijuan Wang. Wang, Yaser Yacoob, 2023a. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023b. Visual instruction tuning. arXiv preprint arXiv:2304.08485. Jerry Liu. 2022. LlamaIndex. Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, and Ji-Rong Wen. 2023c. Reta-llm: A retrieval-augmented large language model toolkit. arXiv preprint arXiv:2306.05212. Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni,
2309.01219#95
2309.01219#97
2309.01219
[ "2307.03109" ]
2309.01219#97
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
26 and Percy Liang. 2023d. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172. Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, and Bill Dolan. 2022. A token-level reference-free halluci- nation detection benchmark for free-form text generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6723â 6737. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly opti- mized bert pretraining approach. arXiv preprint arXiv:1907.11692. and Fenglong Ma. 2023a. Zero-resource hallucination preven- tion for large language models. arXiv preprint arXiv:2309.02654. Zheheng Luo, Qianqian Xie, and Sophia Anani- adou. 2023b. Chatgpt as a factual inconsis- tency evaluator for abstractive text summariza- tion. arXiv preprint arXiv:2303.15621. Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023c. Augmented large lan- guage models with parametric knowledge guid- ing. arXiv preprint arXiv:2305.04757. Kelvin Luu, Daniel Khashabi, Suchin Gururan- gan, Karishma Mandyam, and Noah A Smith. 2022. Time waits for no one! analysis and In Pro- challenges of temporal misalignment. ceedings of the 2022 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, pages 5944â 5958.
2309.01219#96
2309.01219#98
2309.01219
[ "2307.03109" ]
2309.01219#98
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. 2023. Video- chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424. Alex Mallen, Akari Asai, Victor Zhong, Ra- jarshi Das, Daniel Khashabi, and Hannaneh Ha- jishirzi. 2023. When not to trust language mod- Investigating effectiveness of parametric els: and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802â 9822. Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. Selfcheckgpt:
2309.01219#97
2309.01219#99
2309.01219
[ "2307.03109" ]
2309.01219#99
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Zero-resource black-box hallucination detection for genera- arXiv preprint tive large language models. arXiv:2303.08896. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan T. McDonald. 2020. On faithful- ness and factuality in abstractive summariza- tion. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 1906â 1919. Association for Computa- tional Linguistics. Nick McKenna, Tianyi Li, Liang Cheng, Moham- mad Javad Hosseini, Mark Johnson, and Mark Steedman. 2023. Sources of hallucination by large language models on inference tasks. arXiv preprint arXiv:2305.14552. Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022a.
2309.01219#98
2309.01219#100
2309.01219
[ "2307.03109" ]
2309.01219#100
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Locating and editing factual associations in gpt. Advances in Neu- ral Information Processing Systems, 35:17359â 17372. Kevin Meng, Arnab Sen Sharma, Alex Ando- nian, Yonatan Belinkov, and David Bau. 2022b. Mass-editing memory in a transformer. arXiv preprint arXiv:2210.07229. Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. 2023. Augmented language models: a survey. arXiv preprint arXiv:2302.07842. Tomas Mikolov, Martin Karafiát, Lukas Bur- get, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech.
2309.01219#99
2309.01219#101
2309.01219
[ "2307.03109" ]
2309.01219#101
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Makuhari. Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Os- car Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. 2021. Recent advances in natural lan- guage processing via large pre-trained language models: A survey. ACM Computing Surveys. 27 Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Ha- jishirzi. 2023.
2309.01219#100
2309.01219#102
2309.01219
[ "2307.03109" ]
2309.01219#102
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. arXiv preprint arXiv:2305.14251. Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D Manning, and Chelsea Finn. 2022. Memory-based model editing at scale. In International Conference on Machine Learn- ing, pages 15817â 15831. PMLR. Elaraby Mohamed, Lu Mengyin, Dunn Jacob, Zhang Xueying, Wang Yu, and Liu Shizhu. 2023.
2309.01219#101
2309.01219#103
2309.01219
[ "2307.03109" ]
2309.01219#103
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Halo: Estimation and reduction of hal- lucinations in open-source weak large language models. arXiv preprint arXiv:2308.11764. Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, and Yoav Shoham. 2023. Generating bench- marks for factuality evaluation of language models. arXiv preprint arXiv:2307.06908. Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin Vechev. 2023. Self-contradictory hal- lucinations of large language models: Evalua- tion, detection and mitigation. arXiv preprint arXiv:2305.15852. Shikhar Murty, Christopher Manning, Scott Lund- berg, and Marco Tulio Ribeiro. 2022. Fixing model bugs with natural language patches. In Proceedings of the 2022 Conference on Empir- ical Methods in Natural Language Processing, pages 11600â 11613. Reiichiro Nakano, Jacob Hilton, Suchir Bal- aji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. We- Browser-assisted question-answering bgpt: arXiv preprint with human feedback. arXiv:2112.09332. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural net- work based sequence model for extractive sum- marization of documents. In Proceedings of the AAAI conference on artificial intelligence. Courtney Napoles, Keisuke Sakaguchi, and Joel Jfleg: A fluency corpus and benchmark for grammatical error correction. In Proceedings of the 15th Conference of the Eu- ropean Chapter of the Association for Compu- tational Linguistics: Volume 2, Short Papers, pages 229â 234.
2309.01219#102
2309.01219#104
2309.01219
[ "2307.03109" ]
2309.01219#104
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Roberto Navigli, Simone Conia, and Björn Ross. 2023. Biases in large language models: Ori- gins, inventory and discussion. ACM Journal of Data and Information Quality. Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gus- tavo Hernández à brego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, and Yinfei Yang. 2022.
2309.01219#103
2309.01219#105
2309.01219
[ "2307.03109" ]
2309.01219#105
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 9844â 9855. Association for Com- putational Linguistics. Xuefei Ning, Zinan Lin, Zixuan Zhou, Huazhong Yang, and Yu Wang. 2023.
2309.01219#104
2309.01219#106
2309.01219
[ "2307.03109" ]
2309.01219#106
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Skeleton-of- thought: Large language models can do parallel decoding. arXiv preprint arXiv:2307.15337. Yasumasa Onoe, Michael JQ Zhang, Shankar Pad- manabhan, Greg Durrett, and Eunsol Choi. 2023. Can lms learn new entities from de- scriptions? challenges in propagating injected knowledge. arXiv preprint arXiv:2305.01651. OpenAI. 2023a. ChatGPT. https:// openai.com/blog/chatgpt. OpenAI. 2023b. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022.
2309.01219#105
2309.01219#107
2309.01219
[ "2307.03109" ]
2309.01219#107
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Training language models to follow instructions with human feed- back. Advances in Neural Information Process- ing Systems, 35:27730â 27744. Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In Proceedings of the 2020 Conference on Empir- ical Methods in Natural Language Processing (EMNLP), pages 1173â
2309.01219#106
2309.01219#108
2309.01219
[ "2307.03109" ]
2309.01219#108
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
1186. Joon Sung Park, Joseph C Oâ Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442. Adam Pauls and Dan Klein. 2011. Faster and smaller n-gram language models. In Proceed- ings of the 49th annual meeting of the Asso- ciation for Computational Linguistics: Human Language Technologies, pages 258â 267. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cap- pelli, Hamza Alobeidli, Baptiste Pannier, Ebte- sam Almazrouei, and Julien Launay. 2023. The refinedweb dataset for falcon llm: outperform- ing curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116. Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. 2023a.
2309.01219#107
2309.01219#109
2309.01219
[ "2307.03109" ]
2309.01219#109
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Check your facts and try again: Improving large language models with external knowl- edge and automated feedback. arXiv preprint arXiv:2302.12813. Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023b. In- arXiv preprint struction tuning with gpt-4. arXiv:2304.03277. Ethan Perez, Sam Ringer, KamilË e LukoÅ¡i¯utË e, Ka- rina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. 2022. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251. Xiao Pu, Mingqi Gao, and Xiaojun Wan. 2023. Summarization is (almost) dead. arXiv preprint arXiv:2309.09558. Cheng Qian, Xinran Zhao, and Sherry Tong- shuang Wu. 2023. " merge conflicts!" ex- ploring the impacts of external distractors to parametric knowledge graphs. arXiv preprint arXiv:2309.08594. Shuofei Qiao, Honghao Gui, Huajun Chen, and Ningyu Zhang. 2023. Making language mod- els better tool learners with execution feedback. arXiv preprint arXiv:2305.13068.
2309.01219#108
2309.01219#110
2309.01219
[ "2307.03109" ]
2309.01219#110
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
28 Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. 2023. Tool learning with foundation models. arXiv preprint arXiv:2304.08354. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language pro- cessing:
2309.01219#109
2309.01219#111
2309.01219
[ "2307.03109" ]
2309.01219#111
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
A survey. Science China Technolog- ical Sciences, 63(10):1872â 1897. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised mul- titask learners. OpenAI blog, 1(8):9. Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernan- dez, Esin Durmus, Evan Hubinger, Jackson Kernion, KamilË
2309.01219#110
2309.01219#112
2309.01219
[ "2307.03109" ]
2309.01219#112
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
e LukoÅ¡i¯utË e, et al. 2023. Ques- tion decomposition improves the faithfulness of model-generated reasoning. arXiv preprint arXiv:2307.11768. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485â 5551. Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton- In-context Brown, and Yoav Shoham. 2023. arXiv retrieval-augmented language models. preprint arXiv:2302.00083. Vipula Rawte, Prachi Priya, SM Tonmoy, SM Za- man, Amit Sheth, and Amitava Das. 2023. Ex- ploring the relationship between llm hallucina- tions and prompt linguistic nuances: Readabil- ity, formality, and concreteness. arXiv preprint arXiv:2309.11064. Clément Rebuffel, Marco Roberti, Laure Soulier, Geoffrey Scoutheeten, Rossella Cancelliere, and Patrick Gallinari. 2022. Controlling hal- lucinations at word level in data-to-text gener- ation. Data Mining and Knowledge Discovery, pages 1â 37. Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua 29 Wu, Ji-Rong Wen, and Wang Haifeng. 2023.
2309.01219#111
2309.01219#113
2309.01219
[ "2307.03109" ]
2309.01219#113
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Investigating the factual knowledge boundary of large language models with retrieval aug- mentation. arXiv preprint arXiv:2307.11019. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empir- ical Methods in Natural Language Processing (EMNLP), pages 5418â 5426. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework:
2309.01219#112
2309.01219#114
2309.01219
[ "2307.03109" ]
2309.01219#114
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Bm25 and beyond. Foundations and Trends® in In- formation Retrieval, 3(4):333â 389. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176b-parameter open-access mul- arXiv preprint tilingual arXiv:2211.05100. John Schulman. 2023. Reinforcement learning from human feedback: Progress and challenges. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Prox- arXiv imal policy optimization algorithms. preprint arXiv:1707.06347. Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H. Chi, Nathanael Schärli, and Denny Zhou. 2023a. Large lan- guage models can be easily distracted by irrel- In Proceedings of the 40th In- evant context. ternational Conference on Machine Learning, volume 202, pages 31210â 31227.
2309.01219#113
2309.01219#115
2309.01219
[ "2307.03109" ]
2309.01219#115
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I. Wang. 2022. Natural language to code translation with ex- In Proceedings of the 2022 Confer- ecution. ence on Empirical Methods in Natural Lan- guage Processing, pages 3533â 3546. Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, and Scott Wen-tau Yih. 2023b. Trusting your evidence: Halluci- nate less with context-aware decoding. arXiv preprint arXiv:2305.14739. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023c. Replug: Retrieval-augmented black-box language mod- els. arXiv preprint arXiv:2301.12652. Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuo- hang Wang, Jianfeng Wang, Jordan Boyd- Prompt- Graber, and Lijuan Wang. 2022. arXiv preprint ing gpt-3 to be reliable. arXiv:2210.09150. Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitriy Pyrkin, Sergei Popov, and Artem Babenko. 2020. Editable neural networks. arXiv preprint arXiv:2004.00345. Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. 2023. Pandagpt: One arXiv model to instruction-follow them all. preprint arXiv:2305.16355. Kai Sun, Yifan Ethan Xu, Hanwen Zha, Yue Liu, and Xin Luna Dong. 2023a.
2309.01219#114
2309.01219#116
2309.01219
[ "2307.03109" ]
2309.01219#116
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Head-to-tail: How knowledgeable are large language models (llm)? aka will llms replace knowledge graphs? arXiv preprint arXiv:2308.10168. Tianxiang Sun, Yunfan Shao, Hong Qian, Xuan- jing Huang, and Xipeng Qiu. 2022. Black-box tuning for language-model-as-a-service. In In- ternational Conference on Machine Learning, pages 20841â 20855. PMLR. Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xi- aogui Yang, Lingling Wu, Zhangyue Yin, Xu- anjing Huang, and Xipeng Qiu. 2023b.
2309.01219#115
2309.01219#117
2309.01219
[ "2307.03109" ]
2309.01219#117
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Moss: Training conversational language models from synthetic data. Alex Tamkin, Kunal Handa, Avash Shrestha, and Noah Goodman. 2022. Task ambiguity in hu- mans and language models. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction- 2023. following llama model. https://github. com/tatsu-lab/stanford_alpaca.
2309.01219#116
2309.01219#118
2309.01219
[ "2307.03109" ]
2309.01219#118
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
30 Faraz Torabi, Garrett Warnell, and Peter Stone. 2018. Behavioral cloning from observation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4950â 4957. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timo- thée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Pe- ter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b.
2309.01219#117
2309.01219#119
2309.01219
[ "2307.03109" ]
2309.01219#119
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. and Logesh Kumar Umapathi, Ankit Pal, Malaikannan Sankarasubbu. 2023. Med- halt: Medical domain hallucination test arXiv preprint for large language models. arXiv:2307.15343. Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, and Dong Yu. 2023.
2309.01219#118
2309.01219#120
2309.01219
[ "2307.03109" ]
2309.01219#120
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
A stitch in time saves nine: Detecting and mitigating hallu- cinations of llms by validating low-confidence generation. arXiv preprint arXiv:2307.03987. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. 2017. At- tention is all you need. Advances in neural in- formation processing systems, 30. Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. arXiv preprint arXiv:2005.03642. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023a.
2309.01219#119
2309.01219#121
2309.01219
[ "2307.03109" ]
2309.01219#121
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Voy- ager: An open-ended embodied agent with arXiv preprint large language models. arXiv:2305.16291. Hongmin Wang. 2019. Revisiting challenges in data-to-text generation with fact grounding. In Proceedings of the 12th International Confer- ence on Natural Language Generation, pages 311â 322. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022.
2309.01219#120
2309.01219#122
2309.01219
[ "2307.03109" ]
2309.01219#122
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Self-consistency improves chain of thought rea- soning in language models. In The Eleventh In- ternational Conference on Learning Represen- tations. Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. 2023b. How far can camels go? exploring the state of instruc- tion tuning on open resources. arXiv preprint arXiv:2306.04751. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023c.
2309.01219#121
2309.01219#123
2309.01219
[ "2307.03109" ]
2309.01219#123
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Self-instruct: Aligning language models with self-generated In Proceedings of the 61st An- instructions. nual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 13484â 13508. Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2023d. Unleashing cognitive synergy in large language models: A task-solving agent through multi- arXiv preprint persona self-collaboration. arXiv:2307.05300. Alexander Wei, Nika Haghtalab, and Jacob Stein- hardt. 2023a.
2309.01219#122
2309.01219#124
2309.01219
[ "2307.03109" ]
2309.01219#124
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Jailbroken: How does llm safety training fail? arXiv preprint arXiv:2307.02483. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. In In- ternational Conference on Learning Represen- tations. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Pro- cessing Systems, 35:24824â 24837. Jerry Wei, Da Huang, Yifeng Lu, Denny Zhou, and Quoc V Le. 2023b.
2309.01219#123
2309.01219#125
2309.01219
[ "2307.03109" ]
2309.01219#125
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Simple synthetic data reduces sycophancy in large language models. arXiv preprint arXiv:2308.03958. 31 Alexander R Fabbri Chien-Sheng Wu and Wen- hao Liu Caiming Xiong. 2023. Qafacteval: Im- proved qa-based factual consistency evaluation for summarization. Jian Wu, Yashesh Gaur, Zhuo Chen, Long Zhou, Yimeng Zhu, Tianrui Wang, Jinyu Li, Shu- jie Liu, Bo Ren, Linquan Liu, et al. 2023a. On decoder-only architecture for speech-to-text and large language model integration. arXiv preprint arXiv:2307.03917. Weiqi Wu, Chengyue Jiang, Yong Jiang, Pengjun Xie, and Kewei Tu. 2023b. Do plms know and understand ontological knowledge? arXiv preprint arXiv:2309.05936. Yijun Xiao and William Yang Wang. 2021. On hallucination and predictive uncertainty in con- ditional language generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics:
2309.01219#124
2309.01219#126
2309.01219
[ "2307.03109" ]
2309.01219#126
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Main Volume, pages 2734â 2744. Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, and Yu Su. 2023. Adaptive chameleon or stub- born sloth: Unraveling the behavior of large language models in knowledge conflicts. arXiv preprint arXiv:2305.13300. Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. 2023. Can llms express their uncertainty? an empir- ical evaluation of confidence elicitation in llms. arXiv preprint arXiv:2306.13063. Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. 2023.
2309.01219#125
2309.01219#127
2309.01219
[ "2307.03109" ]
2309.01219#127
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Baize: An open-source chat model with parameter-efficient tuning on self- chat data. arXiv preprint arXiv:2304.01196. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Repre- sentations. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, An- wen Hu, Pengcheng Shi, Yaya Shi, et al. 2023. mplug-owl: Modularization empowers large arXiv language models with multimodality. preprint arXiv:2304.14178. Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. 2023. Do large language models know what they donâ t know? arXiv preprint arXiv:2305.18153. Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin Lv, Hao Peng, Zi- jun Yao, Xiaohan Zhang, Hanming Li, et al. 2023a.
2309.01219#126
2309.01219#128
2309.01219
[ "2307.03109" ]
2309.01219#128
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Kola: Carefully benchmarking world arXiv knowledge of large language models. preprint arXiv:2306.09296. Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, and Ashish Sabharwal. 2023b. Improving language models via plug-and- arXiv preprint play retrieval arXiv:2305.14002. Xiang Yue, Boshi Wang, Kai Zhang, Ziru Chen, Yu Su, and Huan Sun. 2023. Automatic eval- uation of attribution by large language models. arXiv preprint arXiv:2305.06311.
2309.01219#127
2309.01219#129
2309.01219
[ "2307.03109" ]
2309.01219#129
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Sina Zarrieà , Henrik Voigt, and Simeon Schüz. 2021. Decoding methods in neural language generation: a survey. Information, 12(9):355. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained In The Eleventh International Confer- model. ence on Learning Representations. Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu. 2023.
2309.01219#128
2309.01219#130
2309.01219
[ "2307.03109" ]
2309.01219#130
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
AlignScore: Evaluating factual con- sistency with a unified alignment function. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 11328â 11348. Muru Zhang, Ofir Press, William Merrill, Al- isa Liu, and Noah A Smith. 2023a. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534. Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, et al. 2023b. Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792. Shuo Zhang, Liangming Pan, Junzhou Zhao, and William Yang Wang. 2023c. Mitigating language model hallucination with interactive question-knowledge alignment. arXiv preprint arXiv:2305.13669. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In Inter- national Conference on Learning Representa- tions. Xuchao Zhang, Menglin Xia, Camille Couturier, Guoqing Zheng, Saravan Rajmohan, and Vic- tor Ruhle. 2023d. Hybrid retrieval-augmented generation for real-time composition assistance. arXiv preprint arXiv:2308.04215. Ruochen Zhao, Xingxuan Li, Shafiq Joty, Cheng- wei Qin, and Lidong Bing. 2023a. Verify-and- edit: A knowledge-enhanced chain-of-thought framework. arXiv preprint arXiv:2305.03268. Theodore Zhao, Mu Wei, J Samuel Preston, and Hoifung Poon. 2023b. Automatic calibration and error correction for large language mod- els via pareto optimal self-supervision. arXiv preprint arXiv:2306.16564. Wayne Xin Zhao, Jing Liu, Ruiyang Ren, and Ji- Rong Wen. 2022.
2309.01219#129
2309.01219#131
2309.01219
[ "2307.03109" ]
2309.01219#131
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Dense text retrieval based on pretrained language models: A survey. arXiv preprint arXiv:2211.14876. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023c. A survey of large language models. arXiv preprint arXiv:2303.18223. Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, and Baobao Chang. 2023a. Can we edit factual knowl- arXiv preprint edge by in-context learning? arXiv:2305.12740. Rui Zheng, Shihan Dou, Songyang Gao, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Limao Xiong, Lu Chen, et al. 2023b. Se- crets of rlhf in large language models part i: Ppo. arXiv preprint arXiv:2307.04964. Shen Zheng, Jie Huang, and Kevin Chen-Chuan Chang. 2023c. Why does chatgpt fall short in providing truthful answers. arXiv preprint arXiv:2304.10513.
2309.01219#130
2309.01219#132
2309.01219
[ "2307.03109" ]
2309.01219#132
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
32 Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir R. Radev. 2021. Qm- sum: A new benchmark for query-based multi- In Proceed- domain meeting summarization. ings of the 2021 Conference of the North Amer- ican Chapter of the Association for Compu- tational Linguistics: Human Language Tech- nologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5905â 5921. Association for Com- putational Linguistics. Zexuan Zhong, Zhengxuan Wu, Christopher D Manning, Christopher Potts, and Danqi Chen. 2023. Mquake: Assessing knowledge edit- ing in language models via multi-hop questions. arXiv preprint arXiv:2305.14795. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023a. Lima: arXiv preprint Less is more for alignment. arXiv:2305.11206. Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. 2023b. Context-faithful prompt- ing for large language models. arXiv preprint arXiv:2303.11315. Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. 2021. Enhancing factual con- sistency of abstractive summarization. In Pro- ceedings of the 2021 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, pages 718â 733. Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et al. 2023. Promptbench:
2309.01219#131
2309.01219#133
2309.01219
[ "2307.03109" ]
2309.01219#133
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Towards evaluating the ro- bustness of large language models on adversar- ial prompts. arXiv preprint arXiv:2306.04528. Andy Zou, Zifan Wang, J Zico Kolter, and Matt Fredrikson. 2023. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043. 33